Sample records for fixed problem size

  1. Scalable problems and memory bounded speedup

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Ni, Lionel M.

    1992-01-01

    In this paper, three models of parallel speedup are studied: fixed-size speedup, fixed-time speedup, and memory-bounded speedup. The latter two consider the relationship between speedup and problem scalability. Two sets of speedup formulations are derived for these three models. One set considers uneven workload allocation and communication overhead and gives a more accurate estimation. The other set considers a simplified case and provides a clear picture of the impact of the sequential portion of an application on the possible performance gain from parallel processing. The simplified fixed-size speedup is Amdahl's law. The simplified fixed-time speedup is Gustafson's scaled speedup. The simplified memory-bounded speedup contains both Amdahl's law and Gustafson's scaled speedup as special cases. This study leads to a better understanding of parallel processing.
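
    The three simplified formulas are compact enough to state side by side. The sketch below (plain Python, with an illustrative workload-scaling function G supplied by the caller) shows how the simplified memory-bounded speedup reduces to Amdahl's law when G(p) = 1 and to Gustafson's scaled speedup when G(p) = p; here f denotes the sequential fraction of the workload.

    ```python
    # Simplified speedup models for a program with sequential fraction f
    # on p processors (a sketch of the formulas discussed in the abstract).

    def amdahl(f, p):
        """Fixed-size speedup (Amdahl's law)."""
        return 1.0 / (f + (1.0 - f) / p)

    def gustafson(f, p):
        """Fixed-time (scaled) speedup (Gustafson's law)."""
        return f + (1.0 - f) * p

    def memory_bounded(f, p, G):
        """Sun-Ni memory-bounded speedup; G(p) models how the parallel
        workload grows when memory scales with p."""
        g = G(p)
        return (f + (1.0 - f) * g) / (f + (1.0 - f) * g / p)

    f, p = 0.05, 64
    print(amdahl(f, p))                          # ~15.4
    print(memory_bounded(f, p, lambda p: 1))     # reduces to Amdahl's law
    print(memory_bounded(f, p, lambda p: p))     # reduces to Gustafson
    print(gustafson(f, p))                       # ~60.85
    ```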

  2. libFLASM: a software library for fixed-length approximate string matching.

    PubMed

    Ayad, Lorraine A K; Pissis, Solon P P; Retha, Ahmad

    2016-11-10

    Approximate string matching is the problem of finding all factors of a given text that are at a distance at most k from a given pattern. Fixed-length approximate string matching is the problem of finding all factors of a text of length n that are at a distance at most k from any factor of length ℓ of a pattern of length m. There exist bit-vector techniques to solve the fixed-length approximate string matching problem in time [Formula: see text] and space [Formula: see text] under the edit and Hamming distance models, where w is the size of the computer word; as such these techniques are independent of the distance threshold k or the alphabet size. Fixed-length approximate string matching is a generalisation of approximate string matching and, hence, has numerous direct applications in computational molecular biology and elsewhere. We present and make available libFLASM, a free open-source C++ software library for solving fixed-length approximate string matching under both the edit and the Hamming distance models. Moreover, we describe how fixed-length approximate string matching is applied to solve real problems by incorporating libFLASM into established applications for multiple circular sequence alignment as well as single and structured motif extraction. Specifically, we describe how it can be used to improve the accuracy of multiple circular sequence alignment in terms of the inferred likelihood-based phylogenies; and we also describe how it is used to efficiently find motifs in molecular sequences representing regulatory or functional regions. A comparison of the performance of the library to other algorithms shows that it is competitive, especially with increasing distance thresholds. Fixed-length approximate string matching is a generalisation of the classic approximate string matching problem. We present libFLASM, a free open-source C++ software library for solving fixed-length approximate string matching. The extensive experimental results presented here suggest that other applications could benefit from using libFLASM, and thus further maintenance and development of libFLASM is desirable.
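
    A naive reference implementation of the problem definition under the Hamming distance model is given below. This is a sketch for illustration only: it runs in O(nmℓ) time rather than the bit-vector time quoted above, and it does not reflect libFLASM's actual C++ API.

    ```python
    # Naive reference implementation of fixed-length approximate string
    # matching under the Hamming distance (a sketch of the problem
    # definition, not libFLASM's bit-vector algorithm or C++ API).

    def hamming(u, v):
        return sum(a != b for a, b in zip(u, v))

    def flasm_hamming(text, pattern, ell, k):
        """Report start positions i such that the length-ell factor
        text[i:i+ell] is at Hamming distance <= k from SOME length-ell
        factor of pattern. Runs in O(n * m * ell) time."""
        hits = []
        for i in range(len(text) - ell + 1):
            window = text[i:i + ell]
            best = min(hamming(window, pattern[j:j + ell])
                       for j in range(len(pattern) - ell + 1))
            if best <= k:
                hits.append((i, best))
        return hits

    print(flasm_hamming("ACGTACGTT", "CGTTA", ell=4, k=1))  # -> [(1, 1), (5, 0)]
    ```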

  3. Fixation and chemical analysis of single fog and rain droplets

    NASA Astrophysics Data System (ADS)

    Kasahara, M.; Akashi, S.; Ma, C.-J.; Tohno, S.

    In the last decade, the importance of global environmental problems has been recognized worldwide. Acid rain is one of the most important global environmental problems, as is global warming. A grasp of the physical and chemical properties of fog and rain droplets is essential to clarify the physical and chemical processes of acid rain and also their effects on forests, materials and ecosystems. We examined the physical and chemical properties of single fog and rain droplets by applying a fixation technique. The sampling method and treatment procedure to fix the liquid droplets as solid particles were investigated. Small liquid particles like fog droplets could be easily fixed within a few minutes by exposure to cyanoacrylate vapor. Large liquid particles like raindrops were also fixed successfully, but some of them were not perfect. A freezing method was applied to fix the large raindrops. Frozen liquid particles remained stable on exposure to cyanoacrylate vapor after freezing. The particle size measurement and the elemental analysis of the fixed particles were performed on an individual basis using a microscope, and SEM-EDX, particle-induced X-ray emission (PIXE) and micro-PIXE analyses, respectively. The concentration in raindrops was dependent upon the droplet size and the elapsed time from the beginning of rainfall.

  4. JPRS Report, China.

    DTIC Science & Technology

    1991-11-19

    …grew 253 percent, net assets grew 87 percent, fixed assets grew 155 percent, and average… vigorous debates among economists a few years ago… although they only account for 2.7 percent of all industrial enterprises, they possess two-thirds of all fixed assets… If we are to… fiscal problems are handled on an ad-hoc basis… large- and medium-sized enterprises do not appear strong… a fixed base number in contracts sets taxes

  5. Optimal Portfolio Selection Under Concave Price Impact

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma Jin, E-mail: jinma@usc.edu; Song Qingshuo, E-mail: songe.qingshuo@cityu.edu.hk; Xu Jing, E-mail: xujing8023@yahoo.com.cn

    2013-06-15

    In this paper we study an optimal portfolio selection problem under instantaneous price impact. Based on some empirical analysis in the literature, we model such impact as a concave function of the trading size when the trading size is small. The price impact can be thought of as either a liquidity cost or a transaction cost, but the concavity nature of the cost leads to some fundamental difference from those in the existing literature. We show that the problem can be reduced to an impulse control problem, but without fixed cost, and that the value function is a viscosity solution to a special type of Quasi-Variational Inequality (QVI). We also prove directly (without using the solution to the QVI) that the optimal strategy exists and more importantly, despite the absence of a fixed cost, it is still in a 'piecewise constant' form, reflecting a more practical perspective.

  6. Development of Quadratic Programming Algorithm Based on Interior Point Method with Estimation Mechanism of Active Constraints

    NASA Astrophysics Data System (ADS)

    Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka

    Instability of the calculation process and increase of calculation time caused by increasing size of continuous optimization problems remain the major issues to be solved in applying the technique to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes the variables getting closer to their upper/lower limits and afterwards releases the fixed ones as needed during the optimization process. It can be considered an algorithm-level integration of the solution strategy of the active-set method into the interior point method framework. We describe some numerical results on the commonly used “CUTEr” benchmark problems to show the effectiveness of the proposed method. Furthermore, test results on a large-sized ELD problem (Economic Load Dispatch in electric power supply scheduling) are also described as a practical industrial application.

  7. Social interaction as a heuristic for combinatorial optimization problems

    NASA Astrophysics Data System (ADS)

    Fontanari, José F.

    2010-11-01

    We investigate the performance of a variant of Axelrod’s model for dissemination of culture—the Adaptive Culture Heuristic (ACH)—on solving an NP-Complete optimization problem, namely, the classification of binary input patterns of size F by a Boolean Binary Perceptron. In this heuristic, N agents, characterized by binary strings of length F which represent possible solutions to the optimization problem, are fixed at the sites of a square lattice and interact with their nearest neighbors only. The interactions are such that the agents’ strings (or cultures) become more similar to the low-cost strings of their neighbors resulting in the dissemination of these strings across the lattice. Eventually the dynamics freezes into a homogeneous absorbing configuration in which all agents exhibit identical solutions to the optimization problem. We find through extensive simulations that the probability of finding the optimal solution is a function of the reduced variable F/N^{1/4}, so that the number of agents must increase with the fourth power of the problem size, N ∝ F^4, to guarantee a fixed probability of success. In this case, we find that the relaxation time to reach an absorbing configuration scales with F^6, which can be interpreted as the overall computational cost of the ACH to find an optimal set of weights for a Boolean binary perceptron, given a fixed probability of success.

  8. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    PubMed

    Almutairy, Meznah; Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
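
    The two sampling schemes compared above can be sketched in a few lines. The snippet below assumes the simplest form of each: fixed sampling keeps every s-th k-mer, and minimizer sampling keeps the lexicographically smallest k-mer in every window of w consecutive k-mers; the sequence and parameter values are illustrative.

    ```python
    # Sketch of the two sampling schemes compared in the paper (assumed
    # simple forms: every s-th k-mer vs. lexicographic minimizers).

    def fixed_sampling(seq, k, s):
        """Keep every s-th k-mer start position."""
        return [(i, seq[i:i + k]) for i in range(0, len(seq) - k + 1, s)]

    def minimizer_sampling(seq, k, w):
        """In every window of w consecutive k-mers, keep the
        lexicographically smallest one (ties broken by position)."""
        picked = set()
        for start in range(len(seq) - k - w + 2):
            window = [(seq[i:i + k], i) for i in range(start, start + w)]
            picked.add(min(window))
        return sorted((i, kmer) for kmer, i in picked)

    seq = "ACGTTGCATGTCGCATGATGCATGAGAGCT"
    print(len(fixed_sampling(seq, k=5, s=3)))      # index size: ~n/s entries
    print(len(minimizer_sampling(seq, k=5, w=3)))  # density roughly 2/(w+1)
    ```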

  9. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches

    PubMed Central

    Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method. PMID:29389989

  10. A general optimality criteria algorithm for a class of engineering optimization problems

    NASA Astrophysics Data System (ADS)

    Belegundu, Ashok D.

    2015-05-01

    An optimality criteria (OC)-based algorithm for optimization of a general class of nonlinear programming (NLP) problems is presented. The algorithm is only applicable to problems where the objective and constraint functions satisfy certain monotonicity properties. For multiply constrained problems which satisfy these assumptions, the algorithm is attractive compared with existing NLP methods as well as prevalent OC methods, as the latter involve computationally expensive active set and step-size control strategies. The fixed point algorithm presented here is applicable not only to structural optimization problems but also to certain problems that occur in resource allocation and inventory models. Convergence aspects are discussed. The fixed point update or resizing formula is given physical significance, which brings out a strength and trim feature. The number of function evaluations remains independent of the number of variables, allowing the efficient solution of problems with a large number of variables.
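
    The flavor of such a fixed-point resizing update can be illustrated on a toy problem that satisfies the required monotonicity properties (objective decreasing and constraint increasing in each variable). The sketch below is not the paper's general algorithm; the damping exponent eta and the bisection bounds on the multiplier are illustrative assumptions.

    ```python
    import numpy as np

    # Toy problem with the monotonicity an OC method needs:
    # minimize f(x) = sum(c / x) subject to g(x) = sum(a * x) = V, x > 0.
    # KKT stationarity gives x_i = sqrt(c_i / (lam * a_i)), suggesting the
    # classic multiplicative fixed-point resizing below (a sketch; eta and
    # the bisection bounds are illustrative choices).

    def oc_resize(c, a, V, eta=0.5, iters=100):
        x = np.ones_like(c)
        for _ in range(iters):
            lo, hi = 1e-9, 1e9              # bisection on multiplier lam
            while hi / lo > 1.0 + 1e-12:
                lam = np.sqrt(lo * hi)
                B = c / (lam * a * x**2)    # optimality ratio, -> 1 at optimum
                x_new = x * B**eta          # damped multiplicative resizing
                if np.dot(a, x_new) > V:    # constraint too large: raise lam
                    lo = lam
                else:
                    hi = lam
            x = x_new
        return x

    c = np.array([1.0, 4.0, 9.0])
    a = np.array([1.0, 1.0, 2.0])
    V = 10.0
    x = oc_resize(c, a, V)
    lam = (np.sum(np.sqrt(a * c)) / V) ** 2   # closed-form multiplier
    print(x, np.sqrt(c / (lam * a)))          # iterate matches closed form
    ```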

  11. Parameterized Complexity Results for General Factors in Bipartite Graphs with an Application to Constraint Programming

    NASA Astrophysics Data System (ADS)

    Gutin, Gregory; Kim, Eun Jung; Soleimanfallah, Arezou; Szeider, Stefan; Yeo, Anders

    The NP-hard general factor problem asks, given a graph and for each vertex a list of integers, whether the graph has a spanning subgraph where each vertex has a degree that belongs to its assigned list. The problem remains NP-hard even if the given graph is bipartite with partition U ⊎ V, and each vertex in U is assigned the list {1}; this subproblem appears in the context of constraint programming as the consistency problem for the extended global cardinality constraint. We show that this subproblem is fixed-parameter tractable when parameterized by the size of the second partite set V. More generally, we show that the general factor problem for bipartite graphs, parameterized by |V |, is fixed-parameter tractable as long as all vertices in U are assigned lists of length 1, but becomes W[1]-hard if vertices in U are assigned lists of length at most 2. We establish fixed-parameter tractability by reducing the problem instance to a bounded number of acyclic instances, each of which can be solved in polynomial time by dynamic programming.

  12. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

    Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.

  13. Simulating ground water-lake interactions: Approaches and insights

    USGS Publications Warehouse

    Hunt, R.J.; Haitjema, H.M.; Krohelski, J.T.; Feinstein, D.T.

    2003-01-01

    Approaches for modeling lake-ground water interactions have evolved significantly from early simulations that used fixed lake stages specified as constant head to sophisticated LAK packages for MODFLOW. Although model input can be complex, the LAK package capabilities and output are superior to methods that rely on a fixed lake stage and compare well to other simple methods where lake stage can be calculated. Regardless of the approach, guidelines presented here for model grid size, location of three-dimensional flow, and extent of vertical capture can facilitate the construction of appropriately detailed models that simulate important lake-ground water interactions without adding unnecessary complexity. In addition to MODFLOW approaches, lake simulation has been formulated in terms of analytic elements. The analytic element lake package had acceptable agreement with a published LAK1 problem, even though there were differences in the total lake conductance and number of layers used in the two models. The grid size used in the original LAK1 problem, however, violated a grid size guideline presented in this paper. Grid sensitivity analyses demonstrated that an appreciable discrepancy in the distribution of stream and lake flux was related to the large grid size used in the original LAK1 problem. This artifact is expected regardless of MODFLOW LAK package used. When the grid size was reduced, a finite-difference formulation approached the analytic element results. These insights and guidelines can help ensure that the proper lake simulation tool is being selected and applied.

  14. An optimized treatment for algorithmic differentiation of an important glaciological fixed-point problem

    DOE PAGES

    Goldberg, Daniel N.; Narayanan, Sri Hari Krishna; Hascoet, Laurent; ...

    2016-05-20

    We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. Finally, the methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.
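
    The reverse-accumulation idea can be seen on a minimal linear example: for the fixed point w = Aw + b with objective J = cᵀw, the adjoint is itself a fixed-point iteration with the transposed operator. The sketch below assumes a contractive map and is not the OpenAD-generated code used in the paper.

    ```python
    import numpy as np

    # Minimal illustration of reverse accumulation for a fixed-point
    # problem (Christianson 1994) on a linear toy map w = A w + b with
    # objective J = c^T w.  The adjoint is itself a fixed-point iteration
    # with the same (transposed) operator; a sketch, not the paper's code.

    rng = np.random.default_rng(0)
    n = 5
    A = 0.4 * rng.standard_normal((n, n)) / np.sqrt(n)  # contractive map
    b = rng.standard_normal(n)
    c = rng.standard_normal(n)

    w = np.zeros(n)                      # forward fixed-point iteration
    for _ in range(200):
        w = A @ w + b

    lam = np.zeros(n)                    # adjoint fixed-point iteration
    for _ in range(200):
        lam = A.T @ lam + c              # lam solves (I - A^T) lam = c

    # Forward check: J from iteration vs. direct solve
    print(c @ w, c @ np.linalg.solve(np.eye(n) - A, b))
    # Adjoint check: dJ/db = lam = (I - A^T)^{-1} c
    print(np.allclose(lam, np.linalg.solve(np.eye(n) - A.T, c)))
    ```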

  15. Temperature Scaling Law for Quantum Annealing Optimizers.

    PubMed

    Albash, Tameem; Martin-Mayor, Victor; Hen, Itay

    2017-09-15

    Physical implementations of quantum annealing unavoidably operate at finite temperatures. We point to a fundamental limitation of fixed finite temperature quantum annealers that prevents them from functioning as competitive scalable optimizers and show that to serve as optimizers annealer temperatures must be appropriately scaled down with problem size. We derive a temperature scaling law dictating that temperature must drop at the very least in a logarithmic manner but also possibly as a power law with problem size. We corroborate our results by experiment and simulations and discuss the implications of these to practical annealers.

  16. Alternative Parameterizations for Cluster Editing

    NASA Astrophysics Data System (ADS)

    Komusiewicz, Christian; Uhlmann, Johannes

    Given an undirected graph G and a nonnegative integer k, the NP-hard Cluster Editing problem asks whether G can be transformed into a disjoint union of cliques by applying at most k edge modifications. In the field of parameterized algorithmics, Cluster Editing has almost exclusively been studied parameterized by the solution size k. Contrastingly, in many real-world instances it can be observed that the parameter k is not really small. This observation motivates our investigation of parameterizations of Cluster Editing different from the solution size k. Our results are as follows. Cluster Editing is fixed-parameter tractable with respect to the parameter "size of a minimum cluster vertex deletion set of G", a typically much smaller parameter than k. Cluster Editing remains NP-hard on graphs with maximum degree six. A restricted but practically relevant version of Cluster Editing is fixed-parameter tractable with respect to the combined parameter "number of clusters in the target graph" and "maximum number of modified edges incident to any vertex in G". Many of our results also transfer to the NP-hard Cluster Deletion problem, where only edge deletions are allowed.

  17. A numerical analysis of phase-change problems including natural convection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Y.; Faghri, A.

    1990-08-01

    Fixed grid solutions for phase-change problems remove the need to satisfy conditions at the phase-change front and can be easily extended to multidimensional problems. The two most important and widely used methods are enthalpy methods and temperature-based equivalent heat capacity methods. Both methods in this group have advantages and disadvantages. Enthalpy methods (Shamsundar and Sparrow, 1975; Voller and Prakash, 1987; Cao et al., 1989) are flexible and can handle phase-change problems occurring both at a single temperature and over a temperature range. The drawback of this method is that although the predicted temperature distributions and melting fronts are reasonable, the predicted time history of the temperature at a typical grid point may have some oscillations. The temperature-based fixed grid methods (Morgan, 1981; Hsiao and Chung, 1984) have no such time history problems and are more convenient with conjugate problems involving an adjacent wall, but have to deal with the severe nonlinearity of the governing equations when the phase-change temperature range is small. In this paper, a new temperature-based fixed-grid formulation is proposed, and the reason that the original equivalent heat capacity model is subject to such restrictions on the time step, mesh size, and the phase-change temperature range will also be discussed.

  18. Unequal-area, fixed-shape facility layout problems using the firefly algorithm

    NASA Astrophysics Data System (ADS)

    Ingole, Supriya; Singh, Dinesh

    2017-07-01

    In manufacturing industries, the facility layout design is a very important task, as it is concerned with the overall manufacturing cost and profit of the industry. The facility layout problem (FLP) is solved by arranging the departments or facilities of known dimensions on the available floor space. The objective of this article is to implement the firefly algorithm (FA) for solving unequal-area, fixed-shape FLPs and optimizing the costs of total material handling and transportation between the facilities. The FA is a nature-inspired algorithm and can be used for combinatorial optimization problems. Benchmark problems from the previous literature are solved using the FA. To check its effectiveness, it is implemented to solve large-sized FLPs. Computational results obtained using the FA show that the algorithm is less time consuming and the total layout costs for FLPs are better than the best results achieved so far.

  19. Computational Study for Planar Connected Dominating Set Problem

    NASA Astrophysics Data System (ADS)

    Marzban, Marjan; Gu, Qian-Ping; Jia, Xiaohua

    The connected dominating set (CDS) problem is a well studied NP-hard problem with many important applications. Dorn et al. [ESA2005, LNCS3669, pp95-106] introduce a new technique to generate 2^{O(sqrt{n})}-time and fixed-parameter algorithms for a number of non-local hard problems, including the CDS problem in planar graphs. The practical performance of this algorithm is yet to be evaluated. We perform a computational study for such an evaluation. The results show that the size of instances that can be solved by the algorithm depends mainly on the branchwidth of the instances, coinciding with the theoretical result. For graphs with small or moderate branchwidth, CDS problem instances with up to a few thousand edges can be solved in practical time and memory space. This suggests that branch-decomposition based algorithms can be practical for the planar CDS problem.

  20. Grouping in decomposition method for multi-item capacitated lot-sizing problem with immediate lost sales and joint and item-dependent setup cost

    NASA Astrophysics Data System (ADS)

    Narenji, M.; Fatemi Ghomi, S. M. T.; Nooraie, S. V. R.

    2011-03-01

    This article examines a dynamic and discrete multi-item capacitated lot-sizing problem in a completely deterministic production or procurement environment with limited production/procurement capacity where lost sales (the loss of customer demand) are permitted. There is no inventory space capacity, and the production activity incurs a fixed-charge linear cost function. Similarly, the inventory holding cost and the cost of lost demand are both associated with linear cost functions without fixed charges. For the sake of simplicity, a unit of each item is assumed to consume one unit of production/procurement capacity. We analyse a version of the setup cost incurred by a production or procurement activity in a given period of the planning horizon. In this version, called the joint and item-dependent setup cost, an additional item-dependent setup cost is incurred separately for each produced or ordered item on top of the joint setup cost.

  1. An investigation of messy genetic algorithms

    NASA Technical Reports Server (NTRS)

    Goldberg, David E.; Deb, Kalyanmoy; Korb, Bradley

    1990-01-01

    Genetic algorithms (GAs) are search procedures based on the mechanics of natural selection and natural genetics. They combine the use of string codings or artificial chromosomes and populations with the selective and juxtapositional power of reproduction and recombination to motivate a surprisingly powerful search heuristic in many problems. Despite their empirical success, there has been a long-standing objection to the use of GAs in arbitrarily difficult problems. A new approach was launched. Results for a 30-bit, order-three deceptive problem were obtained using a new type of genetic algorithm called a messy genetic algorithm (mGA). Messy genetic algorithms combine the use of variable-length strings, a two-phase selection scheme, and messy genetic operators to effect a solution to the fixed-coding problem of standard simple GAs. The results of the study of mGAs in problems with nonuniform subfunction scale and size are presented. The mGA approach is summarized, both its operation and the theory of its use. Experiments on problems of varying scale, varying building-block size, and combined varying scale and size are presented.

  2. Divergent estimation error in portfolio optimization and in linear regression

    NASA Astrophysics Data System (ADS)

    Kondor, I.; Varga-Haszonits, I.

    2008-08-01

    The problem of estimation error in portfolio optimization is discussed, in the limit where the portfolio size N and the sample size T go to infinity such that their ratio is fixed. The estimation error strongly depends on the ratio N/T and diverges for a critical value of this parameter. This divergence is the manifestation of an algorithmic phase transition, it is accompanied by a number of critical phenomena, and displays universality. As the structure of a large number of multidimensional regression and modelling problems is very similar to portfolio optimization, the scope of the above observations extends far beyond finance, and covers a large number of problems in operations research, machine learning, bioinformatics, medical science, economics, and technology.
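
    The divergence is easy to reproduce numerically for the minimum-variance portfolio. The simulation below assumes an identity true covariance (so the optimal portfolio is equal-weight with risk 1/N) and compares the true risk of the estimated portfolio against it; sample sizes are illustrative, and the measured ratio tracks 1/(1 - N/T).

    ```python
    import numpy as np

    # Sketch: divergence of the estimation error of the minimum-variance
    # portfolio as N/T approaches 1 (true covariance = identity, so the
    # true optimal portfolio is equal-weight with risk 1/N; sizes are
    # illustrative).  The risk ratio behaves like 1/(1 - N/T).

    rng = np.random.default_rng(1)

    def risk_ratio(N, T):
        X = rng.standard_normal((T, N))          # T samples of N assets
        S = np.cov(X, rowvar=False)              # sample covariance
        w = np.linalg.solve(S, np.ones(N))       # min-variance weights
        w /= w.sum()
        true_risk = w @ w                        # w^T I w under true cov
        return true_risk / (1.0 / N)             # vs. optimal risk 1/N

    for ratio in (0.25, 0.5, 0.8, 0.95):
        T = 1000
        N = int(ratio * T)
        print(f"N/T={ratio:4.2f}  risk ratio ~ {risk_ratio(N, T):6.2f}"
              f"  (theory {1/(1-ratio):6.2f})")
    ```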

  3. Extended Islands of Tractability for Parsimony Haplotyping

    NASA Astrophysics Data System (ADS)

    Fleischer, Rudolf; Guo, Jiong; Niedermeier, Rolf; Uhlmann, Johannes; Wang, Yihui; Weller, Mathias; Wu, Xi

    Parsimony haplotyping is the problem of finding a smallest size set of haplotypes that can explain a given set of genotypes. The problem is NP-hard, and many heuristic and approximation algorithms as well as polynomial-time solvable special cases have been discovered. We propose improved fixed-parameter tractability results with respect to the parameter "size of the target haplotype set" k by presenting an O*(k^{4k})-time algorithm. This also applies to the practically important constrained case, where we can only use haplotypes from a given set. Furthermore, we show that the problem becomes polynomial-time solvable if the given set of genotypes is complete, i.e., contains all possible genotypes that can be explained by the set of haplotypes.

  4. Multi-vehicle mobility allowance shuttle transit (MAST) system : an analytical model to select the fleet size and a scheduling heuristic.

    DOT National Transportation Integrated Search

    2012-06-01

    The mobility allowance shuttle transit (MAST) system is a hybrid transit system in which vehicles are : allowed to deviate from a fixed route to serve flexible demand. A mixed integer programming (MIP) : formulation for the static scheduling problem ...

  5. A System Gone Berserk: How Are Zero-Tolerance Policies Really Affecting Schools?

    ERIC Educational Resources Information Center

    Martinez, Stephanie

    2009-01-01

    School administrators continue to use zero-tolerance policies as a one-size-fits-all, quick-fix solution to curbing discipline problems with students. Originally intended to address serious offenses such as possession of firearms, zero-tolerance policies are also now meant to address fighting and disrespect. Despite the seeming popularity of…

  6. Combined Simulated Annealing and Genetic Algorithm Approach to Bus Network Design

    NASA Astrophysics Data System (ADS)

    Liu, Li; Olszewski, Piotr; Goh, Pong-Chai

    A new method - combined simulated annealing (SA) and genetic algorithm (GA) approach is proposed to solve the problem of bus route design and frequency setting for a given road network with fixed bus stop locations and fixed travel demand. The method involves two steps: a set of candidate routes is generated first and then the best subset of these routes is selected by the combined SA and GA procedure. SA is the main process to search for a better solution to minimize the total system cost, comprising user and operator costs. GA is used as a sub-process to generate new solutions. Bus demand assignment on two alternative paths is performed at the solution evaluation stage. The method was implemented on four theoretical grid networks of different size and a benchmark network. Several GA operators (crossover and mutation) were utilized and tested for their effectiveness. The results show that the proposed method can efficiently converge to the optimal solution on a small network but computation time increases significantly with network size. The method can also be used for other transport operation management problems.

  7. Forecast horizon of multi-item dynamic lot size model with perishable inventory.

    PubMed

    Jing, Fuying; Lan, Zirui

    2017-01-01

    This paper studies a multi-item dynamic lot size problem for perishable products where stock deterioration rates and inventory costs are age-dependent. We explore structural properties in an optimal solution under two cost structures and develop a dynamic programming algorithm to solve the problem in polynomial time when the number of products is fixed. We establish forecast horizon results that can help the operation manager to decide the precise forecast horizon in a rolling decision-making process. Finally, based on a detailed test bed of instances, we obtain useful managerial insights on the impact of deterioration rate and lifetime of products on the length of the forecast horizon.
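
    For contrast with the multi-item perishable model studied here, the classic single-item uncapacitated lot-sizing recursion (Wagner-Whitin) is sketched below; the demands and costs are illustrative, and the age-dependent deterioration of the paper's model is omitted.

    ```python
    # Sketch of the classic dynamic-programming recursion for single-item
    # dynamic lot sizing (Wagner-Whitin); the paper's algorithm handles
    # multiple perishable items with age-dependent costs, but the same
    # "batch the demand of periods j..t into one setup" idea applies.

    def lot_size(demand, setup, hold):
        """dp[t] = min cost of meeting demand for periods 1..t."""
        T = len(demand)
        INF = float("inf")
        dp = [0.0] + [INF] * T
        choice = [0] * (T + 1)
        for t in range(1, T + 1):
            for j in range(1, t + 1):        # last setup in period j
                # holding cost of producing periods j..t in period j
                carry = sum(hold * (i - j) * demand[i - 1]
                            for i in range(j, t + 1))
                cost = dp[j - 1] + setup + carry
                if cost < dp[t]:
                    dp[t], choice[t] = cost, j
        return dp[T], choice

    demand = [20, 50, 10, 50, 50, 10]
    print(lot_size(demand, setup=100.0, hold=1.0))
    ```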

  8. Forecast horizon of multi-item dynamic lot size model with perishable inventory

    PubMed Central

    Jing, Fuying

    2017-01-01

    This paper studies a multi-item dynamic lot size problem for perishable products where stock deterioration rates and inventory costs are age-dependent. We explore structural properties in an optimal solution under two cost structures and develop a dynamic programming algorithm to solve the problem in polynomial time when the number of products is fixed. We establish forecast horizon results that can help the operation manager to decide the precise forecast horizon in a rolling decision-making process. Finally, based on a detailed test bed of instances, we obtain useful managerial insights on the impact of deterioration rate and lifetime of products on the length of the forecast horizon. PMID:29125856

  9. Scaling fixed-field alternating gradient accelerators with a small orbit excursion.

    PubMed

    Machida, Shinji

    2009-10-16

    A novel scaling type of fixed-field alternating gradient (FFAG) accelerator is proposed that solves the major problems of conventional scaling and nonscaling types. This scaling FFAG accelerator can achieve a much smaller orbit excursion by taking a larger field index k. A triplet focusing structure makes it possible to set the operating point in the second stability region of Hill's equation with a reasonable sensitivity to various errors. The orbit excursion is about 5 times smaller than in a conventional scaling FFAG accelerator and the beam size growth due to typical errors is at most 10%.

  10. A Kernel-based Lagrangian method for imperfectly-mixed chemical reactions

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael J.; Pankavich, Stephen; Benson, David A.

    2017-05-01

    Current Lagrangian (particle-tracking) algorithms used to simulate diffusion-reaction equations must employ a certain number of particles to properly emulate the system dynamics, particularly for imperfectly-mixed systems. The number of particles is tied to the statistics of the initial concentration fields of the system at hand. Systems with shorter-range correlation and/or smaller concentration variance require more particles, potentially limiting the computational feasibility of the method. For the well-known problem of bimolecular reaction, we show that using kernel-based, rather than Dirac delta, particles can significantly reduce the required number of particles. We derive the fixed width of a Gaussian kernel for a given reduced number of particles that analytically eliminates the error between kernel and Dirac solutions at any specified time. We also show how to solve for the fixed kernel size by minimizing the squared differences between solutions over any given time interval. Numerical results show that the width of the kernel should be kept below about 12% of the domain size, and that the analytic equations used to derive kernel width suffer significantly from the neglect of higher-order moments. The simulations with a kernel width given by least squares minimization perform better than those made to match at one specific time. A heuristic time-variable kernel size, based on the previous results, performs on par with the least squares fixed kernel size.
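
    The difference between Dirac and kernel particles can be illustrated by reconstructing a one-dimensional concentration field from the same particle set both ways. In the sketch below the kernel width h is an illustrative fixed choice, kept under the ~12% of domain size suggested above.

    ```python
    import numpy as np

    # Sketch: reconstructing a concentration field from N particles with
    # Dirac (histogram) masses vs. Gaussian kernels of fixed width h.
    # With few particles the kernel representation is far smoother, which
    # is the effect the paper exploits; h is an illustrative choice.

    rng = np.random.default_rng(2)
    L, N, h = 1.0, 200, 0.05                 # domain size, particles, width
    xp = rng.normal(0.5, 0.1, N)             # particle positions
    grid = np.linspace(0.0, L, 101)

    # Dirac particles: bin counts (mass 1/N per particle)
    hist, edges = np.histogram(xp, bins=100, range=(0.0, L))
    c_dirac = hist / (N * (edges[1] - edges[0]))

    # Kernel particles: superposed Gaussians of fixed width h
    c_kernel = np.mean(
        np.exp(-(grid[None, :] - xp[:, None])**2 / (2 * h**2)), axis=0
    ) / (np.sqrt(2 * np.pi) * h)

    print(c_dirac.max(), c_kernel.max())     # kernel field is smoother
    ```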

  11. On Making a Distinguished Vertex Minimum Degree by Vertex Deletion

    NASA Astrophysics Data System (ADS)

    Betzler, Nadja; Bredereck, Robert; Niedermeier, Rolf; Uhlmann, Johannes

    For directed and undirected graphs, we study the problem to make a distinguished vertex the unique minimum-(in)degree vertex through deletion of a minimum number of vertices. The corresponding NP-hard optimization problems are motivated by applications concerning control in elections and social network analysis. Continuing previous work for the directed case, we show that the problem is W[2]-hard when parameterized by the graph's feedback arc set number, whereas it becomes fixed-parameter tractable when combining the parameters "feedback vertex set number" and "number of vertices to delete". For the so far unstudied undirected case, we show that the problem is NP-hard and W[1]-hard when parameterized by the "number of vertices to delete". On the positive side, we show fixed-parameter tractability for several parameterizations measuring tree-likeness, including a vertex-linear problem kernel with respect to the parameter "feedback edge set number". On the contrary, we show a non-existence result concerning polynomial-size problem kernels for the combined parameter "vertex cover number and number of vertices to delete", implying corresponding nonexistence results when replacing vertex cover number by treewidth or feedback vertex set number.

  12. School Bullying: Why Quick Fixes Do Not Prevent School Failure

    ERIC Educational Resources Information Center

    Casebeer, Cindy M.

    2012-01-01

    School bullying is a serious problem. It is associated with negative effects for bullies, targets, and bystanders. Bullying is related to school shootings, student suicides, and poor academic outcomes. Yet, this issue cannot be solved by way of simple, one-size-fits-all solutions. Instead, school bullying is a complex, systemic issue that requires…

  13. Vertical Object Layout and Compression for Fixed Heaps

    NASA Astrophysics Data System (ADS)

    Titzer, Ben L.; Palsberg, Jens

    Research into embedded sensor networks has placed increased focus on the problem of developing reliable and flexible software for microcontroller-class devices. Languages such as nesC [10] and Virgil [20] have brought higher-level programming idioms to this lowest layer of software, thereby adding expressiveness. Both languages are marked by the absence of dynamic memory allocation, which removes the need for a runtime system to manage memory. While nesC offers code modules with statically allocated fields, arrays and structs, Virgil allows the application to allocate and initialize arbitrary objects during compilation, producing a fixed object heap for runtime. This paper explores techniques for compressing fixed object heaps with the goal of reducing the RAM footprint of a program. We explore table-based compression and introduce a novel form of object layout called vertical object layout. We provide experimental results that measure the impact on RAM size, code size, and execution time for a set of Virgil programs. Our results show that compressed vertical layout has better execution time and code size than table-based compression while achieving more than 20% heap reduction on 6 of 12 benchmark programs and 2-17% heap reduction on the remaining 6. We also present a formalization of vertical object layout and prove tight relationships between three styles of object layout.
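
    A schematic of the layout idea: in a vertical layout each field is stored in its own packed column indexed by object id, so fields with small value ranges can use narrow element types. The sketch below only illustrates the data organization in Python; it is not Virgil's actual compiler representation.

    ```python
    # Sketch of "vertical" object layout: instead of storing each object's
    # fields together (horizontal layout), every field lives in its own
    # packed array indexed by object id.  Names here are illustrative.

    from array import array

    # Horizontal layout: one record per object
    horizontal = [{"x": 3, "y": 7, "flag": 1},
                  {"x": 4, "y": 8, "flag": 0}]

    # Vertical layout: one packed column per field
    vertical = {
        "x":    array("b", [3, 4]),   # 1 byte per object instead of a
        "y":    array("b", [7, 8]),   # full machine word per field
        "flag": array("b", [1, 0]),
    }

    def get_field(heap, oid, field):      # field access becomes column[oid]
        return heap[field][oid]

    print(get_field(vertical, 1, "y"))    # -> 8
    ```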

  14. Homogenization of Winkler-Steklov spectral conditions in three-dimensional linear elasticity

    NASA Astrophysics Data System (ADS)

    Gómez, D.; Nazarov, S. A.; Pérez, M. E.

    2018-04-01

    We consider a homogenization Winkler-Steklov spectral problem that consists of the elasticity equations for a three-dimensional homogeneous anisotropic elastic body which has a plane part of the surface subject to alternating boundary conditions on small regions periodically placed along the plane. These conditions are of the Dirichlet type and of the Winkler-Steklov type, the latter containing the spectral parameter. The rest of the boundary of the body is fixed, and the period and size of the regions, where the spectral parameter arises, are of order ε. For fixed ε, the problem has a discrete spectrum, and we address the asymptotic behavior of the eigenvalues {β_k^ε}_{k=1}^∞ as ε → 0. We show that β_k^ε = O(ε^{-1}) for each fixed k, and we observe a common limit point for all the rescaled eigenvalues ε β_k^ε, while we make it evident that, although the periodicity of the structure only affects the boundary conditions, a band-gap structure of the spectrum is inherited asymptotically. Also, we provide the asymptotic behavior for certain "groups" of eigenmodes.

  15. Adaptive Framework for Classification and Novel Class Detection over Evolving Data Streams with Limited Labeled Data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haque, Ahsanul; Khan, Latifur; Baron, Michael

    2015-09-01

    Most approaches to classifying evolving data streams either divide the stream of data into fixed-size chunks or use gradual forgetting to address the problems of infinite length and concept drift. Finding the fixed size of the chunks or choosing a forgetting rate without prior knowledge about time-scale of change is not a trivial task. As a result, these approaches suffer from a trade-off between performance and sensitivity. To address this problem, we present a framework which uses change detection techniques on the classifier performance to determine chunk boundaries dynamically. Though this framework exhibits good performance, it is heavily dependent on the availability of true labels of data instances. However, labeled data instances are scarce in realistic settings and not readily available. Therefore, we present a second framework which is unsupervised in nature, and exploits change detection on classifier confidence values to determine chunk boundaries dynamically. In this way, it avoids the use of labeled data while still addressing the problems of infinite length and concept drift. Moreover, both of our proposed frameworks address the concept evolution problem by detecting outliers having similar values for the attributes. We provide theoretical proof that our change detection method works better than other state-of-the-art approaches in this particular scenario. Results from experiments on various benchmark and synthetic data sets also show the efficiency of our proposed frameworks.
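
    A minimal version of the unsupervised chunking idea, detecting a significant drop in mean classifier confidence between a reference window and a current window, might look as follows; the window length and drop threshold are illustrative assumptions, not the paper's tuned framework.

    ```python
    # Sketch of determining chunk boundaries from classifier confidence:
    # compare the mean confidence of a sliding window against a reference
    # window and declare a boundary on a significant drop.  Parameters
    # are illustrative assumptions.

    from collections import deque

    def chunk_boundaries(confidences, win=50, drop=0.10):
        ref, cur = deque(maxlen=win), deque(maxlen=win)
        boundaries = []
        for t, c in enumerate(confidences):
            cur.append(c)
            if len(ref) == win and len(cur) == win:
                if sum(ref) / win - sum(cur) / win > drop:  # confidence fell
                    boundaries.append(t)                    # chunk boundary
                    ref.clear(); cur.clear()                # restart windows
                    continue
            if len(cur) == win:
                ref.append(cur.popleft())                   # age into reference
        return boundaries

    stream = [0.9] * 200 + [0.6] * 200 + [0.3] * 200  # drops near t=200, 400
    print(chunk_boundaries(stream))
    ```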

  16. Schools Funding in Georgia: Changes, Problems and Analysis

    ERIC Educational Resources Information Center

    Maglakelidze, Shorena; Giorgobiani, Zurab; Shukakidze, Berika

    2013-01-01

    There is no fixed rule about how financial resources must be directed to the education sector. It is quite clear that the size of investment in the sector well defines the quality of education students are offered. It is highly important to define the amount of money, which is needed for effective functioning of schools and it is also important to…

  17. Little Evidence That Time in Child Care Causes Externalizing Problems During Early Childhood in Norway

    PubMed Central

    Zachrisson, Henrik Daae; Dearing, Eric; Lekhal, Ratib; Toppelberg, Claudio O.

    2012-01-01

    Associations between maternal reports of hours in child care and children’s externalizing problems at 18 and 36 months of age were examined in a population-based Norwegian sample (n = 75,271). Within a sociopolitical context of homogenously high-quality child care, there was little evidence that high quantity of care causes externalizing problems. Using conventional approaches to handling selection bias and listwise deletion for substantial attrition in this sample, more hours in care predicted higher problem levels, yet with small effect sizes. The finding, however, was not robust to using multiple imputation for missing values. Moreover, when sibling and individual fixed-effects models for handling selection bias were used, no relation between hours and problems was evident. PMID:23311645

  18. Speeding up parallel processing

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    In 1967 Amdahl expressed doubts about the ultimate utility of multiprocessors. The formulation, now called Amdahl's law, became part of the computing folklore and has inspired much skepticism about the ability of the current generation of massively parallel processors to efficiently deliver all their computing power to programs. The widely publicized recent results of a group at Sandia National Laboratory, which showed speedup on a 1024 node hypercube of over 500 for three fixed size problems and over 1000 for three scalable problems, have convincingly challenged this bit of folklore and have given new impetus to parallel scientific computing.

  19. Rate-Compatible LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel

    2009-01-01

    A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either fixed input block size or fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first mentioned submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes. These are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-block-size submethod is useful for applications in which framing constraints are imposed on the physical layers of affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division modulation.

  20. Improved belief propagation algorithm finds many Bethe states in the random-field Ising model on random graphs

    NASA Astrophysics Data System (ADS)

    Perugini, G.; Ricci-Tersenghi, F.

    2018-01-01

    We first present an empirical study of the Belief Propagation (BP) algorithm, when run on the random field Ising model defined on random regular graphs in the zero temperature limit. We introduce the notion of extremal solutions for the BP equations, and we use them to fix a fraction of spins in their ground state configuration. At the phase transition point the fraction of unconstrained spins percolates and their number diverges with the system size. This in turn makes the associated optimization problem highly nontrivial in the critical region. Using the bounds on the BP messages provided by the extremal solutions we design a new and very easy to implement BP scheme which is able to output a large number of stable fixed points. On the one hand, this new algorithm is able to provide the minimum energy configuration with high probability in a competitive time. On the other hand, we found that the number of fixed points of the BP algorithm grows with the system size in the critical region. This unexpected feature poses new relevant questions about the physics of this class of models.

  1. Cost effective campaigning in social networks

    NASA Astrophysics Data System (ADS)

    Kotnis, Bhushan; Kuri, Joy

    2016-05-01

    Campaigners are increasingly using online social networking platforms for promoting products, ideas and information. A popular method of promoting a product or even an idea is incentivizing individuals to evangelize the idea vigorously by providing them with referral rewards in the form of discounts, cash backs, or social recognition. Due to budget constraints on scarce resources such as money and manpower, it may not be possible to provide incentives for the entire population, and hence incentives need to be allocated judiciously to appropriate individuals for ensuring the highest possible outreach size. We aim to do the same by formulating and solving an optimization problem using percolation theory. In particular, we compute the set of individuals that are provided incentives for minimizing the expected cost while ensuring a given outreach size. We also solve the problem of computing the set of individuals to be incentivized for maximizing the outreach size for a given cost budget. The optimization problem turns out to be nontrivial; it involves quantities that need to be computed by numerically solving a fixed point equation. Our primary contribution is to show that, for a fairly general cost structure, the optimization problems can be solved by solving a simple linear program. We believe that our approach of using percolation theory to formulate an optimization problem is the first of its kind.

  2. Quit Surfing and Start "Clicking": One Professor's Effort to Combat the Problems of Teaching the U.S. Survey in a Large Lecture Hall

    ERIC Educational Resources Information Center

    Cole, Stephanie

    2010-01-01

    Teaching an introductory survey course in a typical lecture hall presents a series of related obstacles. The large number of students, the size of the room, and the fixed nature of the seating tend to maximize the distance between instructor and students. That distance then grants enrolled students enough anonymity to skip class too frequently and…

  3. Fast Decentralized Averaging via Multi-scale Gossip

    NASA Astrophysics Data System (ADS)

    Tsianos, Konstantinos I.; Rabbat, Michael G.

    We are interested in the problem of computing the average consensus in a distributed fashion on random geometric graphs. We describe a new algorithm called Multi-scale Gossip which employs a hierarchical decomposition of the graph to partition the computation into tractable sub-problems. Using only pairwise messages of fixed size that travel at most O(n^{1/3}) hops, our algorithm is robust and has communication cost of O(n log log n log ε^{-1}) transmissions, which is order-optimal up to the logarithmic factor in n. Simulated experiments verify the good expected performance on graphs of many thousands of nodes.
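
    For reference, the baseline that Multi-scale Gossip improves upon, plain randomized pairwise gossip, is sketched below on a toy ring topology (not a random geometric graph): at each step a random edge averages its two endpoint values, and all nodes converge to the global mean.

    ```python
    import random

    # Sketch of plain randomized pairwise gossip (the baseline that
    # Multi-scale Gossip improves on): a random edge is chosen and its
    # two endpoints replace their values with the average.  The ring
    # topology and step count below are illustrative choices.

    def gossip(values, edges, steps=100_000, seed=0):
        rng = random.Random(seed)
        v = list(values)
        for _ in range(steps):
            i, j = rng.choice(edges)
            v[i] = v[j] = (v[i] + v[j]) / 2.0    # pairwise average
        return v

    n = 20
    edges = [(i, (i + 1) % n) for i in range(n)]  # ring topology
    values = [float(i) for i in range(n)]         # true average = 9.5
    out = gossip(values, edges)
    print(min(out), max(out))                     # both approach 9.5
    ```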

  4. Plantar pressure cartography reconstruction from 3 sensors.

    PubMed

    Abou Ghaida, Hussein; Mottet, Serge; Goujon, Jean-Marc

    2014-01-01

    Foot problem diagnosis is often made using pressure mapping systems, which are unfortunately confined to laboratories. In the context of e-health and telemedicine for home monitoring of patients having foot problems, our focus is to present an acceptable system for daily use. We developed an ambulatory instrumented insole using 3 pressure sensors to visualize plantar pressure cartographies. We show that a standard insole with fixed sensor positions could be used for different foot sizes. The results show an average error measured at each pixel of 0.01 daN, with a standard deviation of 0.005 daN.

  5. A Decision-making Model for a Two-stage Production-delivery System in SCM Environment

    NASA Astrophysics Data System (ADS)

    Feng, Ding-Zhong; Yamashiro, Mitsuo

    A decision-making model is developed for an optimal production policy in a two-stage production-delivery system that incorporates a fixed quantity supply of finished goods to a buyer at a fixed interval of time. First, a general cost model is formulated considering both the supplier (of raw materials) and buyer (of finished products) sides. Then an optimal solution to the problem is derived on the basis of the cost model. Using the proposed model and its optimal solution, one can determine the optimal production lot size for each stage, the optimal number of transportations for semi-finished goods, and the optimal quantity of semi-finished goods transported each time to meet the lumpy demand of consumers. Also, we examine the sensitivity of raw materials ordering and production lot size to changes in ordering cost, transportation cost and manufacturing setup cost. A pragmatic computation approach is proposed to obtain integer approximation solutions for operational situations. Finally, we give some numerical examples.

  6. A path-oriented knowledge representation system: Defusing the combinatorial explosion

    NASA Technical Reports Server (NTRS)

    Karamouzis, Stamos T.; Barry, John S.; Smith, Steven L.; Feyock, Stefan

    1995-01-01

    LIMAP is a programming system oriented toward efficient information manipulation over fixed finite domains, and quantification over paths and predicates. A generalization of Warshall's Algorithm to precompute paths in a sparse matrix representation of semantic nets is employed to allow questions involving paths between components to be posed and answered easily. LIMAP's ability to cache all paths between two components in a matrix cell proved to be a computational obstacle, however, when the semantic net grew to realistic size. The present paper describes a means of mitigating this combinatorial explosion to an extent that makes the use of the LIMAP representation feasible for problems of significant size. The technique we describe radically reduces the size of the search space in which LIMAP must operate; semantic nets of more than 500 nodes have been attacked successfully. Furthermore, it appears that the procedure described is applicable not only to LIMAP, but to a number of other combinatorially explosive search space problems found in AI as well.
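
    The path precomputation rests on Warshall's algorithm; the classic Boolean transitive-closure version is sketched below. LIMAP's generalization caches the actual paths in each matrix cell, which is the source of the combinatorial blow-up discussed above; this sketch shows only the Boolean core.

    ```python
    # Classic Warshall transitive closure over a Boolean adjacency matrix.
    # LIMAP generalizes this by storing sets of paths (not just
    # reachability) in each cell; this sketch shows only the Boolean core.

    def warshall(adj):
        """adj[i][j] = True iff there is an edge i -> j; returns closure."""
        n = len(adj)
        reach = [row[:] for row in adj]
        for k in range(n):
            for i in range(n):
                if reach[i][k]:
                    for j in range(n):
                        if reach[k][j]:
                            reach[i][j] = True
        return reach

    adj = [[False, True,  False],
           [False, False, True ],
           [False, False, False]]
    print(warshall(adj))   # node 0 now reaches node 2
    ```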

  7. Fix and forget or fix and report: a qualitative study of tensions at the front line of incident reporting.

    PubMed

    Hewitt, Tanya Anne; Chreim, Samia

    2015-05-01

    Practitioners frequently encounter safety problems that they themselves can resolve on the spot. We ask: when faced with such a problem, do practitioners fix it in the moment and forget about it, or do they fix it in the moment and report it? We consider factors underlying these two approaches. We used a qualitative case study design employing in-depth interviews with 40 healthcare practitioners in a tertiary care hospital in Ontario, Canada. We conducted a thematic analysis, and compared the findings with the literature. 'Fixing and forgetting' was the main choice that most practitioners made in situations where they faced problems that they themselves could resolve. These situations included (A) handling near misses, which were seen as unworthy of reporting since they did not result in actual harm to the patient, (B) prioritising solving individual patients' safety problems, which were viewed as unique or one-time events and (C) encountering re-occurring safety problems, which were framed as inevitable, routine events. In only a few instances was 'fixing and reporting' mentioned as a way that the providers dealt with problems that they could resolve. We found that generally healthcare providers do not prioritise reporting if a safety problem is fixed. We argue that fixing and forgetting patient safety problems encountered may not serve patient safety as well as fixing and reporting. The latter approach aligns with recent calls for patient safety to be more preventive. We consider implications for practice.

  8. Effective Capital Provision Within Government. Methodologies for Right-Sizing Base Infrastructure

    DTIC Science & Technology

    2005-01-01

    unknown distributions, since they more accurately represent the complexity of real-world problems. Forecasting uncertain future demand flows is critical to... ordering system with no time lags and no additional costs for instantaneous delivery, shortage and holding costs would be eliminated, because the... order a fixed quantity, Q. 4.1.4 Analyzed Time Step. Time is an important dimension in inventory models, since the way the system changes over time affects

  9. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining.

    PubMed

    Hero, Alfred O; Rajaratnam, Bala

    2016-01-01

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data". Sample complexity however has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
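
    The sample-starved regime the authors analyze is easy to demonstrate numerically: with the sample size n held fixed, the largest spurious pairwise correlation among truly independent variables grows toward 1 as the dimension p increases. A minimal sketch, with all sizes invented:

        import numpy as np

        rng = np.random.default_rng(0)

        def max_spurious_correlation(n, p):
            # Largest |pairwise correlation| among p independent variables
            # observed through only n samples (true correlations are zero).
            X = rng.standard_normal((n, p))
            R = np.corrcoef(X, rowvar=False)   # p x p sample correlation matrix
            np.fill_diagonal(R, 0.0)
            return np.abs(R).max()

        # Fixed n, growing p: the purely high-dimensional regime.
        for p in (10, 100, 1000):
            print(p, round(max_spurious_correlation(n=20, p=p), 3))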

  10. EAS fluctuation approach to primary mass composition investigation

    NASA Technical Reports Server (NTRS)

    Stamenov, J. N.; Janminchev, V. D.

    1985-01-01

    The analysis of the shapes of muon and electron fluctuation distributions by a statistical method of inverse problem solution makes it possible to obtain the relative contributions of the five main groups of primary nuclei. The method is model-independent for a broad class of interaction models and can give good results for observation levels not too far from the shower development maximum, provided showers are selected with fixed sizes and zenith angles no larger than 30 deg.

  11. A survey of methods of feasible directions for the solution of optimal control problems

    NASA Technical Reports Server (NTRS)

    Polak, E.

    1972-01-01

    Three methods of feasible directions for optimal control are reviewed. These methods are an extension of the Frank-Wolfe method, a dual method devised by Pironneau and Polak, and a Zoutendijk method. The categories of continuous optimal control problems considered are: (1) fixed-time problems with fixed initial state, free terminal state, and simple constraints on the control; (2) fixed-time problems with inequality constraints on both the initial and the terminal state and no control constraints; (3) free-time problems with inequality constraints on the initial and terminal states and simple constraints on the control; and (4) fixed-time problems with inequality state-space constraints and constraints on the control. The nonlinear programming algorithms are derived for each of the methods in its associated category.
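
    For problems with simple control constraints, a feasible-directions step of the Frank-Wolfe type can be sketched on a finite-dimensional surrogate: linearize the objective, minimize the linearization over the feasible set, and step toward that minimizer. The box constraints, objective and step rule below are invented for illustration and are not the paper's formulation.

        import numpy as np

        def frank_wolfe_box(grad, x0, lo, hi, iters=100):
            # Each step minimizes the linearized objective over the box
            # (an LP whose solution is a box vertex), then moves toward it.
            x = x0.astype(float)
            for k in range(iters):
                g = grad(x)
                s = np.where(g > 0, lo, hi)      # vertex minimizing <g, s>
                gamma = 2.0 / (k + 2.0)          # classic step-size rule
                x = x + gamma * (s - x)
            return x

        # Toy objective f(x) = 0.5*||x - b||^2, target b partly outside the box.
        b = np.array([0.5, 1.7, -0.3])
        x = frank_wolfe_box(lambda x: x - b, np.zeros(3), lo=0.0, hi=1.0)
        print(np.round(x, 3))   # approaches the projection of b onto the box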

  12. Parameterized Complexity of k-Anonymity: Hardness and Tractability

    NASA Astrophysics Data System (ADS)

    Bonizzoni, Paola; Della Vedova, Gianluca; Dondi, Riccardo; Pirola, Yuri

    The problem of publishing personal data without giving up privacy is becoming increasingly important. A precise formalization that has recently been proposed is k-anonymity, where the rows of a table are partitioned into clusters of size at least k and all rows in a cluster become the same tuple after the suppression of some entries. The natural optimization problem, where the goal is to minimize the number of suppressed entries, is hard even when the stored values are over a binary alphabet or the table consists of a bounded number of columns. In this paper we study how the complexity of the problem is influenced by different parameters. First we show that the problem is W[1]-hard when parameterized by the value of the solution (and k). Then we exhibit a fixed-parameter algorithm when the problem is parameterized by the number of columns and the number of different values in any column.
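
    The objective being minimized is easy to state in code: within each cluster, every column on which the rows disagree must be suppressed in all of the cluster's rows. A small sketch of the cost of one candidate clustering (the toy binary table is invented for the example):

        def suppression_cost(cluster):
            # Entries to suppress ('*') so all rows in the cluster become
            # identical: one per row for every column where rows disagree.
            disagreeing = sum(1 for col in zip(*cluster) if len(set(col)) > 1)
            return disagreeing * len(cluster)

        # Toy table over a binary alphabet (the hard case cited above):
        rows = [(0, 1, 0), (0, 1, 1), (0, 0, 1)]
        # One candidate 3-anonymous clustering: all rows in a single cluster.
        print(suppression_cost(rows))   # columns 2 and 3 disagree -> 2 * 3 = 6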

  13. Parameterizing by the Number of Numbers

    NASA Astrophysics Data System (ADS)

    Fellows, Michael R.; Gaspers, Serge; Rosamond, Frances A.

    The usefulness of parameterized algorithmics has often depended on what Niedermeier has called "the art of problem parameterization". In this paper we introduce and explore a novel but general form of parameterization: the number of numbers. Several classic numerical problems, such as Subset Sum, Partition, 3-Partition, Numerical 3-Dimensional Matching, and Numerical Matching with Target Sums, have multisets of integers as input. We initiate the study of parameterizing these problems by the number of distinct integers in the input. We rely on an FPT result for Integer Linear Programming Feasibility to show that all the above-mentioned problems are fixed-parameter tractable when parameterized in this way. In various applied settings, problem inputs often consist in part of multisets of integers or multisets of weighted objects (such as edges in a graph, or jobs to be scheduled). Such number-of-numbers parameterized problems often reduce to subproblems about transition systems of various kinds, parameterized by the size of the system description. We consider several core problems of this kind relevant to number-of-numbers parameterization. Our main hardness result considers the problem: given a non-deterministic Mealy machine M (a finite state automaton outputting a letter on each transition), an input word x, and a census requirement c for the output word specifying how many times each letter of the output alphabet should be written, decide whether there exists a computation of M reading x that outputs a word y that meets the requirement c. We show that this problem is hard for W[1]. If the question is whether there exists an input word x such that a computation of M on x outputs a word that meets c, the problem becomes fixed-parameter tractable.
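
    The flavor of the "number of numbers" parameterization can be shown on Partition: represent the input multiset by its distinct values and multiplicities, and search only over how many copies of each distinct value go to one side. The sketch below is a plain brute force whose search space is governed by the number of distinct integers; the paper itself obtains genuine FPT bounds via Integer Linear Programming Feasibility, which this example does not reproduce.

        from itertools import product
        from collections import Counter

        def partition_by_distinct_values(multiset):
            # Search space is a product over the distinct values only,
            # illustrating the "number of numbers" parameter.
            counts = Counter(multiset)
            total = sum(multiset)
            if total % 2:
                return False
            values, mults = zip(*counts.items())
            for pick in product(*(range(m + 1) for m in mults)):
                if sum(v * c for v, c in zip(values, pick)) == total // 2:
                    return True
            return False

        print(partition_by_distinct_values([3, 3, 2, 2, 2, 4]))  # True: 3+3+2 = 2+2+4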

  14. A multi-stage drop-the-losers design for multi-arm clinical trials.

    PubMed

    Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher

    2017-02-01

    Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.
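
    A minimal simulation makes the fixed-sample-size property concrete: the stage schedule is set in advance, one experimental arm is dropped at each interim, and the total number of patients never varies with the data. Everything below (normal outcomes, effect sizes, stage sizes) is invented for illustration and ignores the paper's error-rate calculations.

        import numpy as np

        rng = np.random.default_rng(1)

        def drop_the_losers(true_means, n_per_stage, stages):
            # Arm 0 is the control and always continues; at each interim the
            # worst remaining experimental arm is dropped, so the total
            # sample size is fixed in advance.
            arms = list(range(len(true_means)))
            totals = np.zeros(len(true_means))
            counts = np.zeros(len(true_means))
            for stage in range(stages):
                for a in arms:
                    x = rng.normal(true_means[a], 1.0, n_per_stage)
                    totals[a] += x.sum()
                    counts[a] += n_per_stage
                experimental = [a for a in arms if a != 0]
                if len(experimental) > 1:        # drop the current worst arm
                    worst = min(experimental, key=lambda a: totals[a] / counts[a])
                    arms.remove(worst)
            return arms, counts.sum()            # survivors, fixed N used

        arms, n_used = drop_the_losers([0.0, 0.1, 0.3, 0.5], n_per_stage=30, stages=3)
        print("final arms:", arms, "total sample size:", int(n_used))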

  15. A fixed energy fixed angle inverse scattering in interior transmission problem

    NASA Astrophysics Data System (ADS)

    Chen, Lung-Hui

    2017-06-01

    We study the inverse acoustic scattering problem in mathematical physics. The problem is to recover the index of refraction of an inhomogeneous medium by measuring the scattered wave fields in the far field. We transform the problem into the interior transmission problem in the study of the Helmholtz equation. We establish an inverse uniqueness result for the scatterer given knowledge of a fixed interior transmission eigenvalue. By examining the expansion of the far-field solution in a series of spherical harmonics, we can uniquely determine the perturbation source for radially symmetric perturbations.

  16. The dependence of halo mass on galaxy size at fixed stellar mass using weak lensing

    NASA Astrophysics Data System (ADS)

    Charlton, Paul J. L.; Hudson, Michael J.; Balogh, Michael L.; Khatri, Sumeet

    2017-12-01

    Stellar mass has been shown to correlate with halo mass, with non-negligible scatter. The stellar mass-size and luminosity-size relationships of galaxies also show significant scatter in galaxy size at fixed stellar mass. It is possible that, at fixed stellar mass and galaxy colour, the halo mass is correlated with galaxy size. Galaxy-galaxy lensing allows us to measure the mean masses of dark matter haloes for stacked samples of galaxies. We extend the analysis of the galaxies in the CFHTLenS catalogue by fitting single Sérsic surface brightness profiles to the lens galaxies in order to recover half-light radius values, allowing us to determine halo masses for lenses according to their size. Comparing our halo masses and sizes to baselines for that stellar mass yields a differential measurement of the halo mass-galaxy size relationship at fixed stellar mass, defined by M_h ∝ r_eff^η at fixed M_*. We find that, on average, our lens galaxies have η = 0.42 ± 0.12, i.e. larger galaxies live in more massive dark matter haloes. The correlation is strongest for high-mass luminous red galaxies. Investigation of this relationship in hydrodynamical simulations suggests that, at fixed M_*, satellite galaxies have a larger η and greater scatter in the M_h-r_eff relationship compared to central galaxies.
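
    The quantity η is the slope of a log-log relation, so a quick way to see what the measurement means is to generate mock sizes and halo masses obeying M_h ∝ r_eff^η with lognormal scatter and recover η by linear regression in log space (all distributions and scatter values below are invented):

        import numpy as np

        rng = np.random.default_rng(2)

        eta_true = 0.42
        r_eff = rng.lognormal(mean=0.0, sigma=0.3, size=500)         # size / baseline
        M_h = 1e12 * r_eff**eta_true * rng.lognormal(0.0, 0.2, 500)  # with scatter

        slope, intercept = np.polyfit(np.log(r_eff), np.log(M_h), 1)
        print(f"recovered eta = {slope:.2f} (input {eta_true})")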

  17. Joint detection and tracking of size-varying infrared targets based on block-wise sparse decomposition

    NASA Astrophysics Data System (ADS)

    Li, Miao; Lin, Zaiping; Long, Yunli; An, Wei; Zhou, Yiyu

    2016-05-01

    The high variability of target size makes small target detection in Infrared Search and Track (IRST) a challenging task. A joint detection and tracking method based on block-wise sparse decomposition is proposed to address this problem. For detection, the infrared image is divided into overlapping blocks, and each block is weighted by the local image complexity and the target existence probabilities. Target-background decomposition is solved by block-wise inexact augmented Lagrange multipliers. For tracking, a labelled multi-Bernoulli (LMB) tracker tracks multiple targets, taking the result of single-frame detection as input, and provides the corresponding target existence probabilities to the detector. Unlike fixed-size methods, the proposed method can accommodate size-varying targets, because it makes no special assumption about the size and shape of small targets. Owing to the exact decomposition, classical target measurements are extended and additional direction information is provided to improve tracking performance. The experimental results show that the proposed method can effectively suppress background clutter and detect and track size-varying targets in infrared images.

  18. Design Methods and Optimization for Morphing Aircraft

    NASA Technical Reports Server (NTRS)

    Crossley, William A.

    2005-01-01

    This report provides a summary of accomplishments made during this research effort. The major accomplishments are in three areas. The first is the use of a multiobjective optimization strategy to help identify potential morphing features; it uses an existing aircraft sizing code to predict the weight, size and performance of several fixed-geometry aircraft that are Pareto-optimal based upon two competing aircraft performance objectives. The second area, titled morphing as an independent variable, formulates the sizing of a morphing aircraft as an optimization problem in which the amount of geometric morphing for various aircraft parameters is included as design variables. This second effort consumed most of the overall effort on the project. The third area involved a more detailed sizing study of a commercial transport aircraft that would incorporate a morphing wing to possibly enable transatlantic point-to-point passenger service.

  19. Stochastic modeling and simulation of reaction-diffusion system with Hill function dynamics.

    PubMed

    Chen, Minghan; Li, Fei; Wang, Shuo; Cao, Young

    2017-03-14

    Stochastic simulation of reaction-diffusion systems presents great challenges for spatiotemporal biological modeling and simulation. One widely used framework for stochastic simulation of reaction-diffusion systems is the reaction-diffusion master equation (RDME). Previous studies have discovered that for the RDME, as the discretization size approaches zero, the reaction time for bimolecular reactions in high-dimensional domains tends to infinity. In this paper, we demonstrate that in a 1D domain, highly nonlinear reaction dynamics given by a Hill function may also change dramatically when the discretization size falls below a critical value. Moreover, we discuss methods to avoid this problem: smoothing over space, fixed-length smoothing over space, and a hybrid method. Our analysis reveals that the switch-like Hill dynamics reduce to a linear function of discretization size when the discretization size is small enough. The three proposed methods can correctly (to a certain precision) simulate Hill function dynamics in the microscopic RDME system.

  20. On the galaxy-halo connection in the EAGLE simulation

    NASA Astrophysics Data System (ADS)

    Desmond, Harry; Mao, Yao-Yuan; Wechsler, Risa H.; Crain, Robert A.; Schaye, Joop

    2017-10-01

    Empirical models of galaxy formation require assumptions about the correlations between galaxy and halo properties. These may be calibrated against observations or inferred from physical models such as hydrodynamical simulations. In this Letter, we use the EAGLE simulation to investigate the correlation of galaxy size with halo properties. We motivate this analysis by noting that the common assumption of angular momentum partition between baryons and dark matter in rotationally supported galaxies overpredicts both the spread in the stellar mass-size relation and the anticorrelation of size and velocity residuals, indicating a problem with the galaxy-halo connection it implies. We find the EAGLE galaxy population to perform significantly better on both statistics, and trace this success to the weakness of the correlations of galaxy size with halo mass, concentration and spin at fixed stellar mass. Using these correlations in empirical models will enable fine-grained aspects of galaxy scalings to be matched.

  1. A Cell-Centered Multigrid Algorithm for All Grid Sizes

    NASA Technical Reports Server (NTRS)

    Gjesdal, Thor

    1996-01-01

    Multigrid methods are optimal; that is, their rate of convergence is independent of the number of grid points, because they use a nested sequence of coarse grids to represent different scales of the solution. This nesting does, however, usually lead to certain restrictions on the permissible size of the discretised problem. In cases where the modeler is free to specify the whole problem, such constraints are of little importance because they can be taken into consideration from the outset. We consider the situation in which there are other competing constraints on the resolution. These restrictions may stem from the physical problem (e.g., if the discretised operator contains experimental data measured on a fixed grid) or from the need to avoid limitations set by the hardware. In this paper we discuss a modification to the cell-centered multigrid algorithm so that it can be used for problems with any resolution. We discuss in particular a coarsening strategy and a choice of intergrid transfer operators that can handle grids with either an even or an odd number of cells. The method is described and applied to linear equations obtained by discretization of two- and three-dimensional second-order elliptic PDEs.
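
    The essential bookkeeping for such a flexible hierarchy is that an odd cell count coarsens to (n + 1) / 2 rather than requiring n to be a power of two. A tiny sketch of the resulting grid schedule (the cutoff value is arbitrary, and the paper's transfer operators are not reproduced here):

        def coarsening_schedule(n, coarsest=3):
            # An even count halves exactly; an odd count coarsens to
            # (n + 1) // 2, so no power-of-two restriction is imposed.
            sizes = [n]
            while sizes[-1] > coarsest:
                sizes.append((sizes[-1] + 1) // 2)
            return sizes

        print(coarsening_schedule(500))   # [500, 250, 125, 63, 32, 16, 8, 4, 2]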

  2. Fully polynomial-time approximation scheme for a special case of a quadratic Euclidean 2-clustering problem

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Khandeev, V. I.

    2016-02-01

    The strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters of given sizes (cardinalities), minimizing the sum (over both clusters) of the intracluster sums of squared distances from the elements of the clusters to their centers, is considered. It is assumed that the center of one of the sought clusters is specified at a desired (arbitrary) point of space (without loss of generality, at the origin), while the center of the other one is unknown and is determined as the mean value over all elements of that cluster. It is shown that, unless P = NP, there is no fully polynomial-time approximation scheme for this problem in general, whereas such a scheme is constructed for the case of a fixed space dimension.
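
    The objective is simple to state even though optimizing it is hard: points assigned to the first cluster are charged their squared distance to the origin, and the rest their squared distance to their own cluster mean. The exhaustive sketch below (toy points invented) is exponential in the input and is only meant to make the objective concrete; the paper's contribution is an approximation scheme for fixed dimension.

        from itertools import combinations
        import numpy as np

        def two_cluster_cost(points, size_origin):
            # One cluster's center is fixed at the origin, the other's is
            # its own mean; minimize the total sum of squared distances.
            pts = np.asarray(points, dtype=float)
            n = len(pts)
            best = (np.inf, None)
            for origin_idx in combinations(range(n), size_origin):
                mask = np.zeros(n, dtype=bool)
                mask[list(origin_idx)] = True
                a, b = pts[mask], pts[~mask]
                cost = (a**2).sum() + ((b - b.mean(axis=0))**2).sum()
                best = min(best, (cost, origin_idx))
            return best

        cost, idx = two_cluster_cost([(0, 1), (1, 0), (5, 5), (6, 5), (4, 4)], 2)
        print(f"optimal cost {cost:.2f}, origin-centered cluster: points {idx}")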

  3. A new approach on auxiliary vehicle assignment in capacitated location routing problem

    NASA Astrophysics Data System (ADS)

    Bashiri, Mahdi; Rasoulinejad, Zeinab; Fallahzade, Ehsan

    2016-03-01

    The location routing problem (LRP) considers depot location and vehicle routing decisions simultaneously. In the classic LRP, the number of customers on each route is limited by the capacity of the vehicle. In this paper a capacitated LRP model with auxiliary vehicle assignment is presented in which the length of each route is not restricted by the main vehicle's capacity. Two kinds of vehicles are considered: main vehicles with higher capacity and fixed cost, and auxiliary vehicles with lower capacity and fixed cost. The auxiliary vehicles can be added to the transportation system as an alternative strategy to overcome the capacity limitations; they are used only to transfer goods from depots to the main vehicles and cannot serve the customers by themselves. To show the applicability of the proposed model, numerical examples derived from well-known instances are used. Moreover, the model has been solved by several meta-heuristics for large-sized instances. The results show the efficiency of the proposed model and of the solution approach, compared with the classic model and an exact solution approach, respectively.

  4. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…

  5. The NASA Altitude Wind Tunnel (AWT): Its role in advanced icing research and development

    NASA Technical Reports Server (NTRS)

    Blaha, B. J.; Shaw, R. J.

    1985-01-01

    Currently experimental aircraft icing research is severely hampered by limitations of ground icing simulation facilities. Existing icing facilities do not have the size, speed, altitude, and icing environment simulation capabilities to allow accurate studies to be made of icing problems occurring for high speed fixed wing aircraft and rotorcraft. Use of the currently dormant NASA Lewis Altitude Wind Tunnel (AWT), as a proposed high speed propulsion and adverse weather facility, would allow many such problems to be studied. The characteristics of the AWT related to adverse weather simulation and in particular to icing simulation are discussed, and potential icing research programs using the AWT are also included.

  6. Efficient Integration of Coupled Electrical-Chemical Systems in Multiscale Neuronal Simulations

    PubMed Central

    Brocke, Ekaterina; Bhalla, Upinder S.; Djurfeldt, Mikael; Hellgren Kotaleski, Jeanette; Hanke, Michael

    2016-01-01

    Multiscale modeling and simulation in neuroscience is gaining scientific attention due to its growing importance and unexplored capabilities. For instance, it can help to acquire a better understanding of biological phenomena that have important features at multiple scales of time and space, such as synaptic plasticity, memory formation and modulation, and homeostasis. There are several ways to organize multiscale simulations depending on the scientific problem and the system to be modeled. One possibility is to simulate different components of a multiscale system simultaneously and exchange data when required. The latter may become a challenging task for several reasons. First, the components of a multiscale system usually span different spatial and temporal scales, such that rigorous analysis of possible coupling solutions is required. Second, the components can be defined by different mathematical formalisms. For certain classes of problems a number of coupling mechanisms have been proposed and successfully used, but a strict mathematical theory is missing in many cases. Recent work in the field has so far not investigated artifacts that may arise during coupled integration of different approximation methods. Moreover, in neuroscience, the coupling of widely used fixed-step-size numerical solvers may lead to unexpected inefficiency. In this paper we address the question of possible numerical artifacts that can arise during the integration of a coupled system. We develop an efficient strategy to couple the components comprising a multiscale test problem in neuroscience, introducing a coupling method based on the second-order backward differentiation formula (BDF2) numerical approximation. The method uses adaptive step size integration with an error estimate proposed by Skelboe (2000), and shows a significant advantage over the conventional fixed-step-size solvers used in neuroscience for similar problems. We explore different coupling strategies that define the organization of computations between system components, and study the importance of an appropriate approximation of exchanged variables during the simulation. The analysis shows a substantial impact of these aspects on the solution accuracy in our multiscale neuroscientific test problem. We believe that the ideas presented in the paper may contribute essentially to the development of a robust and efficient framework for multiscale brain modeling and simulation in neuroscience. PMID:27672364
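
    The scheme at the heart of the coupling method is the second-order backward differentiation formula, y_{n+1} = (4/3)y_n - (1/3)y_{n-1} + (2/3)h f(y_{n+1}), whose implicit equation must be solved at every step. The sketch below is a fixed-step scalar version with a Newton solve, using an invented stiff test problem; the paper's adaptive step-size control and error estimation are omitted.

        import numpy as np

        def bdf2(f, dfdy, y0, y1, h, steps):
            # Fixed-step BDF2; the implicit equation is solved by Newton.
            ys = [y0, y1]
            for _ in range(steps - 1):
                yn, ynm1 = ys[-1], ys[-2]
                y = yn                            # Newton initial guess
                for _ in range(20):
                    g = y - (4*yn - ynm1)/3 - (2*h/3)*f(y)
                    y -= g / (1 - (2*h/3)*dfdy(y))
                    if abs(g) < 1e-12:
                        break
                ys.append(y)
            return np.array(ys)

        # Stiff linear test problem y' = -50*y; explicit Euler would blow up
        # at this step size, while BDF2 decays stably.
        lam, h = -50.0, 0.05
        ys = bdf2(lambda y: lam*y, lambda y: lam, 1.0, np.exp(lam*h), h, steps=40)
        print(ys[-1], np.exp(lam*h*40))   # stable decay despite stiffness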

  7. Efficient Integration of Coupled Electrical-Chemical Systems in Multiscale Neuronal Simulations.

    PubMed

    Brocke, Ekaterina; Bhalla, Upinder S; Djurfeldt, Mikael; Hellgren Kotaleski, Jeanette; Hanke, Michael

    2016-01-01

    Multiscale modeling and simulation in neuroscience is gaining scientific attention due to its growing importance and unexplored capabilities. For instance, it can help to acquire a better understanding of biological phenomena that have important features at multiple scales of time and space, such as synaptic plasticity, memory formation and modulation, and homeostasis. There are several ways to organize multiscale simulations depending on the scientific problem and the system to be modeled. One possibility is to simulate different components of a multiscale system simultaneously and exchange data when required. The latter may become a challenging task for several reasons. First, the components of a multiscale system usually span different spatial and temporal scales, such that rigorous analysis of possible coupling solutions is required. Second, the components can be defined by different mathematical formalisms. For certain classes of problems a number of coupling mechanisms have been proposed and successfully used, but a strict mathematical theory is missing in many cases. Recent work in the field has so far not investigated artifacts that may arise during coupled integration of different approximation methods. Moreover, in neuroscience, the coupling of widely used fixed-step-size numerical solvers may lead to unexpected inefficiency. In this paper we address the question of possible numerical artifacts that can arise during the integration of a coupled system. We develop an efficient strategy to couple the components comprising a multiscale test problem in neuroscience, introducing a coupling method based on the second-order backward differentiation formula (BDF2) numerical approximation. The method uses adaptive step size integration with an error estimate proposed by Skelboe (2000), and shows a significant advantage over the conventional fixed-step-size solvers used in neuroscience for similar problems. We explore different coupling strategies that define the organization of computations between system components, and study the importance of an appropriate approximation of exchanged variables during the simulation. The analysis shows a substantial impact of these aspects on the solution accuracy in our multiscale neuroscientific test problem. We believe that the ideas presented in the paper may contribute essentially to the development of a robust and efficient framework for multiscale brain modeling and simulation in neuroscience.

  8. Implant/tooth-connected restorations utilizing screw-fixed attachments: a survey of 3,096 sites in function for 3 to 14 years.

    PubMed

    Fugazzotto, P A; Kirsch, A; Ackermann, K L; Neuendorff, G

    1999-01-01

    Numerous problems have been reported following various therapies used to attach natural teeth to implants beneath a fixed prosthesis. This study documents the results of 843 consecutive patients treated with 1,206 natural tooth/implant-supported prostheses utilizing 3,096 screw-fixed attachments. After 3 to 14 years in function, only 9 intrusion problems were noted. All problems were associated with fractured or lost screws. This report demonstrates the efficacy of such a treatment approach when a natural tooth/implant-supported fixed prosthesis is contemplated.

  9. Meshless method for solving fixed boundary problem of plasma equilibrium

    NASA Astrophysics Data System (ADS)

    Imazawa, Ryota; Kawano, Yasunori; Itami, Kiyoshi

    2015-07-01

    This study solves the Grad-Shafranov equation with a fixed plasma boundary by utilizing a meshless method for the first time. Previous studies have utilized the finite element method (FEM) to solve for an equilibrium inside the fixed separatrix. In order to avoid the difficulties of FEM (such as mesh generation, coding difficulty, and high computational cost), this study focuses on meshless methods, especially the RBF-MFS and Kansa's method, to solve the fixed boundary problem. The results showed that the CPU time of the meshless methods was ten to one hundred times shorter than that of FEM for the same accuracy.
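
    The idea behind Kansa-type collocation is to expand the unknown in radial basis functions centred on scattered points and enforce the PDE at interior points and the boundary condition at boundary points, with no mesh at all. A minimal 1D sketch on a Poisson problem (a stand-in for the Grad-Shafranov operator; the shape parameter, point count and test case are all invented):

        import numpy as np

        # Solve u'' = f on [0,1] with u(0) = u(1) = 0 using multiquadrics.
        # Test case: f(x) = -pi^2 sin(pi x), exact solution u(x) = sin(pi x).
        def multiquadric(r, c=0.2):
            return np.sqrt(r**2 + c**2)

        def multiquadric_dxx(x, xj, c=0.2):
            # d^2/dx^2 sqrt((x-xj)^2 + c^2) = c^2 / ((x-xj)^2 + c^2)^(3/2)
            return c**2 / ((x - xj)**2 + c**2)**1.5

        n = 25
        x = np.linspace(0.0, 1.0, n)      # collocation centers (no mesh needed)
        f = -np.pi**2 * np.sin(np.pi * x)

        A = np.empty((n, n))
        rhs = np.empty(n)
        for i in range(n):
            if i == 0 or i == n - 1:                  # boundary rows: u = 0
                A[i] = multiquadric(np.abs(x[i] - x))
                rhs[i] = 0.0
            else:                                     # interior rows: u'' = f
                A[i] = multiquadric_dxx(x[i], x)
                rhs[i] = f[i]

        lam = np.linalg.solve(A, rhs)                 # RBF expansion coefficients
        u = multiquadric(np.abs(x[:, None] - x[None, :])) @ lam
        print(np.abs(u - np.sin(np.pi * x)).max())    # small collocation error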

  10. Relationship of follicle size and concentrations of estradiol among cows exhibiting or not exhibiting estrus during a fixed-time AI protocol

    USDA-ARS?s Scientific Manuscript database

    Cows exhibiting estrus near the time of fixed-time AI had greater pregnancy success than cows showing no estrus. The objective of this study was to determine the relationship between follicle size and peak estradiol concentration between cows that did or did not exhibit estrus during a fixed-time AI...

  11. [Survival strategy of photosynthetic organisms. 1. Variability of the extent of light-harvesting pigment aggregation as a structural factor optimizing the function of oligomeric photosynthetic antenna. Model calculations].

    PubMed

    Fetisova, Z G

    2004-01-01

    In accordance with our concept of rigorous optimization of the photosynthetic machinery by a functional criterion, this series of papers continues a purposeful search in natural photosynthetic units (PSUs) for the basic organizational principles that we predicted theoretically for optimal model light-harvesting systems. This approach allowed us to determine the basic principles of organization for a PSU of any fixed size. The present series deals with the structural optimization of a light-harvesting antenna of variable size, controlled in vivo by the light intensity during the growth of the organism; this accentuates the problem of antenna structure optimization because the optimization requirements become more stringent as the PSU increases in size. In this work, using mathematical modeling of the functioning of natural PSUs, we have shown that the aggregation of pigments in a model light-harvesting antenna, besides being one of the universal optimizing factors, also allows the antenna efficiency to be controlled when the extent of pigment aggregation is a variable parameter. In this case, the efficiency of the antenna increases with the size of the elementary antenna aggregate, ensuring high PSU efficiency irrespective of PSU size; i.e., variation in the extent of pigment aggregation, controlled by the size of the light-harvesting antenna, is biologically expedient.

  12. Relationship of follicle size and concentrations of estradiol among cows that do and do not exhibit estrus during a fixed-time AI protocol

    USDA-ARS?s Scientific Manuscript database

    Cows that exhibited estrus around the time of fixed-time AI had greater pregnancy success compared to cows that did not. The objective of this study was to determine the relationship between follicle size and peak estradiol concentration between cows that did or did not exhibit estrus during a fixed...

  13. Optimal Integration of Departures and Arrivals in Terminal Airspace

    NASA Technical Reports Server (NTRS)

    Xue, Min; Zelinski, Shannon Jean

    2013-01-01

    Coordination of operations with spatially and temporally shared resources, such as route segments, fixes, and runways, improves the efficiency of terminal airspace management. Problems in this category are, in general, computationally difficult compared to conventional scheduling problems. This paper presents a fast-time algorithm formulation using a non-dominated sorting genetic algorithm (NSGA). It was first applied to a test problem introduced in the existing literature; experiments showed that the new method can solve the 20-aircraft problem in fast time with a 65% (440 second) delay reduction using shared departure fixes. In order to test its application to a more realistic and complicated problem, the NSGA algorithm was then applied to LAX terminal airspace, where interactions between 28% of LAX arrivals and 10% of LAX departures are resolved by spatial separation in current operations, which may introduce unnecessary delays. In this work, three types of separation - spatial, temporal, and hybrid - were formulated using the new algorithm, where hybrid separation combines temporal and spatial separation. Results showed that although temporal separation achieved less delay than spatial separation with a small uncertainty buffer, spatial separation outperformed temporal separation when the uncertainty buffer was increased. Hybrid separation introduced much less delay than both the spatial and temporal approaches. For a total of 15 interacting departures and arrivals, when compared to spatial separation, the delay reduction of hybrid separation varied between 11% (3.1 minutes) and 64% (10.7 minutes) as the uncertainty buffer ranged from 0 to 60 seconds. Furthermore, as a comparison with the NSGA algorithm, a First-Come-First-Served heuristic was implemented for the hybrid separation; experiments showed that the results from the NSGA algorithm have 9% to 42% less delay than the heuristic across the range of uncertainty buffer sizes.

  14. A restricted Steiner tree problem is solved by Geometric Method II

    NASA Astrophysics Data System (ADS)

    Lin, Dazhi; Zhang, Youlin; Lu, Xiaoxu

    2013-03-01

    The minimum Steiner tree problem has a wide application background in, for example, transportation systems, communication networks, pipeline design and VLSI. Unfortunately, the computational complexity of the problem is NP-hard, so it is common to consider restricted special cases. In this paper, we first put forward a restricted Steiner tree problem in which the fixed vertices all lie on the same side of a line L, and the task is to find a vertex on L such that the length of the tree is minimal. By the definition and the complexity of the Steiner tree problem, the complexity of this restricted problem is also NP-complete. In Part I we considered the restricted Steiner tree problem with two fixed vertices; here we naturally consider the case of three fixed vertices, again using the geometric method to solve the problem.
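
    A stripped-down version of the restriction can be computed directly: if the tree is taken to be a star joining the fixed vertices to a single vertex (t, 0) on the line L (a simplification invented for illustration, not the paper's construction), the total length is convex in t, so ternary search finds the optimum.

        import math

        def best_vertex_on_line(points, lo=-100.0, hi=100.0, tol=1e-9):
            # All fixed vertices lie above the x-axis (the line L); place the
            # star center (t, 0) on L to minimize the total edge length,
            # which is convex in t.
            def length(t):
                return sum(math.hypot(px - t, py) for px, py in points)
            while hi - lo > tol:
                m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
                if length(m1) < length(m2):
                    hi = m2
                else:
                    lo = m1
            t = (lo + hi) / 2
            return t, length(t)

        t, total = best_vertex_on_line([(0.0, 1.0), (2.0, 1.0), (4.0, 3.0)])
        print(f"vertex on L at x = {t:.4f}, tree length = {total:.4f}")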

  15. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hero, Alfred O.; Rajaratnam, Bala

    When can reliable inference be drawn in the ‘‘Big Data’’ context? This article presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the data set is often variable rich but sample starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for ‘‘Big Data.’’ Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; and 3) the purely high-dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.

  16. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

    PubMed Central

    Hero, Alfred O.; Rajaratnam, Bala

    2015-01-01

    When can reliable inference be drawn in the “Big Data” context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for “Big Data”. Sample complexity however has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks. PMID:27087700

  17. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

    DOE PAGES

    Hero, Alfred O.; Rajaratnam, Bala

    2015-12-09

    When can reliable inference be drawn in the ‘‘Big Data’’ context? This article presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the data set is often variable rich but sample starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for ‘‘Big Data.’’ Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; and 3) the purely high-dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.

  18. Implicit integration methods for dislocation dynamics

    DOE PAGES

    Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...

    2015-01-20

    In dislocation dynamics, strain hardening simulations require integrating stiff systems of ordinary differential equations in time, with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strain at strain rates comparable to experimental observations. This paper investigates the viability of high-order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically achieved with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
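
    The nonlinear solve inside one implicit step is where the fixed point/Newton distinction matters. A scalar sketch for a backward Euler step y = y_n + h f(y), with a toy linear f and invented parameters (the paper treats large dislocation systems and accelerated variants), contrasts the iteration counts of the two approaches:

        def step_fixed_point(f, yn, h, tol=1e-12, max_it=500):
            # Plain fixed-point iteration: y <- yn + h*f(y). Converges only
            # when the map is a contraction, and then slowly.
            y, its = yn, 0
            while its < max_it:
                y_new = yn + h * f(y)
                its += 1
                if abs(y_new - y) < tol:
                    break
                y = y_new
            return y, its

        def step_newton(f, dfdy, yn, h, tol=1e-12, max_it=50):
            # Newton on g(y) = y - yn - h*f(y): quadratic convergence.
            y, its = yn, 0
            while its < max_it:
                g = y - yn - h * f(y)
                y -= g / (1.0 - h * dfdy(y))
                its += 1
                if abs(g) < tol:
                    break
            return y, its

        f, dfdy = lambda y: -8.0 * y, lambda y: -8.0
        print(step_fixed_point(f, 1.0, h=0.1))   # contraction factor 0.8: slow
        print(step_newton(f, dfdy, 1.0, h=0.1))  # converges in a couple of steps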

  19. Autonomous reinforcement learning with experience replay.

    PubMed

    Wawrzyński, Paweł; Tanwani, Ajay Kumar

    2013-05-01

    This paper considers the issues of efficiency and autonomy that are required to make reinforcement learning suitable for real-life control tasks. A real-time reinforcement learning algorithm is presented that repeatedly adjusts the control policy with the use of previously collected samples, and autonomously estimates the appropriate step-sizes for the learning updates. The algorithm is based on the actor-critic with experience replay whose step-sizes are determined on-line by an enhanced fixed point algorithm for on-line neural network training. An experimental study with simulated octopus arm and half-cheetah demonstrates the feasibility of the proposed algorithm to solve difficult learning control problems in an autonomous way within reasonably short time.
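
    Experience replay itself is a simple mechanism: store transitions as the agent acts, then repeatedly sample past transitions to drive further policy adjustments. A minimal sketch (the paper's actor-critic updates and step-size estimation are not reproduced, and the interaction loop below is fake data):

        import random
        from collections import deque

        class ReplayBuffer:
            # Fixed-capacity store of (state, action, reward, next_state)
            # transitions; old experience is evicted automatically.
            def __init__(self, capacity=10000):
                self.buffer = deque(maxlen=capacity)

            def push(self, state, action, reward, next_state):
                self.buffer.append((state, action, reward, next_state))

            def sample(self, batch_size):
                return random.sample(self.buffer, min(batch_size, len(self.buffer)))

        buf = ReplayBuffer()
        for t in range(100):                  # fake interaction loop
            buf.push(state=t, action=t % 3, reward=-abs(t % 3 - 1), next_state=t + 1)
        batch = buf.sample(8)                 # replayed samples for one update
        print(batch[:2])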

  20. Hybrid Pareto artificial bee colony algorithm for multi-objective single machine group scheduling problem with sequence-dependent setup times and learning effects.

    PubMed

    Yue, Lei; Guan, Zailin; Saif, Ullah; Zhang, Fei; Wang, Hao

    2016-01-01

    Group scheduling is significant for efficient and cost-effective production systems. However, setup times exist between groups, and these need to be reduced by sequencing the groups efficiently. The current research addresses a sequence-dependent group scheduling problem with the aim of simultaneously minimizing the makespan and the total weighted tardiness. In most production scheduling problems, the processing time of jobs is assumed to be fixed; in reality, the actual processing time of jobs may be reduced due to the "learning effect". The integration of sequence-dependent group scheduling with learning effects has rarely been considered in the literature. Therefore, the current research considers a single-machine group scheduling problem with sequence-dependent setup times and learning effects simultaneously. A novel hybrid Pareto artificial bee colony algorithm (HPABC) incorporating some steps of a genetic algorithm is proposed to obtain Pareto solutions for this problem. Furthermore, five sizes of test problems (small, small-medium, medium, large-medium, large) are tested using the proposed HPABC, and the Taguchi method is used to tune its effective parameters for each problem category. The performance of HPABC is compared with three well-known multi-objective optimization algorithms: the improved strength Pareto evolutionary algorithm (SPEA2), the non-dominated sorting genetic algorithm II (NSGAII) and particle swarm optimization (PSO). Results indicate that HPABC outperforms SPEA2, NSGAII and PSO, giving better Pareto-optimal solutions in terms of diversity and quality for almost all instances across the different problem sizes.
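
    The Pareto filtering that all of the compared algorithms perform is easy to sketch for the two minimized objectives: keep exactly the schedules not dominated by any other. The (makespan, tardiness) pairs below are invented data.

        def pareto_front(solutions):
            # A solution is kept unless some other solution is at least as
            # good on both objectives (both are minimized) and not identical.
            front = []
            for s in solutions:
                if not any(d[0] <= s[0] and d[1] <= s[1] and d != s
                           for d in solutions):
                    front.append(s)
            return front

        candidates = [(120, 45), (110, 60), (130, 30), (115, 50), (110, 55)]
        print(pareto_front(candidates))   # (110, 60) is dominated by (110, 55)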

  1. Fixed or Rotating Night Shift Work Undertaken by Women: Implications for Fertility and Miscarriage.

    PubMed

    Fernandez, Renae C; Marino, Jennifer L; Varcoe, Tamara J; Davis, Scott; Moran, Lisa J; Rumbold, Alice R; Brown, Hannah M; Whitrow, Melissa J; Davies, Michael J; Moore, Vivienne M

    2016-03-01

    This review summarizes the evidence concerning effects of night shift work on women's reproductive health, specifically difficulty in conceiving and miscarriage. We distinguish between fixed night shift and rotating night shift, as the population subgroups exposed, the social and biological mechanisms, and the magnitude of effects are likely to differ; of note, women working fixed night shift are known to have high tolerance for this schedule. We identified two relevant systematic reviews with meta-analyses and five additional studies. Night shift work may give rise to menstrual cycle disturbances, but effect sizes are imprecise. Endometriosis may be elevated in night shift workers, but evidence is only preliminary. Adequate data are lacking to assess associations between night shift work and infertility or time to pregnancy. The weight of evidence begins to point to working at night, whether in fixed or rotating shifts, as a risk factor for miscarriage. There are many methodological problems with this literature, with substantial variation in the definitions of night shift and schedule types making comparisons between studies difficult and pooling across studies questionable. Nevertheless, there appears to be grounds for caution and counselling where women have concerns about night shift work and their reproductive health.

  2. Detecting and correcting for publication bias in meta-analysis - A truncated normal distribution approach.

    PubMed

    Zhu, Qiaohao; Carriere, K C

    2016-01-01

    Publication bias can significantly limit the validity of meta-analysis when trying to draw conclusions about a research question from independent studies. Most research on the detection of and correction for publication bias in meta-analysis focuses on funnel plot-based methodologies or selection models. In this paper, we formulate publication bias as a truncated distribution problem and propose new parametric solutions. We develop methodologies for estimating the underlying overall effect size and the severity of publication bias. We distinguish two major situations, in which publication bias may be induced by: (1) small effect size or (2) large p-value. We consider both fixed- and random-effects models, and derive estimators for the overall mean and the truncation proportion. These estimators are obtained using maximum likelihood estimation and the method of moments under fixed- and random-effects models, respectively. We carried out extensive simulation studies to evaluate the performance of our methodology and to compare it with the non-parametric trim-and-fill method based on the funnel plot. We find that our methods based on the truncated normal distribution perform consistently well, both in detecting and in correcting publication bias, under various situations.
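
    The truncation idea can be sketched directly: suppose true study effects are normal but only effects above a known cutoff are published; fitting the truncated-normal likelihood to the published effects recovers the underlying mean that the naive average overstates. All parameters below are invented, and the known-cutoff setup is a simplification of the paper's models.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(3)

        mu_true, sigma_true, a = 0.1, 0.4, 0.2
        effects = rng.normal(mu_true, sigma_true, 20000)
        published = effects[effects > a]          # what the meta-analysis sees

        def neg_log_lik(params):
            mu, log_sigma = params
            sigma = np.exp(log_sigma)
            z = (published - mu) / sigma
            # truncated-normal log density: log phi(z) - log sigma - log P(X > a)
            tail = norm.sf((a - mu) / sigma)
            return -(norm.logpdf(z) - np.log(sigma) - np.log(tail)).sum()

        res = minimize(neg_log_lik, x0=[published.mean(), 0.0], method="Nelder-Mead")
        print("naive mean of published effects:", published.mean().round(3))
        print("MLE of underlying mean:", res.x[0].round(3))   # close to mu_true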

  3. Finite-Time and Fixed-Time Cluster Synchronization With or Without Pinning Control.

    PubMed

    Liu, Xiwei; Chen, Tianping

    2018-01-01

    In this paper, the finite-time and fixed-time cluster synchronization problem for complex networks, with or without pinning control, is discussed. Finite-time (or fixed-time) synchronization has been a hot topic in recent years; it means that the network can achieve synchronization in finite time, where the settling time depends on the initial values for finite-time synchronization (or is bounded by a constant for any initial values for fixed-time synchronization). To realize finite-time and fixed-time cluster synchronization, some simple distributed protocols with or without pinning control are designed, and their effectiveness is rigorously proved. Several sufficient criteria are also obtained to clarify the effects of coupling terms on finite-time and fixed-time cluster synchronization. In particular, when the cluster number is one, cluster synchronization becomes the complete synchronization problem; when the network has only one node, the coupling term between nodes disappears and the synchronization problem reduces to the simplest master-slave case, which also includes the stability problem for nonlinear systems like neural networks. All these cases are discussed as well. Finally, numerical simulations are presented to demonstrate the correctness of the obtained theoretical results.

  4. Transforming graph states using single-qubit operations.

    PubMed

    Dahlberg, Axel; Wehner, Stephanie

    2018-07-13

    Stabilizer states form an important class of states in quantum information, and are of central importance in quantum error correction. Here, we provide an algorithm for deciding whether one stabilizer (target) state can be obtained from another stabilizer (source) state by single-qubit Clifford operations (LC), single-qubit Pauli measurements (LPM) and classical communication (CC) between sites holding the individual qubits. What is more, we provide a recipe to obtain the sequence of LC+LPM+CC operations which prepare the desired target state from the source state, and show how these operations can be applied in parallel to reach the target state in constant time. Our algorithm has applications in quantum networks and quantum computing, and can also serve as a design tool, for example, to find transformations between quantum error correcting codes. We provide a software implementation of our algorithm that makes this tool easier to apply. A key insight leading to our algorithm is to show that the problem is equivalent to one in graph theory: deciding whether some graph G' is a vertex-minor of another graph G. The vertex-minor problem is, in general, NP-Complete, but can be solved efficiently on graphs which are not too complex. A measure of the complexity of a graph is the rank-width, which equals the Schmidt-rank width of a subclass of stabilizer states called graph states, and thus intuitively is a measure of entanglement. Here, we show that the vertex-minor problem can be solved in time O(|G|^3), where |G| is the size of the graph G, whenever the rank-width of G and the size of G' are bounded. Our algorithm is based on techniques by Courcelle for solving fixed-parameter tractable problems, where the relevant fixed parameter here is the rank-width. The second half of this paper serves as an accessible, but far from exhaustive, introduction to these concepts, which could be useful for many other problems in quantum information.

  5. A Study of the Errors of the Fixed-Node Approximation in Diffusion Monte Carlo

    NASA Astrophysics Data System (ADS)

    Rasch, Kevin M.

    Quantum Monte Carlo techniques stochastically evaluate integrals to solve the many-body Schrodinger equation. QMC algorithms scale favorably in the number of particles simulated and enjoy applicability to a wide range of quantum systems. Advances in the core algorithms of the method and their implementations, paired with the steady development of computational assets, have carried the applicability of QMC beyond analytically treatable systems, such as the homogeneous electron gas, and have extended QMC's domain to atoms, molecules, and solids containing as many as several hundred electrons. FN-DMC projects out the ground state of a wave function subject to the constraints imposed by our ansatz for the problem. The constraints imposed by the fixed-node approximation are poorly understood. One key step in developing any scientific theory or method is to qualify where the theory is inaccurate and to quantify how erroneous it is under those circumstances. I investigate how the fixed-node errors evolve with changing charge density, system size, and effective core potentials. I begin by studying a simple system for which the nodes of the trial wave function can be solved almost exactly. By comparing two trial wave functions, a single-determinant wave function flawed in a known way and a nearly exact wave function, I show that the fixed-node error increases when the charge density is increased. Next, I investigate a sequence of lithium systems increasing in size from a single atom, to small molecules, up to the bulk metal. Over these systems, FN-DMC calculations consistently recover 95% or more of the correlation energy of the system. Given this accuracy, I make a prediction for the binding energy of the Li4 molecule. Last, I turn to analyzing the fixed-node error in first- and second-row atoms and their molecules. With the appropriate pseudo-potentials, these systems are iso-electronic and show similar geometries and states. One would expect that, with identical numbers of particles involved in the calculation, the errors in the respective total energies of two iso-electronic species would be quite similar. I observe, instead, that the first-row atoms and their molecules have errors larger by a factor of two or more. I identify a cause for this difference between iso-electronic species. The fixed-node errors in all of these cases are calculated by careful comparison to experimental results, showing FN-DMC to be a robust tool for understanding quantum systems and also a method for new investigations into the nature of many-body effects.

  6. Accounting for missing data in the estimation of contemporary genetic effective population size (N_e).

    PubMed

    Peel, D; Waples, R S; Macbeth, G M; Do, C; Ovenden, J R

    2013-03-01

    Theoretical models are often applied to population genetic data sets without fully considering the effect of missing data. Researchers can deal with missing data by removing individuals that have failed to yield genotypes and/or by removing loci that have failed to yield allelic determinations, but despite their best efforts, most data sets still contain some missing data. As a consequence, realized sample size differs among loci, and this poses a problem for unbiased methods that must explicitly account for random sampling error. One commonly used solution for the calculation of contemporary effective population size (N_e) is to calculate the effective sample size as an unweighted mean or harmonic mean across loci. This is not ideal because it fails to account for the fact that loci with different numbers of alleles have different information content. Here we consider this problem for genetic estimators of contemporary effective population size (N_e). To evaluate bias and precision of several statistical approaches for dealing with missing data, we simulated populations with known N_e and various degrees of missing data. Across all scenarios, one method of correcting for missing data (fixed-inverse variance-weighted harmonic mean) consistently performed the best for both single-sample and two-sample (temporal) methods of estimating N_e and outperformed some methods currently in widespread use. The approach adopted here may be a starting point to adjust other population genetics methods that include per-locus sample size components.

  7. From cat's eyes to disjoint multicellular natural convection flow in tall tilted cavities

    NASA Astrophysics Data System (ADS)

    Nicolás, Alfredo; Báez, Elsa; Bermúdez, Blanca

    2011-07-01

    Numerical results of two-dimensional natural convection problems, in air-filled tall cavities, are reported to study how the cat's eyes flow changes as two parameters vary, the aspect ratio A and the angle of inclination ϕ of the cavity, with the Rayleigh number Ra mostly fixed; explicitly, the ranges of variation are 12⩽A⩽20 and 0°⩽ϕ⩽270°, about Ra=1.1×10. A novel contribution of this work is the transition from the changes in the cat's eyes flow, as A varies, to a disjoint multicellular flow, as ϕ varies. These flows may be modeled by the unsteady Boussinesq approximation in stream function and vorticity variables, which is solved with a fixed-point iterative process applied to the nonlinear elliptic system that results after time discretization. The validation of the results relies on mesh-size and time-step independence studies.

  8. Connections between the Sznajd model with general confidence rules and graph theory

    NASA Astrophysics Data System (ADS)

    Timpanaro, André M.; Prado, Carmen P. C.

    2012-10-01

    The Sznajd model is a sociophysics model that is used to model opinion propagation and consensus formation in societies. Its main feature is that its rules favor bigger groups of agreeing people. In a previous work, we generalized the bounded confidence rule in order to model biases and prejudices in discrete opinion models. In that work, we applied this modification to the Sznajd model and presented some preliminary results. The present work extends what we did in that paper. We present results linking many of the properties of the mean-field fixed points, with only a few qualitative aspects of the confidence rule (the biases and prejudices modeled), finding an interesting connection with graph theory problems. More precisely, we link the existence of fixed points with the notion of strongly connected graphs and the stability of fixed points with the problem of finding the maximal independent sets of a graph. We state these results and present comparisons between the mean field and simulations in Barabási-Albert networks, followed by the main mathematical ideas and appendices with the rigorous proofs of our claims and some graph theory concepts, together with examples. We also show that there is no qualitative difference in the mean-field results if we require that a group of size q>2, instead of a pair, of agreeing agents be formed before they attempt to convince other sites (for the mean field, this would coincide with the q-voter model).
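
    As a small illustration of the two graph-theoretic notions invoked above, the sketch below builds a hypothetical confidence-rule digraph with networkx, checks strong connectivity, and lists maximal independent sets via maximal cliques of the complement graph; the rule itself is invented, not taken from the paper.

```python
import networkx as nx

# Hypothetical confidence rule on three opinions, written as a directed
# graph: an edge (a, b) means agents agreeing on opinion a can convince
# a neighbour holding opinion b.
rule_edges = [(0, 1), (1, 2), (2, 0), (0, 2)]
G = nx.DiGraph(rule_edges)

# The abstract ties the existence of fixed points to strong connectivity.
print(nx.is_strongly_connected(G))          # True for this rule

# Maximal independent sets of the undirected graph are exactly the
# maximal cliques of its complement, a standard equivalence.
complement = nx.complement(G.to_undirected())
print(list(nx.find_cliques(complement)))    # [[0], [1], [2]] here
```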

  9. On the galaxy–halo connection in the EAGLE simulation

    DOE PAGES

    Desmond, Harry; Mao, Yao -Yuan; Wechsler, Risa H.; ...

    2017-06-13

    Empirical models of galaxy formation require assumptions about the correlations between galaxy and halo properties. These may be calibrated against observations or inferred from physical models such as hydrodynamical simulations. In this Letter, we use the EAGLE simulation to investigate the correlation of galaxy size with halo properties. We motivate this analysis by noting that the common assumption of angular momentum partition between baryons and dark matter in rotationally supported galaxies overpredicts both the spread in the stellar mass–size relation and the anticorrelation of size and velocity residuals, indicating a problem with the galaxy–halo connection it implies. We find the EAGLE galaxy population to perform significantly better on both statistics, and trace this success to the weakness of the correlations of galaxy size with halo mass, concentration, and spin at fixed stellar mass. Hence, using these correlations in empirical models will enable fine-grained aspects of galaxy scalings to be matched.

  10. Study on improving the turbidity measurement of the absolute coagulation rate constant.

    PubMed

    Sun, Zhiwei; Liu, Jie; Xu, Shenghua

    2006-05-23

    The existing theories dealing with the evaluation of the absolute coagulation rate constant by turbidity measurement were experimentally tested for suspensions of different particle sizes (radius a) at incident wavelengths (λ) ranging from near-infrared to ultraviolet light. When the size parameter α = 2πa/λ > 3, the rate constant data from previous theories for fixed-sized particles show significant inconsistencies at different light wavelengths. We attribute this problem to the imperfection of these theories in describing the light scattering from doublets through their evaluation of the extinction cross section. The evaluations of the rate constants by all previous theories become untenable as the size parameter increases, which limits the applicable range of the turbidity measurement. By using the T-matrix method, we present a robust solution for evaluating the extinction cross section of doublets formed in the aggregation. Our experiments show that this new approach is effective in extending the applicability range of the turbidity methodology and increasing measurement accuracy.

  11. On the galaxy–halo connection in the EAGLE simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desmond, Harry; Mao, Yao -Yuan; Wechsler, Risa H.

    Empirical models of galaxy formation require assumptions about the correlations between galaxy and halo properties. These may be calibrated against observations or inferred from physical models such as hydrodynamical simulations. In this Letter, we use the EAGLE simulation to investigate the correlation of galaxy size with halo properties. We motivate this analysis by noting that the common assumption of angular momentum partition between baryons and dark matter in rotationally supported galaxies overpredicts both the spread in the stellar mass–size relation and the anticorrelation of size and velocity residuals, indicating a problem with the galaxy–halo connection it implies. We find the EAGLE galaxy population to perform significantly better on both statistics, and trace this success to the weakness of the correlations of galaxy size with halo mass, concentration, and spin at fixed stellar mass. Hence, using these correlations in empirical models will enable fine-grained aspects of galaxy scalings to be matched.

  12. Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2016-01-01

    This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…

  13. Multidirectional hybrid algorithm for the split common fixed point problem and application to the split common null point problem.

    PubMed

    Li, Xia; Guo, Meifang; Su, Yongfu

    2016-01-01

    In this article, a new multidirectional monotone hybrid iteration algorithm for finding a solution to the split common fixed point problem is presented for two countable families of quasi-nonexpansive mappings in Banach spaces. Strong convergence theorems are proved. As an application, the split common null point problem for maximal monotone operators in Banach spaces is considered, and strong convergence theorems for finding a solution of that problem are derived. The iteration algorithm can accelerate the convergence speed of the iterative sequence. The results of this paper improve and extend the recent results of Takahashi and Yao (Fixed Point Theory Appl 2015:87, 2015) and many others.

  14. A Newton-Krylov method with an approximate analytical Jacobian for implicit solution of Navier-Stokes equations on staggered overset-curvilinear grids with immersed boundaries.

    PubMed

    Asgharzadeh, Hafez; Borazjani, Iman

    2017-02-15

    The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for nonlinear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are one of the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs), to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation, depending on the grid (size) and the flow problem. In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one to another depends on the flow problem. Furthermore, the implemented methods are fully parallelized with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide building preconditioners for other techniques to improve their performance in the future.
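
    For readers unfamiliar with Newton-Krylov iterations, the sketch below solves a toy nonlinear system with SciPy's matrix-free newton_krylov (finite-difference Jacobian-vector products, not the paper's analytical Jacobian or preconditioner); the residual function is invented purely for illustration.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Toy nonlinear system standing in for discretized momentum equations:
# a 1D discrete Laplacian with a weak cubic term and constant forcing,
# with homogeneous Dirichlet boundary conditions. Solve F(u) = 0.
def residual(u):
    F = np.empty_like(u)
    F[0] = u[0]
    F[-1] = u[-1]
    F[1:-1] = u[2:] - 2*u[1:-1] + u[:-2] - 0.1*u[1:-1]**3 - 0.01
    return F

u0 = np.zeros(64)                       # initial guess
sol = newton_krylov(residual, u0, method='lgmres')
print(np.abs(residual(sol)).max())      # residual norm near machine zero
```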

  15. A Newton–Krylov method with an approximate analytical Jacobian for implicit solution of Navier–Stokes equations on staggered overset-curvilinear grids with immersed boundaries

    PubMed Central

    Asgharzadeh, Hafez; Borazjani, Iman

    2016-01-01

    The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for nonlinear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are one of the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs), to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation, depending on the grid (size) and the flow problem. In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one to another depends on the flow problem. Furthermore, the implemented methods are fully parallelized with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide building preconditioners for other techniques to improve their performance in the future. PMID:28042172

  16. A Newton-Krylov method with an approximate analytical Jacobian for implicit solution of Navier-Stokes equations on staggered overset-curvilinear grids with immersed boundaries

    NASA Astrophysics Data System (ADS)

    Asgharzadeh, Hafez; Borazjani, Iman

    2017-02-01

    The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for non-linear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are one of the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs), to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation, depending on the grid (size) and the flow problem. In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one to another depends on the flow problem. Furthermore, the implemented methods are fully parallelized with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide building preconditioners for other techniques to improve their performance in the future.

  17. OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods

    NASA Technical Reports Server (NTRS)

    Heath, Christopher M.; Gray, Justin S.

    2012-01-01

    The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve a constrained, single-objective and constrained, multiobjective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.
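
    A minimal sketch of posing a toy sizing optimization in OpenMDAO follows; it uses the current openmdao.api interface rather than the 2012-era framework described above, and the component and variable names (sizing, x, fuel_burn) are illustrative only.

```python
import openmdao.api as om

# Toy stand-in for a coupled sizing analysis: minimize a quadratic
# "fuel_burn" over one design variable x.
prob = om.Problem()
prob.model.add_subsystem('sizing',
                         om.ExecComp('fuel_burn = (x - 3.0)**2 + 2.0'),
                         promotes=['*'])
prob.model.add_design_var('x', lower=0.0, upper=10.0)
prob.model.add_objective('fuel_burn')

prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'

prob.setup()
prob.set_val('x', 5.0)                    # initial design point
prob.run_driver()
print(prob.get_val('x'), prob.get_val('fuel_burn'))   # ~3.0, ~2.0
```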

  18. Parallel Simulation of Three-Dimensional Free-Surface Fluid Flow Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BAER,THOMAS A.; SUBIA,SAMUEL R.; SACKINGER,PHILIP A.

    2000-01-18

    We describe parallel simulations of viscous, incompressible, free surface, Newtonian fluid flow problems that include dynamic contact lines. The Galerkin finite element method was used to discretize the fully-coupled governing conservation equations, and a "pseudo-solid" mesh mapping approach was used to determine the shape of the free surface. In this approach, the finite element mesh is allowed to deform to satisfy quasi-static solid mechanics equations subject to geometric or kinematic constraints on the boundaries. As a result, nodal displacements must be included in the set of problem unknowns. Issues concerning the proper constraints along the solid-fluid dynamic contact line in three dimensions are discussed. Parallel computations are carried out for an example taken from the coating flow industry, flow in the vicinity of a slot coater edge. This is a three-dimensional free-surface problem possessing a contact line that advances at the web speed in one region but transitions to static behavior in another part of the flow domain. Discussion focuses on parallel speedups for fixed problem size, a class of problems of immediate practical importance.

  19. Ontogenetic loss of phenotypic plasticity of age at metamorphosis in tadpoles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hensley, F.R.

    1993-12-01

    Amphibian larvae exhibit phenotypic plasticity in size at metamorphosis and duration of the larval period. I used Pseudacris crucifer tadpoles to test two models for predicting tadpole age and size at metamorphosis under changing environmental conditions. The Wilbur-Collins model states that metamorphosis is initiated as a function of a tadpole's size and relative growth rate, and predicts that changes in growth rate throughout the larval period affect age and size at metamorphosis. An alternative model, the fixed-rate model, states that age at metamorphosis is fixed early in larval life, and subsequent changes in growth rate will have no effect on the length of the larval period. My results confirm that food supplies affect both age and size at metamorphosis, but developmental rates became fixed at approximately Gosner (1960) stages 35-37. Neither model completely predicted these results. I suggest that the generally accepted Wilbur-Collins model is improved by incorporating a point of fixed developmental timing. Growth trajectories predicted from this modified model fit the results of this study better than trajectories based on either of the original models. The results of this study suggest a constraint that limits the simultaneous optimization of age and size at metamorphosis. 32 refs., 5 figs., 1 tab.

  20. Design and Dynamic Modeling of Flexible Rehabilitation Mechanical Glove

    NASA Astrophysics Data System (ADS)

    Lin, M. X.; Ma, G. Y.; Liu, F. Q.; Sun, Q. S.; Song, A. Q.

    2018-03-01

    Rehabilitation gloves are equipment that helps rehabilitation doctors perform finger rehabilitation training, which can greatly reduce the labour intensity of rehabilitation doctors and allow more people to receive finger rehabilitation training. Given the defects of existing rehabilitation gloves, such as complicated structure and stiff movement, a rehabilitation mechanical glove is designed that provides driving force with an air cylinder and adopts a rope-spring mechanism to ensure flexible movement. To fit hands of different sizes, an adjustable bandage ring is used to fix the mechanism in place. Because the dynamic equations are complex to solve directly, a dynamic simulation is carried out in Adams to obtain the motion curves, which makes it easy to optimize the positions of the rings.

  1. H2, fixed architecture, control design for large scale systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1990-01-01

    The H2, fixed architecture, control problem is a classic linear quadratic Gaussian (LQG) problem whose solution is constrained to be a linear time invariant compensator with a decentralized processing structure. The compensator can be made of p independent subcontrollers, each of which has a fixed order and connects selected sensors to selected actuators. The H2, fixed architecture, control problem allows the design of simplified feedback systems needed to control large scale systems. Its solution becomes more complicated, however, as more constraints are introduced. This work derives the necessary conditions for optimality for the problem and studies their properties. It is found that the filter and control problems couple when the architecture constraints are introduced, and that the different subcontrollers must be coordinated in order to achieve global system performance. The problem requires the simultaneous solution of highly coupled matrix equations. The use of homotopy is investigated as a numerical tool, and its convergence properties studied. It is found that the general constrained problem may have multiple stabilizing solutions, and that these solutions may be local minima or saddle points for the quadratic cost. The nature of the solution is not invariant when the parameters of the system are changed. Bifurcations occur, and a solution may continuously transform into a nonstabilizing compensator. Using a modified homotopy procedure, fixed architecture compensators are derived for models of large flexible structures to help understand the properties of the constrained solutions and compare them to the corresponding unconstrained ones.

  2. Effect of distance-related heterogeneity on population size estimates from point counts

    USGS Publications Warehouse

    Efford, Murray G.; Dawson, Deanna K.

    2009-01-01

    Point counts are used widely to index bird populations. Variation in the proportion of birds counted is a known source of error, and for robust inference it has been advocated that counts be converted to estimates of absolute population size. We used simulation to assess nine methods for the conduct and analysis of point counts when the data included distance-related heterogeneity of individual detection probability. Distance from the observer is a ubiquitous source of heterogeneity, because nearby birds are more easily detected than distant ones. Several recent methods (dependent double-observer, time of first detection, time of detection, independent multiple-observer, and repeated counts) do not account for distance-related heterogeneity, at least in their simpler forms. We assessed bias in estimates of population size by simulating counts with fixed radius w over four time intervals (occasions). Detection probability per occasion was modeled as a half-normal function of distance with scale parameter σ and intercept g(0) = 1.0. Bias varied with σ/w; for values of σ inferred from published studies, bias often reached 50% for a 100-m fixed-radius count. More critically, the bias of adjusted counts sometimes varied more than that of unadjusted counts, and inference from adjusted counts would be less robust. The problem was not solved by using mixture models or including distance as a covariate. Conventional distance sampling performed well in simulations, but its assumptions are difficult to meet in the field. We conclude that no existing method allows effective estimation of population size from point counts.
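
    The simulation setting is easy to reproduce in outline: the sketch below draws birds uniformly on a disc of radius w and applies the half-normal per-occasion detection curve g(r) = exp(-r^2/(2σ^2)) with g(0) = 1, as in the abstract; the density and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_point_count(density, w, sigma, occasions=4):
    """Simulate one fixed-radius point count with distance-dependent detection.

    Birds are placed uniformly on a disc of radius w; per-occasion
    detection probability is half-normal in distance r from the observer.
    Returns (true number present, number detected on any occasion).
    """
    n = rng.poisson(density * np.pi * w**2)
    r = w * np.sqrt(rng.uniform(size=n))          # uniform on the disc
    p = np.exp(-r**2 / (2.0 * sigma**2))          # per-occasion detection prob
    detected = rng.uniform(size=(occasions, n)) < p
    return n, int(detected.any(axis=0).sum())

n_true, n_seen = simulate_point_count(density=0.01, w=100.0, sigma=50.0)
print(n_true, n_seen)                             # undercount at large r
```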

  3. Parallel scalability of Hartree-Fock calculations

    NASA Astrophysics Data System (ADS)

    Chow, Edmond; Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R.

    2015-03-01

    Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree-Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
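
    The density matrix purification step mentioned above has a classic dense-matrix form, the McWeeny iteration D → 3D² − 2D³; the sketch below applies it to a perturbed projector, assuming an orthonormal basis, and is a generic textbook variant rather than the paper's parallel implementation.

```python
import numpy as np

def mcweeny_purify(D, tol=1e-12, max_iter=100):
    """Iterate D <- 3 D^2 - 2 D^3 until D is (numerically) idempotent.

    Assumes an orthonormal basis (overlap matrix = identity) and a
    starting matrix whose eigenvalues are already close to 0 or 1.
    """
    for _ in range(max_iter):
        D2 = D @ D
        D_new = 3.0 * D2 - 2.0 * (D2 @ D)
        if np.linalg.norm(D_new - D) < tol:
            return D_new
        D = D_new
    return D

rng = np.random.default_rng(0)
P = np.diag([1.0, 1.0, 0.0, 0.0]) + 1e-2 * rng.standard_normal((4, 4))
P = 0.5 * (P + P.T)                 # symmetrize the perturbed projector
D = mcweeny_purify(P)
print(np.linalg.norm(D @ D - D))    # ~0: idempotent density matrix
```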

  4. Comparison of statistical models to estimate parasite growth rate in the induced blood stage malaria model.

    PubMed

    Wockner, Leesa F; Hoffmann, Isabell; O'Rourke, Peter; McCarthy, James S; Marquart, Louise

    2017-08-25

    The efficacy of vaccines aimed at inhibiting the growth of malaria parasites in the blood can be assessed by comparing the growth rate of parasitaemia in the blood of subjects treated with a test vaccine compared to controls. In studies using induced blood stage malaria (IBSM), a type of controlled human malaria infection, parasite growth rate has been measured using models with the intercept on the y-axis fixed to the inoculum size. A set of statistical models was evaluated to determine an optimal methodology to estimate parasite growth rate in IBSM studies. Parasite growth rates were estimated using data from 40 subjects published in three IBSM studies. Data was fitted using 12 statistical models: log-linear, sine-wave with the period either fixed to 48 h or not fixed; these models were fitted with the intercept either fixed to the inoculum size or not fixed. All models were fitted by individual, and overall by study using a mixed effects model with a random effect for the individual. Log-linear models and sine-wave models, with the period fixed or not fixed, resulted in similar parasite growth rate estimates (within 0.05 log10 parasites per mL/day). Average parasite growth rate estimates for models fitted by individual with the intercept fixed to the inoculum size were substantially lower by an average of 0.17 log10 parasites per mL/day (range 0.06-0.24) compared with non-fixed intercept models. Variability of parasite growth rate estimates across the three studies analysed was substantially higher (3.5 times) for fixed-intercept models compared with non-fixed intercept models. The same tendency was observed in models fitted overall by study. Modelling data by individual or overall by study had minimal effect on parasite growth estimates. The analyses presented in this report confirm that fixing the intercept to the inoculum size influences parasite growth estimates. The most appropriate statistical model to estimate the growth rate of blood-stage parasites in IBSM studies appears to be a log-linear model fitted by individual and with the intercept estimated in the log-linear regression. Future studies should use this model to estimate parasite growth rates.
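
    A minimal sketch of the two competing fits, on hypothetical data: a free-intercept log-linear regression versus a regression with the intercept pinned to an assumed log10 inoculum size at t = 0.

```python
import numpy as np

# Hypothetical qPCR time series: days post-inoculation and log10 parasites/mL.
t = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.1, 2.6, 3.2, 3.7, 4.3])
log10_inoculum = 1.5                      # assumed known inoculum size

# Free-intercept log-linear model y = a + b*t (the abstract's recommendation).
b_free, a_free = np.polyfit(t, y, 1)

# Fixed-intercept variant: regress y - log10_inoculum on t through the origin.
b_fixed = np.sum(t * (y - log10_inoculum)) / np.sum(t**2)

print(f"growth rate, free intercept : {b_free:.3f} log10/day")
print(f"growth rate, fixed intercept: {b_fixed:.3f} log10/day")
```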

  5. Finding fixed satellite service orbital allotments with a k-permutation algorithm

    NASA Technical Reports Server (NTRS)

    Reilly, Charles H.; Mount-Campbell, Clark A.; Gonsalvez, David J. A.

    1990-01-01

    A satellite system synthesis problem, the satellite location problem (SLP), is addressed. In SLP, orbital locations (longitudes) are allotted to geostationary satellites in the fixed satellite service. A linear mixed-integer programming model is presented that views SLP as a combination of two problems: the problem of ordering the satellites and the problem of locating the satellites given some ordering. A special-purpose heuristic procedure, a k-permutation algorithm, has been developed to find solutions to SLPs. Solutions to small sample problems are presented and analyzed on the basis of calculated interferences.

  6. Clinical Nursing Records Study

    DTIC Science & Technology

    1991-08-01

    In-depth assessment of current AMEDD nursing documentation system used in fixed facilities; 2 - 4) development, implementation and assessment of...used in fixed facilities to: a) identify system problems; b) identify potential solutions to problems; c) set priorities for problem resolution; d...enhance compatibility between any "hard copy" forms the group might develop and automation requirements. Discussions were also held with personnel from

  7. Enhancing Scalability and Efficiency of the TOUGH2_MP for LinuxClusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Keni; Wu, Yu-Shu

    2006-04-17

    TOUGH2_MP, the parallel version of the TOUGH2 code, has been enhanced by implementing more efficient communication schemes. This enhancement is achieved by reducing the number of small messages and the volume of large ones. The message exchange speed is further improved by using non-blocking communications for both linear and nonlinear iterations. In addition, we have modified the AZTEC parallel linear-equation solver to use non-blocking communication. Through improved code structuring and bug fixes, the new version of the code is more stable, while demonstrating similar or even better nonlinear iteration convergence speed than the original TOUGH2 code. As a result, the new version of TOUGH2_MP is significantly more efficient. In this paper, the scalability and efficiency of the parallel code are demonstrated by solving two large-scale problems. The testing results indicate that the speedup of the code may depend on both problem size and complexity. In general, the code has excellent scalability in memory requirement as well as computing time.

  8. A Numerical Study of Three Moving-Grid Methods for One-Dimensional Partial Differential Equations Which Are Based on the Method of Lines

    NASA Astrophysics Data System (ADS)

    Furzeland, R. M.; Verwer, J. G.; Zegeling, P. A.

    1990-08-01

    In recent years, several sophisticated packages based on the method of lines (MOL) have been developed for the automatic numerical integration of time-dependent problems in partial differential equations (PDEs), notably for problems in one space dimension. These packages greatly benefit from the very successful developments of automatic stiff ordinary differential equation solvers. However, from the PDE point of view, they integrate only in a semiautomatic way in the sense that they automatically adjust the time step sizes, but use just a fixed space grid, chosen a priori, for the entire calculation. For solutions possessing sharp spatial transitions that move, e.g., travelling wave fronts or emerging boundary and interior layers, a grid held fixed for the entire calculation is computationally inefficient, since for a good solution this grid often must contain a very large number of nodes. In such cases methods which attempt automatically to adjust the sizes of both the space and the time steps are likely to be more successful in efficiently resolving critical regions of high spatial and temporal activity. Methods and codes that operate this way belong to the realm of adaptive or moving-grid methods. Following the MOL approach, this paper is devoted to an evaluation and comparison, mainly based on extensive numerical tests, of three moving-grid methods for 1D problems, viz., the finite-element method of Miller and co-workers, the method published by Petzold, and a method based on ideas adopted from Dorfi and Drury. Our examination of these three methods is aimed at assessing which is the most suitable from the point of view of retaining the acknowledged features of reliability, robustness, and efficiency of the conventional MOL approach. Therefore, considerable attention is paid to the temporal performance of the methods.
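
    For contrast with the moving-grid methods compared above, the sketch below shows the fixed-grid method-of-lines baseline the abstract describes: discretize u_t = u_xx once in space and let a stiff ODE solver adapt only the time step; the grid size and tolerances are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Fixed uniform grid on [0, 1] with homogeneous Dirichlet boundaries.
N = 101
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

def rhs(t, u):
    """Semi-discrete heat equation: second-order central differences."""
    dudt = np.zeros_like(u)
    dudt[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / dx**2
    return dudt                      # u = 0 held at both ends

u0 = np.sin(np.pi * x)               # exact solution: sin(pi x) e^(-pi^2 t)
sol = solve_ivp(rhs, (0.0, 0.1), u0, method='BDF', rtol=1e-6, atol=1e-9)

# Compare midpoint value with the exact decay factor.
print(sol.y[N // 2, -1], np.exp(-np.pi**2 * 0.1))
```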

  9. Controller design for global fixed-time synchronization of delayed neural networks with discontinuous activations.

    PubMed

    Wang, Leimin; Zeng, Zhigang; Hu, Junhao; Wang, Xiaoping

    2017-03-01

    This paper addresses the controller design problem for global fixed-time synchronization of delayed neural networks (DNNs) with discontinuous activations. To solve this problem, adaptive control and state feedback control laws are designed. Then, based on the two controllers and two lemmas, the error system is proved to be globally asymptotically stable and even fixed-time stable. Moreover, some sufficient and easily checked conditions are derived to guarantee the global synchronization of drive and response systems in fixed time. It is noted that the settling time functional for fixed-time synchronization is independent of initial conditions. Our fixed-time synchronization results contain the finite-time results as special cases obtained by choosing different values for the two controllers. Finally, the theoretical results are supported by numerical simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. On Determining if Tree-based Networks Contain Fixed Trees.

    PubMed

    Anaya, Maria; Anipchenko-Ulaj, Olga; Ashfaq, Aisha; Chiu, Joyce; Kaiser, Mahedi; Ohsawa, Max Shoji; Owen, Megan; Pavlechko, Ella; St John, Katherine; Suleria, Shivam; Thompson, Keith; Yap, Corrine

    2016-05-01

    We address an open question of Francis and Steel about phylogenetic networks and trees. They give a polynomial time algorithm to decide if a phylogenetic network, N, is tree-based and pose the problem: given a fixed tree T and network N, is N based on T? We show that it is NP-hard to decide, by reduction from 3-Dimensional Matching (3DM), and further that the problem is fixed-parameter tractable.

  11. Winter home-range characteristics of American Marten (Martes americana) in Northern Wisconsin

    Treesearch

    Joseph B. Dumyahn; Patrick A. Zollner

    2007-01-01

    We estimated home-range size for American marten (Martes americana) in northern Wisconsin during the winter months of 2001-2004, and compared the proportion of cover-type selection categories (highly used, neutral and avoided) among home-ranges (95% fixed-kernel), core areas (50% fixed-kernel) and the study area. Average winter home-range size was 3....

  12. 40 CFR 113.4 - Size classes and associated liability limits for fixed onshore oil storage facilities, 1,000...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 23 2012-07-01 2012-07-01 false Size classes and associated liability limits for fixed onshore oil storage facilities, 1,000 barrels or less capacity. 113.4 Section 113.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS LIABILITY LIMITS FOR...

  13. 40 CFR 113.4 - Size classes and associated liability limits for fixed onshore oil storage facilities, 1,000...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 22 2014-07-01 2013-07-01 true Size classes and associated liability limits for fixed onshore oil storage facilities, 1,000 barrels or less capacity. 113.4 Section 113.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS LIABILITY LIMITS FOR...

  14. A k-permutation algorithm for Fixed Satellite Service orbital allotments

    NASA Technical Reports Server (NTRS)

    Reilly, Charles H.; Mount-Campbell, Clark A.; Gonsalvez, David J. A.

    1988-01-01

    A satellite system synthesis problem, the satellite location problem (SLP), is addressed in this paper. In SLP, orbital locations (longitudes) are allotted to geostationary satellites in the Fixed Satellite Service. A linear mixed-integer programming model is presented that views SLP as a combination of two problems: (1) the problem of ordering the satellites and (2) the problem of locating the satellites given some ordering. A special-purpose heuristic procedure, a k-permutation algorithm, that has been developed to find solutions to SLPs formulated in the manner suggested is described. Solutions to small example problems are presented and analyzed.

  15. Sources of spurious force oscillations from an immersed boundary method for moving-body problems

    NASA Astrophysics Data System (ADS)

    Lee, Jongho; Kim, Jungwoo; Choi, Haecheon; Yang, Kyung-Soo

    2011-04-01

    When a discrete-forcing immersed boundary method is applied to moving-body problems, it produces spurious force oscillations on a solid body. In the present study, we identify two sources of these force oscillations. One source is from the spatial discontinuity in the pressure across the immersed boundary when a grid point located inside a solid body becomes that of fluid with a body motion. The addition of mass source/sink together with momentum forcing proposed by Kim et al. [J. Kim, D. Kim, H. Choi, An immersed-boundary finite volume method for simulations of flow in complex geometries, Journal of Computational Physics 171 (2001) 132-150] reduces the spurious force oscillations by alleviating this pressure discontinuity. The other source is from the temporal discontinuity in the velocity at the grid points where fluid becomes solid with a body motion. The magnitude of velocity discontinuity decreases with decreasing the grid spacing near the immersed boundary. Four moving-body problems are simulated by varying the grid spacing at a fixed computational time step and at a constant CFL number, respectively. It is found that the spurious force oscillations decrease with decreasing the grid spacing and increasing the computational time step size, but they depend more on the grid spacing than on the computational time step size.

  16. Concurrent credit portfolio losses

    PubMed Central

    Sicking, Joachim; Schäfer, Rudi

    2018-01-01

    We consider the problem of concurrent portfolio losses in two non-overlapping credit portfolios. In order to explore the full statistical dependence structure of such portfolio losses, we estimate their empirical pairwise copulas. Instead of a Gaussian dependence, we typically find a strong asymmetry in the copulas. Concurrent large portfolio losses are much more likely than small ones. Studying the dependences of these losses as a function of portfolio size, we moreover reveal that not only large portfolios of thousands of contracts, but also medium-sized and small ones with only a few dozens of contracts exhibit notable portfolio loss correlations. Anticipated idiosyncratic effects turn out to be negligible. These are troublesome insights not only for investors in structured fixed-income products, but particularly for the stability of the financial sector. JEL codes: C32, F34, G21, G32, H81. PMID:29425246
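
    Estimating an empirical pairwise copula reduces to rank-transforming each margin to pseudo-observations; the sketch below does this for synthetic factor-driven losses (illustrative only, and Gaussian by construction, so it will not reproduce the asymmetry reported above) and compares joint tail exceedance frequencies with the independence benchmark.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical loss series for two non-overlapping portfolios, built with
# a common market factor so that large losses tend to coincide.
market = rng.standard_normal(400)
loss_a = 0.7 * market + 0.7 * rng.standard_normal(400)
loss_b = 0.7 * market + 0.7 * rng.standard_normal(400)

def pseudo_observations(x):
    """Rank-transform a sample into (0, 1): the empirical copula's margins."""
    ranks = np.argsort(np.argsort(x)) + 1.0
    return ranks / (len(x) + 1.0)

u, v = pseudo_observations(loss_a), pseudo_observations(loss_b)

# Joint exceedance in the upper vs lower tail, against independence (1-q)^2.
q = 0.9
upper = np.mean((u > q) & (v > q))
lower = np.mean((u < 1 - q) & (v < 1 - q))
print(upper, lower, (1 - q)**2)
```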

  17. Concurrent credit portfolio losses.

    PubMed

    Sicking, Joachim; Guhr, Thomas; Schäfer, Rudi

    2018-01-01

    We consider the problem of concurrent portfolio losses in two non-overlapping credit portfolios. In order to explore the full statistical dependence structure of such portfolio losses, we estimate their empirical pairwise copulas. Instead of a Gaussian dependence, we typically find a strong asymmetry in the copulas. Concurrent large portfolio losses are much more likely than small ones. Studying the dependences of these losses as a function of portfolio size, we moreover reveal that not only large portfolios of thousands of contracts, but also medium-sized and small ones with only a few dozens of contracts exhibit notable portfolio loss correlations. Anticipated idiosyncratic effects turn out to be negligible. These are troublesome insights not only for investors in structured fixed-income products, but particularly for the stability of the financial sector. JEL codes: C32, F34, G21, G32, H81.

  18. Free-Suspension Residual Flexibility Testing of Space Station Pathfinder: Comparison to Fixed-Base Results

    NASA Technical Reports Server (NTRS)

    Tinker, Michael L.

    1998-01-01

    Application of the free-suspension residual flexibility modal test method to the International Space Station Pathfinder structure is described. The Pathfinder, a large structure of the general size and weight of Space Station module elements, was also tested in a large fixed-base fixture to simulate Shuttle Orbiter payload constraints. After correlation of the Pathfinder finite element model to residual flexibility test data, the model was coupled to a fixture model, and constrained modes and frequencies were compared to fixed-base test modes. The residual flexibility model compared very favorably to results of the fixed-base test. This is the first known direct comparison of free-suspension residual flexibility and fixed-base test results for a large structure. The model correlation approach used by the author for residual flexibility data is presented. Frequency response functions (FRF) for the regions of the structure that interface with the environment (a test fixture or another structure) are shown to be the primary tools for model correlation that distinguish or characterize the residual flexibility approach. A number of critical issues related to use of the structure interface FRF for correlating the model are then identified and discussed, including (1) the requirement of prominent stiffness lines, (2) overcoming problems with measurement noise which makes the antiresonances or minima in the functions difficult to identify, and (3) the use of interface stiffness and lumped mass perturbations to bring the analytical responses into agreement with test data. It is shown that good comparison of analytical-to-experimental FRF is the key to obtaining good agreement of the residual flexibility values.

  19. Influence of preservative and mounting media on the size and shape of monogenean sclerites.

    PubMed

    Fankoua, Severin-Oscar; Bitja Nyom, Arnold R; Bahanak, Dieu Ne Dort; Bilong Bilong, Charles F; Pariselle, Antoine

    2017-08-01

    Based on Cichlidogyrus sp. (Monogenea, Ancyrocephalidae) specimens from Hemichromis sp. hosts, we tested the influence of different methods of fixing/preserving samples/specimens [frozen material, alcohol-preserved, formalin-preserved, or the museum process for fish preservation (fixed in formalin and preserved in alcohol)] and of different media used to mount the slides [tap water, glycerin ammonium picrate (GAP), Hoyer's medium (HM)] on the size/shape of the sclerotized parts of monogenean specimens. The results show that the use of HM significantly increases the size of the haptoral sclerites [marginal hooks I, II, IV, V, and VI; dorsal bar length, width, distance between auricles and auricle length; ventral bar length and width] and changes their shape [the angle between shaft and guard (outer and inner roots) opens in both ventral and dorsal anchors, the ventral bar becomes much wider, the dorsal one less curved]. This influence seems to be reduced when specimens/samples are fixed in formalin. Because the systematics of the Monogenea is based on the size and shape of these sclerotized parts, to prevent misidentifications or descriptions of invalid new species we recommend GAP as the mounting medium; Hoyer's medium should be restricted to specimens fixed for a long time, which are more shrunken.

  20. Exact solution for a two-phase Stefan problem with variable latent heat and a convective boundary condition at the fixed face

    NASA Astrophysics Data System (ADS)

    Bollati, Julieta; Tarzia, Domingo A.

    2018-04-01

    Recently, in Tarzia (Thermal Sci 21A:1-11, 2017), an equivalence between the temperature and convective boundary conditions at the fixed face was obtained for the classical two-phase Lamé-Clapeyron-Stefan problem under a certain restriction. Motivated by this article, we study the two-phase Stefan problem for a semi-infinite material with a latent heat defined as a power function of the position and a convective boundary condition at the fixed face. An exact solution is constructed using Kummer functions, provided an inequality for the convective transfer coefficient is satisfied, generalizing recent works for the corresponding one-phase free boundary problem. We also consider the limit of our problem when that coefficient goes to infinity, obtaining a new free boundary problem, which has been recently studied in Zhou et al. (J Eng Math 2017. https://doi.org/10.1007/s10665-017-9921-y).

  1. Energy-efficient approach to minimizing the energy consumption in an extended job-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Tang, Dunbing; Dai, Min

    2015-09-01

    The traditional production planning and scheduling problems consider performance indicators like time, cost, and quality as optimization objectives in manufacturing processes. However, environmentally-friendly factors like the energy consumption of production have not been fully taken into consideration. Against this background, this paper presents an approach to modify a given schedule generated by a production planning and scheduling system in a job shop floor where machine tools can work at different cutting speeds. It adjusts the cutting speeds of the operations while keeping the original assignment and processing sequence of operations of each job fixed in order to obtain energy savings. First, the proposed approach, based on a mixed integer programming mathematical model, changes the total idle time of the given schedule to minimize energy consumption in the job shop floor while accepting the optimal solution of the scheduling objective, makespan. Then, a genetic-simulated annealing algorithm is used to explore the optimal solution, since the problem is strongly NP-hard. Finally, the effectiveness of the approach is evaluated on small- and large-size instances, respectively. The experimental results show that the approach can save 5%-10% of the average energy consumption while accepting the optimal solution of the makespan in small-size instances; in addition, the average maximum energy saving ratio can reach 13%. It can save approximately 1%-4% of the average energy consumption and approximately 2.4% of the average maximum energy while accepting the near-optimal solution of the makespan in large-size instances. The proposed research provides an interesting starting point for exploring energy-aware schedule optimization for traditional production planning and scheduling problems.

  2. Dissociation Predicts Later Attention Problems in Sexually Abused Children

    ERIC Educational Resources Information Center

    Kaplow, Julie B.; Hall, Erin; Koenen, Karestan C.; Dodge, Kenneth A.; Amaya-Jackson, Lisa

    2008-01-01

    Objective: The goals of this research are to develop and test a prospective model of attention problems in sexually abused children that includes fixed variables (e.g., gender), trauma, and disclosure-related pathways. Methods: At Time 1, fixed variables, trauma variables, and stress reactions upon disclosure were assessed in 156 children aged…

  3. High order multi-grid methods to solve the Poisson equation

    NASA Technical Reports Server (NTRS)

    Schaffer, S.

    1981-01-01

    High order multigrid methods based on finite difference discretization of the model problem are examined. A fixed high order FMG-FAS multigrid algorithm and the high order methods are described, and results are presented on four problems using each method with the same underlying fixed FMG-FAS algorithm.

  4. A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol.

    PubMed

    Zeng, Ping; Tan, Qingping; Meng, Xiankai; Shao, Zeming; Xie, Qinzheng; Yan, Ying; Cao, Wei; Xu, Jianjun

    2017-01-01

    In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on, all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and it has great promise for use in real-world applications.
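
    A much-simplified sketch of the general idea (not the authors' MH implementation): group patterns by length in hash tables, hash the text prefix at the fixed starting position, and confirm candidates exactly on a hash hit. All pattern strings here are invented examples.

```python
from collections import defaultdict

def build_tables(patterns):
    """Group pattern hashes by pattern length."""
    tables = defaultdict(set)
    for p in patterns:
        tables[len(p)].add(hash(p))
    return tables

def match_from_start(text, patterns):
    """Return all patterns matching text at the fixed starting position 0."""
    tables = build_tables(patterns)
    pattern_set = set(patterns)         # for exact confirmation on hash hits
    hits = []
    for length, hashes in tables.items():
        prefix = text[:length]
        if len(prefix) == length and hash(prefix) in hashes \
                and prefix in pattern_set:
            hits.append(prefix)
    return hits

urls = ["/api/v1/", "/static/", "/api/v1/users"]
print(match_from_start("/api/v1/users?id=7", urls))
# ['/api/v1/', '/api/v1/users'] (order may vary)
```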

  5. Scoring from Contests

    PubMed Central

    Penn, Elizabeth Maggie

    2014-01-01

    This article presents a new model for scoring alternatives from “contest” outcomes. The model is a generalization of the method of paired comparison to accommodate comparisons between arbitrarily sized sets of alternatives in which outcomes are any division of a fixed prize. Our approach is also applicable to contests between varying quantities of alternatives. We prove that under a reasonable condition on the comparability of alternatives, there exists a unique collection of scores that produces accurate estimates of the overall performance of each alternative and satisfies a well-known axiom regarding choice probabilities. We apply the method to several problems in which varying choice sets and continuous outcomes may create problems for standard scoring methods. These problems include measuring centrality in network data and the scoring of political candidates via a “feeling thermometer.” In the latter case, we also use the method to uncover and solve a potential difficulty with common methods of rescaling thermometer data to account for issues of interpersonal comparability. PMID:24748759

  6. A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol

    PubMed Central

    Tan, Qingping; Meng, Xiankai; Shao, Zeming; Xie, Qinzheng; Yan, Ying; Cao, Wei; Xu, Jianjun

    2017-01-01

    In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on—all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and it has great promise for use in real-world applications. PMID:28399157

  7. Multistep integration formulas for the numerical integration of the satellite problem

    NASA Technical Reports Server (NTRS)

    Lundberg, J. B.; Tapley, B. D.

    1981-01-01

    This paper examines the use of two Class 2/fixed mesh/fixed order/multistep integration packages of the PECE type for the numerical integration of the second-order, nonlinear, ordinary differential equation of the satellite orbit problem. These two methods are referred to as the general and the second sum formulations. The derivation of the basic equations which characterize each formulation and the role of the basic equations in the PECE algorithm are discussed. Possible starting procedures are examined which may be used to supply the initial set of values required by the fixed mesh/multistep integrators. The results of the general and second sum integrators are compared to the results of various fixed-step and variable-step integrators.

  8. Expected Fitness Gains of Randomized Search Heuristics for the Traveling Salesperson Problem.

    PubMed

    Nallaperuma, Samadhi; Neumann, Frank; Sudholt, Dirk

    2017-01-01

    Randomized search heuristics are frequently applied to NP-hard combinatorial optimization problems. The runtime analysis of randomized search heuristics has contributed tremendously to our theoretical understanding. Recently, randomized search heuristics have been examined regarding their achievable progress within a fixed-time budget. We follow this approach and present a fixed-budget analysis for an NP-hard combinatorial optimization problem. We consider the well-known Traveling Salesperson Problem (TSP) and analyze the fitness increase that randomized search heuristics are able to achieve within a given fixed-time budget. In particular, we analyze Manhattan and Euclidean TSP instances and Randomized Local Search (RLS), (1+1) EA and (1+λ) EA algorithms for the TSP in a smoothed complexity setting, and derive the lower bounds of the expected fitness gain for a specified number of generations.
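
    The fixed-budget viewpoint is easy to state in code: run a heuristic for a set number of evaluations and record the fitness gained. The sketch below does this for Randomized Local Search with 2-opt moves on a random Euclidean TSP instance; the instance size and budget are arbitrary choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
pts = rng.uniform(size=(30, 2))          # random Euclidean TSP instance

def tour_length(tour):
    return np.linalg.norm(pts[tour] - pts[np.roll(tour, -1)], axis=1).sum()

def rls_fixed_budget(budget):
    """RLS with 2-opt moves, measured under a fixed evaluation budget."""
    tour = rng.permutation(len(pts))
    best = tour_length(tour)
    trace = [best]
    for _ in range(budget):
        i, j = sorted(rng.choice(len(pts), size=2, replace=False))
        # 2-opt move: reverse the segment between positions i and j.
        cand = np.concatenate([tour[:i], tour[i:j + 1][::-1], tour[j + 1:]])
        c = tour_length(cand)
        if c <= best:                    # accept non-worsening moves only
            tour, best = cand, c
        trace.append(best)
    return trace

trace = rls_fixed_budget(2000)
print(trace[0], trace[-1])               # fitness gained within the budget
```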

  9. A fixed-memory moving, expanding window for obtaining scatter corrections in X-ray CT and other stochastic averages

    NASA Astrophysics Data System (ADS)

    Levine, Zachary H.; Pintar, Adam L.

    2015-11-01

    A simple algorithm for averaging a stochastic sequence of 1D arrays in a moving, expanding window is provided. The samples are grouped in bins which increase exponentially in size so that a constant fraction of the samples is retained at any point in the sequence. The algorithm is shown to have particular relevance for a class of Monte Carlo sampling problems which includes one characteristic of iterative reconstruction in computed tomography. The code is available in the CPC program library in both Fortran 95 and C and is also available in R through CRAN.
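
    A sketch of the idea under stated simplifications (scalars instead of 1D arrays, and not the published Fortran 95/C code): bin capacities double as the sequence grows, so only O(log n) partial sums are stored, and each reported mean uses the newest bins covering at least half the samples seen.

```python
import numpy as np

def expanding_window_means(samples):
    """Running mean over a moving, expanding window (sketch of the idea).

    Completed bins double in capacity (1, 2, 4, ...); each reported mean
    uses the current bin plus the newest completed bins covering at least
    half of all samples seen so far.
    """
    bins = []                            # completed bins: (partial_sum, count)
    cap, cur_sum, cur_n = 1, 0.0, 0
    out = []
    for s in samples:
        cur_sum, cur_n = cur_sum + s, cur_n + 1
        if cur_n == cap:                 # bin full: archive it, double capacity
            bins.append((cur_sum, cur_n))
            cap, cur_sum, cur_n = 2 * cap, 0.0, 0
        total = sum(n for _, n in bins) + cur_n
        tail_sum, tail_n = cur_sum, cur_n
        for b_sum, b_n in reversed(bins):
            if 2 * tail_n >= total:      # window already holds half the data
                break
            tail_sum, tail_n = tail_sum + b_sum, tail_n + b_n
        out.append(tail_sum / max(tail_n, 1))
    return out

print(expanding_window_means(np.arange(20.0))[-1])   # mean of recent window
```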

  10. A Real Options Approach to Quantity and Cost Optimization for Lifetime and Bridge Buys of Parts

    DTIC Science & Technology

    2015-04-30

    fixed EOS of 40 years and a fixed WACC of 3%, decreases to a minimum and then increases. The minimum of this curve gives the optimum buy size for...considered in both analyses. For a 3% WACC , as illustrated in Figure 9(a), the DES method gives an optimum buy size range of 2,923–3,191 with an average...Hence, both methods are consistent in determining the optimum lifetime/bridge buy size. To further verify this consistency, other WACC values

  11. Sectional methods for aggregation problems: application to volcanic eruptions

    NASA Astrophysics Data System (ADS)

    Rossi, E.

    2016-12-01

    Particle aggregation is a general problem common to several scientific disciplines such as planetary formation, the food industry and aerosol sciences. So far the ordinary approach to this class of problems relies on the solution of the Smoluchowski Coagulation Equations (SCE), a set of Ordinary Differential Equations (ODEs) derived from the Population Balance Equations (PBE), which describe the change in time of an initial grain-size distribution due to the interaction of "single" particles. The frequency of particle collisions and their sticking efficiencies depend on the specific problem under analysis, but the mathematical framework and the possible solutions to the ODEs are largely discipline-independent and very general. In this work we focus on the problem of volcanic ash aggregation, since it represents an extreme case of complexity that is also relevant to other disciplines. In fact, volcanic ash aggregates observed during fallout are characterized by relevant porosities and do not fit simplified descriptions based on monomer-like structures or fractal geometries. Here we propose a bidimensional approach to the PBEs which uses additive (mass) and non-additive (volume) internal descriptors in order to better characterize the evolution of volcanic ash aggregation. In particular, we use sectional methods (fixed-pivot) to discretize the internal parameter space. This algorithm has been applied to a one-dimensional volcanic plume model in order to investigate how the Total Grain Size Distribution (TGSD) changes throughout the erupted column in real scenarios (i.e., Eyjafjallajokull 2010, Sakurajima 2013 and Mt. Saint Helens 1980).
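
    For reference, the discrete Smoluchowski Coagulation Equations referred to above take the standard one-dimensional form (the classical version, not the authors' bidimensional extension):

        \frac{dn_k}{dt} = \frac{1}{2}\sum_{i+j=k} K_{ij}\, n_i n_j \;-\; n_k \sum_{j \ge 1} K_{kj}\, n_j

    where n_k is the number density of aggregates of size k and K_{ij} is the collision kernel combining collision frequency and sticking efficiency. The first term counts gains from the coalescence of two smaller particles into a size-k aggregate (the factor 1/2 avoids double counting), and the second term counts losses of size-k particles to further aggregation.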

  12. Dynamic Multiple Work Stealing Strategy for Flexible Load Balancing

    NASA Astrophysics Data System (ADS)

    Adnan; Sato, Mitsuhisa

    Lazy-task creation is an efficient method of overcoming the overhead of the grain-size problem in parallel computing. Work stealing is an effective load balancing strategy for parallel computing. In this paper, we present dynamic work stealing strategies in a lazy-task creation technique for efficient fine-grain task scheduling. The basic idea is to control load balancing granularity depending on the number of task parents in a stack. The dynamic-length strategy of work stealing uses run-time information, namely information on the load of the victim, to determine the number of tasks that a thief is allowed to steal. We compare it with the bottommost-first work stealing strategy used in StackThreads/MP, and the fixed-length strategy of work stealing, where a thief requests to steal a fixed number of tasks, as well as other multithreaded frameworks such as Cilk and OpenMP task implementations. The experiments show that the dynamic-length strategy of work stealing performs well in irregular workloads such as the UTS benchmark, as well as in regular workloads such as Fibonacci, Strassen's matrix multiplication, FFT, and Sparse-LU factorization. The dynamic-length strategy works better than the fixed-length strategy because it is more flexible: it can avoid the load imbalance caused by overstealing.
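
    The core policy difference can be shown in a few lines: a fixed-length thief always takes the same number of tasks, while a dynamic-length thief sizes its steal from run-time information about the victim's load. The policy below (steal half the victim's deque) is an assumption for illustration, not the StackThreads/MP implementation.

        from collections import deque

        def steal(victim: deque, strategy="dynamic", fixed_n=1):
            if strategy == "fixed":
                n = min(fixed_n, len(victim))   # fixed-length strategy
            else:
                n = len(victim) // 2            # dynamic: proportional to victim load
            # steal the oldest tasks, leaving recently created tasks local
            return [victim.popleft() for _ in range(n)]

        victim = deque(range(8))
        print(steal(victim))                    # dynamic thief takes half: [0, 1, 2, 3]
        print(steal(victim, strategy="fixed"))  # fixed-length thief takes one: [4]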

  13. Medial Patellofemoral Ligament Reconstruction Procedure Using a Suspensory Femoral Fixation System

    PubMed Central

    Nakagawa, Shuji; Arai, Yuji; Kan, Hiroyuki; Ueshima, Keiichiro; Ikoma, Kazuya; Terauchi, Ryu; Kubo, Toshikazu

    2013-01-01

    Recurrent patellar dislocation has recently been treated with anatomic medial patellofemoral ligament (MPFL) reconstruction using a semitendinosus muscle tendon. Although it is necessary to add tension to fix the tendon graft without loading excess stress on the patellofemoral joint, adjustment of the tension can be difficult. To resolve this problem, we developed an MPFL reconstruction procedure using the ToggleLoc Fixation Device (Biomet, Warsaw, IN), in which the semitendinosus muscle tendon is folded and used as a double-bundle tendon graft and 2 bone tunnels and 1 bone tunnel are made on the patellar and femoral sides, respectively. The patellar side of the tendon graft is fixed with an EndoButton (Smith & Nephew, London, England), and the femoral side is fixed with the ToggleLoc. Stepless adjustment of tension of the tendon graft is possible by reducing the size of the loop of the ToggleLoc hung onto the tendon graft. It may be useful to position the patella in the center of the femoral sulcus by confirming the patellofemoral joint fitting. Stability can be confirmed by loading lateral stress on the patella in the extended knee joint. This procedure is less invasive because opening of the lateral side of the femur is not necessary, and it may be useful for MPFL reconstruction. PMID:24892014

  14. Review of the inverse scattering problem at fixed energy in quantum mechanics

    NASA Technical Reports Server (NTRS)

    Sabatier, P. C.

    1972-01-01

    Methods of solution of the inverse scattering problem at fixed energy in quantum mechanics are presented. Scattering experiments of a beam of particles at a nonrelativistic energy by a target made up of particles are analyzed. The Schroedinger equation is used to develop the quantum mechanical description of the system, with the interaction given by one of several functions depending on the relative distance of the particles. The inverse problem is the construction of the potentials from experimental measurements.

  15. An effective detection algorithm for region duplication forgery in digital images

    NASA Astrophysics Data System (ADS)

    Yavuz, Fatih; Bal, Abdullah; Cukur, Huseyin

    2016-04-01

    Powerful image editing tools are very common and easy to use these days. This makes it easy to forge digital images by adding or removing content. In order to detect these types of forgeries, such as region duplication, we present an effective algorithm based on fixed-size block computation and the discrete wavelet transform (DWT). In this approach, the original image is divided into fixed-size blocks, and then the wavelet transform is applied for dimension reduction. Each block is processed by the Fourier transform and represented by circle regions. Four features are extracted from each block. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are detected according to the comparison metric results. The experimental results show that the proposed algorithm is computationally efficient due to its fixed-size circle block architecture.
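
    The overall pipeline (fixed-size blocks, per-block features, lexicographic sort, nearest-neighbour comparison) can be sketched compactly. The toy features below (mean, standard deviation, one Fourier coefficient) stand in for the paper's DWT-plus-Fourier circle-region features; the structure, not the feature set, is the point.

        import numpy as np

        def find_duplicates(img, b=8, tol=1e-6):
            feats = []
            for y in range(0, img.shape[0] - b + 1, b):
                for x in range(0, img.shape[1] - b + 1, b):
                    block = img[y:y + b, x:x + b].astype(float)
                    f = (block.mean(), block.std(), np.abs(np.fft.fft2(block))[0, 1])
                    feats.append((f, (y, x)))
            feats.sort(key=lambda t: t[0])          # lexicographic sort of features
            pairs = []
            for (f1, p1), (f2, p2) in zip(feats, feats[1:]):
                if all(abs(u - v) < tol for u, v in zip(f1, f2)):
                    pairs.append((p1, p2))          # candidate duplicated blocks
            return pairs

        rng = np.random.default_rng(0)
        img = rng.random((32, 32))
        img[16:24, 16:24] = img[0:8, 0:8]           # simulate a copy-move forgery
        print(find_duplicates(img))                 # flags the duplicated block pair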

  16. Decentralized control of large-scale systems: Fixed modes, sensitivity and parametric robustness. Ph.D. Thesis - Universite Paul Sabatier, 1985

    NASA Technical Reports Server (NTRS)

    Tarras, A.

    1987-01-01

    The problem of stabilization and pole placement under structural constraints in large-scale linear systems is discussed. The existence of a solution to this problem is expressed in terms of fixed modes. The aim is twofold: to provide a bibliographic survey of the available results concerning fixed modes (their characterization, their elimination, control structure selection to avoid them, and control design in their absence), and to present the author's contribution, which can be summarized as the use of the mode sensitivity concept to detect or avoid fixed modes, the use of vibrational control to stabilize them, and the addition of parametric robustness considerations to design an optimal decentralized robust control.

  17. A scaling law for accretion zone sizes

    NASA Technical Reports Server (NTRS)

    Greenzweig, Yuval; Lissauer, Jack J.

    1987-01-01

    Current theories of runaway planetary accretion require small random velocities of the accreted particles. Two body gravitational accretion cross sections which ignore tidal perturbations of the Sun are not valid for the slow encounters which occur at low relative velocities. Wetherill and Cox have studied accretion cross sections for rocky protoplanets orbiting at 1 AU. Using analytic methods based on Hill's lunar theory, one can scale these results for protoplanets that occupy the same fraction of their Hill sphere as does a rocky body at 1 AU. Generalization to bodies of different sizes is achieved here by numerical integrations of the three-body problem. Starting at initial positions far from the accreting body, test particles are allowed to encounter the body once, and the cross section is computed. A power law is found relating the cross section to the radius of the accreting body (of fixed mass).

  18. Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, V. N.; Toussaint, U. V.; Timucin, D. A.

    2002-01-01

    We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with a slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap, g_min = O(n 2^(-n/2)), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As a result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of a quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.

  19. Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadius; vonToussaint, Udo V.; Timucin, Dogan A.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with a slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap, g_min = O(n 2^(-n/2)), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As a result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of a quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.

  20. Recommended protocols for sampling macrofungi

    Treesearch

    Gregory M. Mueller; John Paul Schmit; Sabine M. Huhndorf; Leif Ryvarden; Thomas E. O'Dell; D. Jean Lodge; Patrick R. Leacock; Milagro Mata; Loengrin Umania; Qiuxin (Florence) Wu; Daniel L. Czederpiltz

    2004-01-01

    This chapter discusses several issues regarding recommended protocols for sampling macrofungi: opportunistic sampling of macrofungi, sampling conspicuous macrofungi using fixed-size plots, sampling small Ascomycetes using microplots, and sampling a fixed number of downed logs.

  1. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N* in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Effects of video-assisted training on employment-related social skills of adults with severe mental retardation.

    PubMed Central

    Morgan, R L; Salzberg, C L

    1992-01-01

    Two studies investigated effects of video-assisted training on employment-related social skills of adults with severe mental retardation. In video-assisted training, participants discriminated a model's behavior on videotape and received feedback from the trainer for responses to questions about video scenes. In the first study, 3 adults in an employment program participated in video-assisted training to request their supervisor's assistance when encountering work problems. Results indicated that participants discriminated the target behavior on video but effects did not generalize to the work setting for 2 participants until they rehearsed the behavior. In the second study, 2 participants were taught to fix and report four work problems using video-assisted procedures. Results indicated that after participants rehearsed how to fix and report one or two work problems, they began to fix and report the remaining problems with video-assisted training alone. PMID:1378826

  3. Tuning iteration space slicing based tiled multi-core code implementing Nussinov's RNA folding.

    PubMed

    Palkowski, Marek; Bielecki, Wlodzimierz

    2018-01-15

    RNA folding is an ongoing compute-intensive task of bioinformatics. Parallelization and improving code locality for this kind of algorithm is one of the most relevant areas in computational biology. Fortunately, RNA secondary structure approaches, such as Nussinov's recurrence, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. This allows us to apply powerful polyhedral compilation techniques based on the transitive closure of dependence graphs to generate parallel tiled code implementing Nussinov's RNA folding. Such techniques fall within the iteration space slicing framework: the transitive dependences are applied to the statement instances of interest to produce valid tiles. The main problem in generating parallel tiled code is choosing a proper tile size and tile dimension, which impact the degree of parallelism and code locality. To choose the best tile size and tile dimension, we first construct parallel parametric tiled code (the parameters are variables defining tile size). For this purpose, we generate two non-parametric tiled codes with different fixed tile sizes but the same code structure, and then derive a general affine model describing all integer factors appearing in the expressions of those codes. Using this model and the known integer factors present in those expressions (they define the left-hand side of the model), we solve for the unknown integers in the model for each integer factor appearing at the same position in the fixed tiled code, and replace the expressions involving integer factors with expressions involving parameters. We then use this parallel parametric tiled code to implement the well-known tile size selection (TSS) technique, which allows us to discover, within a given search space, the best tile size and tile dimension maximizing target code performance. For a given search space, the presented approach allows us to choose the best tile size and tile dimension in parallel tiled code implementing Nussinov's RNA folding. Experimental results, obtained on modern Intel multi-core processors, demonstrate that this code outperforms known closely related implementations when the length of the RNA strands is greater than 2500.
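
    For context, the recurrence that all of these tiled codes implement is Nussinov's O(n^3) dynamic program for maximizing base pairs. A plain, untiled version is sketched below; the tiled codes reorganize exactly these loops for locality and parallelism.

        def nussinov(seq, pairs=("AU", "UA", "GC", "CG", "GU", "UG")):
            n = len(seq)
            N = [[0] * n for _ in range(n)]
            for span in range(1, n):                    # increasing subsequence length
                for i in range(n - span):
                    j = i + span
                    # either pair position i with j, or split the interval at k
                    best = N[i + 1][j - 1] + (seq[i] + seq[j] in pairs)
                    best = max(best, max(N[i][k] + N[k + 1][j] for k in range(i, j)))
                    N[i][j] = best
            return N[0][n - 1]                          # maximum number of base pairs

        print(nussinov("GGGAAAUCC"))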

  4. (Un)Fixing Education

    ERIC Educational Resources Information Center

    Kuntz, Aaron M.; Petrovic, John E.

    2018-01-01

    In this article we consider the material dimensions of schooling as constitutive of the possibilities inherent in "fixing" education. We begin by mapping out the problem of "fixing education," pointing to the necrophilic tendencies of contemporary education--a desire to kill what otherwise might be life-giving. In this sense,…

  5. Resonant frequency analysis of Timoshenko nanowires with surface stress for different boundary conditions

    NASA Astrophysics Data System (ADS)

    He, Qilu; Lilley, Carmen M.

    2012-10-01

    The influence of both surface and shear effects on the resonant frequency of nanowires (NWs) was studied by incorporating the Young-Laplace equation with the Timoshenko beam theory. Face-centered-cubic metal NWs were studied. A dimensional analysis of the resonant frequencies of fixed-fixed gold (100) NWs was compared with molecular dynamics simulations. Silver NWs with diameters from 10 nm to 500 nm were modeled as cantilever, simply supported, and fixed-fixed systems for aspect ratios from 2.5 to 20 to identify the shear, surface, and size effects on the resonant frequencies. The shear effect was found to be more significant than surface effects when the aspect ratios were small (i.e., <5), regardless of size for the diameters modeled. Finally, as the aspect ratio grows, the surface effect becomes significant for the smaller-diameter NWs.

  6. Fictitious domain method for fully resolved reacting gas-solid flow simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Longhui; Liu, Kai; You, Changfu

    2015-10-01

    Fully resolved simulation (FRS) of gas-solid multiphase flow treats solid objects as finite-sized regions of the flow field and predicts their behaviour by solving equations in both the fluid and solid regions directly. Fixed-mesh numerical methods, such as the fictitious domain method, are preferred for solving FRS problems and have been widely researched. However, for reacting gas-solid flows no suitable fictitious domain numerical method had been developed. This work presents a new fictitious domain finite element method for FRS of reacting particulate flows. Low-Mach-number reacting flow governing equations are solved sequentially on a regular background mesh. Particles are immersed in the mesh and driven by their surface forces and torques integrated over the immersed interfaces. Additional treatments of energy and surface reactions are developed. Several numerical test cases validate the method, and a simulation of a falling array of burning carbon particles demonstrates its capability for solving problems with moving, reacting particle clusters.

  7. Markov Tracking for Agent Coordination

    NASA Technical Reports Server (NTRS)

    Washington, Richard; Lau, Sonie (Technical Monitor)

    1998-01-01

    Partially observable Markov decision processes (POMDPs) are an attractive representation for agent behavior, since they capture uncertainty in both the agent's state and its actions. However, finding an optimal policy for POMDPs in general is computationally difficult. In this paper we present Markov Tracking, a restricted problem of coordinating actions with an agent or process represented as a POMDP. Because the actions coordinate with the agent rather than influence its behavior, the optimal solution to this problem can be computed locally and quickly. We also demonstrate the use of the technique on sequential POMDPs, which can be used to model a behavior that follows a linear, acyclic trajectory through a series of states. By imposing a "windowing" restriction that limits the number of possible alternatives considered at any moment to a fixed size, a coordinating action can be calculated in constant time, making this amenable to coordination with complex agents.
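
    A hedged sketch of how such a windowing restriction keeps each update cheap: the belief is maintained only over a fixed-size window of consecutive states along the linear trajectory, so the update cost is constant per observation. The transition table T, observation table O, and window policy below are illustrative assumptions, not the paper's model.

        def windowed_belief_update(belief, obs, T, O, window=3):
            # belief: {state: prob} restricted to `window` consecutive states
            states = sorted(belief)
            new = {}
            for s2 in range(states[0], states[0] + window):
                p = sum(belief.get(s, 0.0) * T.get((s, s2), 0.0) for s in states)
                new[s2] = p * O.get((s2, obs), 0.0)     # weight by observation model
            z = sum(new.values()) or 1.0
            return {s: p / z for s, p in new.items()}   # renormalize

        T = {(0, 0): 0.4, (0, 1): 0.6, (1, 1): 0.5, (1, 2): 0.5, (2, 2): 1.0}
        O = {(0, "lo"): 0.8, (1, "lo"): 0.3, (1, "hi"): 0.7, (2, "hi"): 0.9}
        print(windowed_belief_update({0: 0.7, 1: 0.3, 2: 0.0}, "hi", T, O))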

  8. POSTPROCESSING MIXED FINITE ELEMENT METHODS FOR SOLVING CAHN-HILLIARD EQUATION: METHODS AND ERROR ANALYSIS

    PubMed Central

    Wang, Wansheng; Chen, Long; Zhou, Jie

    2015-01-01

    A postprocessing technique for mixed finite element methods for the Cahn-Hilliard equation is developed and analyzed. Once the mixed finite element approximations have been computed at a fixed time on the coarser mesh, the approximations are postprocessed by solving two decoupled Poisson equations in an enriched finite element space (either on a finer grid or in a higher-order space), for which many fast Poisson solvers can be applied. The nonlinear iteration is applied only to a much smaller problem, and the computational cost of using Newton and direct solvers there is negligible compared with the cost of the linear problem. The analysis presented here shows that this technique retains the optimal rate of convergence for both the concentration and the chemical potential approximations. The corresponding error estimates obtained in our paper, especially the negative-norm error estimates, are non-trivial and differ from the existing results in the literature. PMID:27110063
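
    For orientation, the mixed (splitting) form of the Cahn-Hilliard equation that such methods discretize replaces the fourth-order equation by two coupled second-order ones; in the standard form (the paper's exact setting may differ in details),

        u_t = \Delta w, \qquad w = \varphi(u) - \epsilon^2 \Delta u, \qquad \varphi(u) = u^3 - u,

    which is why the postprocessing step reduces to two decoupled Poisson-type solves for the concentration u and the chemical potential w in the enriched space.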

  9. Isomorphism of dimer configurations and spanning trees on finite square lattices

    NASA Astrophysics Data System (ADS)

    Brankov, J. G.

    1995-09-01

    One-to-one mappings of the close-packed dimer configurations on a finite square lattice with free boundaries L onto the spanning trees of a related graph (or two-graph) G are found. The graph (two-graph) G can be constructed from L by: (1) deleting all the vertices of L with arbitrarily fixed parity of the row and column numbers; (2) suppressing all the vertices of degree 2 except those of degree 2 in L; (3) merging all the vertices of degree 1 into a single vertex g. The matrix Kirchhoff theorem reduces the enumeration problem for the spanning trees on G to the eigenvalue problem for the discrete Laplacian on the square lattice L' = G - g with mixed Dirichlet-Neumann boundary conditions in at least one direction. That fact explains some of the unusual finite-size properties of the dimer model.

  10. Fixed Point Results for G-α-Contractive Maps with Application to Boundary Value Problems

    PubMed Central

    Roshan, Jamal Rezaei

    2014-01-01

    We unify the concepts of G-metric, metric-like, and b-metric to define a new notion of generalized b-metric-like space and discuss its topological and structural properties. In addition, certain fixed point theorems for two classes of G-α-admissible contractive mappings in such spaces are obtained, and some new fixed point results are derived in the corresponding partially ordered space. Moreover, some examples and an application to the existence of a solution for the first-order periodic boundary value problem are provided here to illustrate the usability of the obtained results. PMID:24895655

  11. Differential establishment and maintenance of oral ethanol reinforced behavior in Lewis and Fischer 344 inbred rat strains.

    PubMed

    Suzuki, T; George, F R; Meisch, R A

    1988-04-01

    Oral ethanol self-administration was investigated systematically in two inbred strains of rats, Fischer 344 CDF (F-344)/CRLBR (F344) and Lewis LEW/CRLBR (LEW). For both strains ethanol maintained higher response rates and was consumed in larger volumes than the water vehicle. In addition, blood ethanol levels increased with increases in ethanol concentration. However, LEW rats drank substantially more ethanol than F344 rats. The typical inverted U-shaped function between ethanol concentration and number of deliveries was observed for the LEW rats, whereas for the F344 rats much smaller differences were seen between ethanol and water maintained responding. For the LEW strain, as the fixed-ratio size was increased, the number of responses increased almost in direct proportion to the fixed-ratio size increase, so that at least at the lower fixed-ratio values the rats were obtaining similar numbers of deliveries at different fixed-ratio sizes. However, a decrease in ethanol deliveries and blood ethanol levels was observed at higher fixed-ratio sizes. Similar results were obtained in F344 rats, but the amount of responding was lower and less consistent. LEW rats showed significantly higher response rates, numbers of ethanol deliveries and blood ethanol levels. Ethanol-induced behavioral activation also was observed in LEW rats, but not in F344 rats. These results support the conclusion that ethanol serves as a strong positive reinforcer for LEW rats and as a weak positive reinforcer for F344 rats, and that genotype is a determinant of the degree to which ethanol functions as a reinforcer.

  12. Anderson Acceleration for Fixed-Point Iterations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Homer F.

    The purpose of this grant was to support research on acceleration methods for fixed-point iterations, with applications to computational frameworks and simulation problems that are of interest to DOE.
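
    Anderson acceleration itself is compact enough to sketch: each iterate is replaced by a least-squares combination of the last m fixed-point evaluations chosen to minimize the residual. The following is a textbook-style illustration of the method, not code from the project reported here.

        import numpy as np

        def anderson(g, x0, m=5, iters=50, tol=1e-10):
            X, G = [], []                      # histories of x_k and g(x_k)
            x = np.asarray(x0, float)
            for _ in range(iters):
                gx = g(x)
                X.append(x); G.append(gx)
                if len(X) > m + 1:             # keep a window of m + 1 iterates
                    X.pop(0); G.pop(0)
                if len(X) == 1:
                    x = gx                     # plain fixed-point step to start
                else:
                    F = np.array([gk - xk for xk, gk in zip(X, G)])  # residuals
                    dF = np.diff(F, axis=0)                # residual differences
                    dG = np.diff(np.array(G), axis=0)
                    gamma, *_ = np.linalg.lstsq(dF.T, F[-1], rcond=None)
                    x = gx - gamma @ dG        # accelerated iterate
                if np.linalg.norm(g(x) - x) < tol:
                    break
            return x

        print(anderson(np.cos, np.array([1.0])))   # solves x = cos(x) ~ 0.739085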

  13. Sizing procedures for sun-tracking PV system with batteries

    NASA Astrophysics Data System (ADS)

    Nezih Gerek, Ömer; Başaran Filik, Ümmühan; Filik, Tansu

    2017-11-01

    Deciding the optimum number of PV panels, wind turbines and batteries (i.e., a complete renewable energy system) for minimum cost and complete energy balance is a challenging and interesting problem. In the literature, rough data models or limited recorded data, together with low-resolution hourly averaged meteorological values, are used to test sizing strategies. In this study, active sun-tracking and fixed PV solar power generation values of ready-to-serve commercial products were recorded throughout 2015-2016. Simultaneously, several outdoor parameters (solar radiation, temperature, humidity, wind speed/direction, pressure) were recorded at high resolution. The hourly energy consumption values of a standard 4-person household, constructed on our campus in Eskisehir, Turkey, were also recorded for the same period. For sizing, novel parametric random-process models for wind speed, temperature, solar radiation, energy demand and electricity generation curves are developed, and Monte Carlo experiments considering average- and minimum-performance cases show that these models provide sizing results with a lower loss-of-load probability (LLP). Furthermore, a novel cost optimization strategy shows that sun-tracking PV panels provide lower costs by enabling a reduced number of installed batteries. Results are verified against real recorded data.
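
    A toy version of the Monte Carlo sizing loop illustrates how an LLP estimate drives the panel/battery decision. All of the numbers and distributions below (per-panel yield, battery capacity, demand) are placeholder assumptions standing in for the parametric models fitted in the study.

        import random

        def llp(n_pv, n_batt, days=10000, seed=1):
            rng = random.Random(seed)
            cap = n_batt * 2.0                             # kWh per battery (assumed)
            soc, lost = cap, 0                             # state of charge, deficit days
            for _ in range(days):
                gen = n_pv * max(rng.gauss(1.2, 0.4), 0.0) # kWh/panel/day (assumed)
                demand = max(rng.gauss(8.0, 1.5), 0.0)     # household kWh/day (assumed)
                soc = min(soc + gen - demand, cap)
                if soc < 0:
                    lost += 1                              # a day the demand was unmet
                    soc = 0.0
            return lost / days

        # scan candidate sizings and keep those below a target LLP
        for n_pv, n_batt in [(6, 2), (8, 4), (10, 6)]:
            print(n_pv, n_batt, llp(n_pv, n_batt))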

  14. Optimization of multi-objective integrated process planning and scheduling problem using a priority based optimization algorithm

    NASA Astrophysics Data System (ADS)

    Ausaf, Muhammad Farhan; Gao, Liang; Li, Xinyu

    2015-12-01

    For increasing the overall performance of modern manufacturing systems, effective integration of process planning and scheduling functions has been an important area of consideration among researchers. Owing to the complexity of handling process planning and scheduling simultaneously, most of the research work has been limited to solving the integrated process planning and scheduling (IPPS) problem for a single objective function. As there are many conflicting objectives when dealing with process planning and scheduling, real-world problems cannot be fully captured by considering only a single objective for optimization. Therefore, considering the multi-objective IPPS (MOIPPS) problem is inevitable. Unfortunately, only a handful of research papers are available on solving the MOIPPS problem. In this paper, an optimization algorithm for solving the MOIPPS problem is presented. The proposed algorithm uses a set of dispatching rules coupled with priority assignment to optimize the IPPS problem for various objectives like makespan, total machine load, and total tardiness. A fixed-size external archive coupled with a crowding distance mechanism is used to store and maintain the non-dominated solutions. To compare the results with other algorithms, a C-metric-based method has been used. Instances from four recent papers have been solved to demonstrate the effectiveness of the proposed algorithm. The experimental results show that the proposed method is an efficient approach for solving the MOIPPS problem.
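
    The fixed-size archive with crowding-distance truncation is a standard building block and can be sketched directly (minimization of all objectives assumed; this is the generic mechanism, not the paper's exact procedure):

        def dominates(a, b):
            return all(x <= y for x, y in zip(a, b)) and a != b

        def crowding(front):
            n, k = len(front), len(front[0])
            d = [0.0] * n
            for m in range(k):
                order = sorted(range(n), key=lambda i: front[i][m])
                d[order[0]] = d[order[-1]] = float("inf")   # always keep extremes
                span = front[order[-1]][m] - front[order[0]][m] or 1.0
                for lo, mid, hi in zip(order, order[1:-1], order[2:]):
                    d[mid] += (front[hi][m] - front[lo][m]) / span
            return d

        def update_archive(archive, cand, max_size):
            if any(dominates(a, cand) for a in archive):
                return archive                              # candidate is dominated
            archive = [a for a in archive if not dominates(cand, a)] + [cand]
            if len(archive) > max_size:                     # evict the most crowded point
                d = crowding(archive)
                archive.pop(d.index(min(d)))
            return archive

        arch = []
        for pt in [(1, 5), (2, 4), (3, 3), (2.1, 3.9), (5, 1)]:
            arch = update_archive(arch, pt, max_size=4)
        print(arch)   # four well-spread non-dominated points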

  15. Miniature intermittent contact switch

    NASA Technical Reports Server (NTRS)

    Sword, A.

    1972-01-01

    Design of electric switch for providing intermittent contact is presented. Switch consists of flexible conductor surrounding, but separated from, fixed conductor. Flexing of outside conductor to contact fixed conductor completes circuit. Advantage is small size of switch compared to standard switches.

  16. Phase-contrast X-ray computed tomography of non-formalin fixed biological objects

    NASA Astrophysics Data System (ADS)

    Takeda, Tohoru; Momose, Atsushi; Wu, Jin; Zeniya, Tsutomu; Yu, Quanwen; Thet-Thet-Lwin; Itai, Yuji

    2001-07-01

    Using a monolithic X-ray interferometer with a view size of 25 mm × 25 mm, phase-contrast X-ray CT (PCCT) was performed on non-formalin-fixed livers of two normal rats and of a rabbit transplanted with VX-2 cancer. PCCT images of the liver and cancer lesions closely resembled those obtained from formalin-fixed samples.

  17. Experimentally reducing clutch size reveals a fixed upper limit to egg size in snakes, evidence from the king ratsnake, Elaphe carinata.

    PubMed

    Ji, Xiang; Du, Wei-Guo; Li, Hong; Lin, Long-Hui

    2006-08-01

    Snakes are free of the pelvic girdle's constraint on maximum offspring size, and therefore present an opportunity to investigate the upper limit to offspring size without the limit imposed by the pelvic girdle dimension. We used the king ratsnake (Elaphe carinata) as a model animal to examine whether follicle ablation may result in enlargement of egg size in snakes and, if so, whether there is a fixed upper limit to egg size. Females with small sized yolking follicles were assigned to three manipulated, one sham-manipulated and one control treatments in mid-May, and two, four or six yolking follicles in the manipulated females were then ablated. Females undergoing follicle ablation produced fewer, but larger as well as more elongated, eggs than control females primarily by increasing egg length. This finding suggests that follicle ablation may result in enlargement of egg size in E. carinata. Mean values for egg width remained almost unchanged across the five treatments, suggesting that egg width is more likely to be shaped by the morphological feature of the oviduct. Clutch mass dropped dramatically in four- and six-follicle ablated females. The function describing the relationship between size and number of eggs reveals that egg size increases with decreasing clutch size at an ever-decreasing rate, with the tangent slope of the function for the six-follicle ablation treatment being -0.04. According to the function describing instantaneous variation in tangent slope, the maximum value of tangent slope should converge towards zero. This result provides evidence that there is a fixed upper limit to egg size in E. carinata.

  18. Parallel Implementation of the Discontinuous Galerkin Method

    NASA Technical Reports Server (NTRS)

    Baggag, Abdalkader; Atkins, Harold; Keyes, David

    1999-01-01

    This paper describes a parallel implementation of the discontinuous Galerkin method. Discontinuous Galerkin is a spatially compact method that retains its accuracy and robustness on non-smooth unstructured grids and is well suited for time-dependent simulations. Several parallelization approaches are studied and evaluated. The most natural and symmetric of the approaches has been implemented in an object-oriented code used to simulate aeroacoustic scattering. The parallel implementation is MPI-based and has been tested on various parallel platforms such as the SGI Origin, IBM SP2, and clusters of SGI and Sun workstations. The scalability results presented for the SGI Origin show slightly superlinear speedup on a fixed-size problem due to cache effects.

  19. Solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators.

    PubMed

    Zhao, Jing; Zong, Haili

    2018-01-01

    In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the process of cyclic and parallel iterative methods and propose two mixed iterative algorithms. Our several algorithms do not need any prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms to solve the multiple-set split equality problem.

  20. A dual method for optimal control problems with initial and final boundary constraints.

    NASA Technical Reports Server (NTRS)

    Pironneau, O.; Polak, E.

    1973-01-01

    This paper presents two new algorithms belonging to the family of dual methods of centers. The first can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states. The second one can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states and with affine instantaneous inequality constraints on the control. Convergence is established for both algorithms. Qualitative reasoning indicates that the rate of convergence is linear.

  1. Finite-time consensus for multi-agent systems with globally bounded convergence time under directed communication graphs

    NASA Astrophysics Data System (ADS)

    Fu, Junjie; Wang, Jin-zhi

    2017-09-01

    In this paper, we study finite-time consensus problems with globally bounded convergence time, also known as fixed-time consensus problems, for multi-agent systems subject to directed communication graphs. Two new distributed control strategies are proposed such that leaderless and leader-follower consensus are achieved with convergence time independent of the initial conditions of the agents. Fixed-time formation generation and formation tracking problems are also solved as generalizations. Simulation examples are provided to demonstrate the performance of the new controllers.
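
    The defining property (a settling-time bound independent of initial conditions) is commonly obtained with protocols mixing a sub-linear and a super-linear power of the disagreement. The simulation below uses the generic single-integrator protocol u_i = sum_j a_ij (sig^a(x_j - x_i) + sig^b(x_j - x_i)) with a < 1 < b on an undirected complete graph; the gains, graph, and integration scheme are illustrative assumptions, not the paper's controllers (which handle directed graphs).

        import numpy as np

        def sig(e, p):
            return np.sign(e) * np.abs(e) ** p      # signed power function

        def simulate(x0, A, a=0.5, b=1.5, dt=1e-3, T=5.0):
            x = np.asarray(x0, float)
            for _ in range(int(T / dt)):
                e = x[None, :] - x[:, None]         # e[i, j] = x_j - x_i
                u = (A * (sig(e, a) + sig(e, b))).sum(axis=1)
                x = x + dt * u                      # explicit Euler step
            return x

        A = np.ones((4, 4)) - np.eye(4)             # complete graph, unit weights
        print(simulate([10.0, -3.0, 0.5, 7.0], A))  # all states reach consensus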

  2. Concentration of Access to Information and Communication Technologies in the Municipalities of the Brazilian Legal Amazon.

    PubMed

    de Brito, Silvana Rossy; da Silva, Aleksandra do Socorro; Cruz, Adejard Gaia; Monteiro, Maurílio de Abreu; Vijaykumar, Nandamudi Lankalapalli; da Silva, Marcelino Silva; Costa, João Crisóstomo Weyl Albuquerque; Francês, Carlos Renato Lisboa

    2016-01-01

    This study fills demand for data on access and use of information and communication technologies (ICT) in the Brazilian legal Amazon, a region of localities with identical economic, political, and social problems. We use the 2010 Brazilian Demographic Census to compile data on urban and rural households (i) with computers and Internet access, (ii) with mobile phones, and (iii) with fixed phones. To compare the concentration of access to ICT in the municipalities of the Brazilian Amazon with other regions of Brazil, we use a concentration index to quantify the concentration of households in the following classes: with computers and Internet access, with mobile phones, with fixed phones, and no access. These data are analyzed along with municipal indicators on income, education, electricity, and population size. The results show that for urban households, the average concentration in the municipalities of the Amazon for computers and Internet access and for fixed phones is lower than in other regions of the country; meanwhile, that for no access and mobile phones is higher than in any other region. For rural households, the average concentration in the municipalities of the Amazon for computers and Internet access, mobile phones, and fixed phones is lower than in any other region of the country; meanwhile, that for no access is higher than in any other region. In addition, the study shows that education and income are determinants of inequality in accessing ICT in Brazilian municipalities and that the existence of electricity in rural households is directly associated with the ownership of ICT resources.

  3. Computer-Based Oral Hygiene Instruction versus Verbal Method in Fixed Orthodontic Patients

    PubMed Central

    Moshkelgosha, V.; Mehrvarz, Sh.; Saki, M.; Golkari, A.

    2017-01-01

    Statement of Problem: Fixed orthodontic appliances in the oral cavity make tooth-cleaning procedures more complicated. Objectives: This study aimed to compare the efficacy of computerized oral hygiene instruction with the verbal technique among fixed orthodontic patients referred to the evening clinic of Orthodontics of Shiraz Dental School. Materials and Methods: A single-blind study was performed in the Orthodontic Department of Shiraz, Islamic Republic of Iran, from January to May 2015, following the stated exclusion and inclusion criteria. The sample size was 60 patients, with 30 subjects in each group. Bleeding on probing and plaque indices and dental knowledge were assessed to determine pre-intervention status. A questionnaire was designed for the evaluation of dental knowledge. The patients were randomly assigned to the computerized and verbal groups. Three weeks after the oral hygiene instruction, the bleeding on probing and plaque indices and dental knowledge were evaluated again to assess post-intervention outcomes. The two groups were compared by chi-square and Student's t tests. The pre- and post-intervention scores in each group were compared using the paired t-test. Results: In the computerized group, the mean scores for the plaque index and the bleeding on probing index decreased significantly while dental health knowledge increased significantly after oral hygiene instruction, in contrast to the verbal group. Conclusions: Within the limitations of the current study, computerized oral hygiene instruction appears more effective in providing optimal oral health status than the conventional method in fixed orthodontic patients. PMID:28959765

  4. Concentration of Access to Information and Communication Technologies in the Municipalities of the Brazilian Legal Amazon

    PubMed Central

    de Brito, Silvana Rossy; da Silva, Aleksandra do Socorro; Cruz, Adejard Gaia; Monteiro, Maurílio de Abreu; Vijaykumar, Nandamudi Lankalapalli; da Silva, Marcelino Silva; Costa, João Crisóstomo Weyl Albuquerque; Francês, Carlos Renato Lisboa

    2016-01-01

    This study fills demand for data on access and use of information and communication technologies (ICT) in the Brazilian legal Amazon, a region of localities with identical economic, political, and social problems. We use the 2010 Brazilian Demographic Census to compile data on urban and rural households (i) with computers and Internet access, (ii) with mobile phones, and (iii) with fixed phones. To compare the concentration of access to ICT in the municipalities of the Brazilian Amazon with other regions of Brazil, we use a concentration index to quantify the concentration of households in the following classes: with computers and Internet access, with mobile phones, with fixed phones, and no access. These data are analyzed along with municipal indicators on income, education, electricity, and population size. The results show that for urban households, the average concentration in the municipalities of the Amazon for computers and Internet access and for fixed phones is lower than in other regions of the country; meanwhile, that for no access and mobile phones is higher than in any other region. For rural households, the average concentration in the municipalities of the Amazon for computers and Internet access, mobile phones, and fixed phones is lower than in any other region of the country; meanwhile, that for no access is higher than in any other region. In addition, the study shows that education and income are determinants of inequality in accessing ICT in Brazilian municipalities and that the existence of electricity in rural households is directly associated with the ownership of ICT resources. PMID:27035577

  5. Renovation of the fixing and loading factors of the beam by the spectral data of free flexural vibrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akhymbek, Meiram Erkanatuly; Yessirkegenov, Nurgissa Amankeldiuly; Sadybekov, Makhmud Abdysametovich

    2015-09-18

    In the current paper, the problem of bending vibrations of a beam whose fixing at the right end is unknown and not available for visual inspection is studied. The main objective is to solve an inverse problem: to recover the unknown boundary conditions, i.e., the conditions of fixing at the right end of the rod, from additional spectral data. In this work, unlike many other works, as such additional data we choose the first natural frequencies (eigenvalues) of two new problems corresponding to the problem of bending vibrations of a beam with loads of different weights at the central point.

  6. The free versus fixed geodetic boundary value problem for different combinations of geodetic observables

    NASA Astrophysics Data System (ADS)

    Grafarend, E. W.; Heck, B.; Knickmeyer, E. H.

    1985-03-01

    Various formulations of the geodetic fixed and free boundary value problem are presented, depending upon the type of boundary data. For the free problem, boundary data of type astronomical latitude, astronomical longitude and a pair of the triplet potential, zero and first-order vertical gradient of gravity are presupposed. For the fixed problem, either the potential or gravity or the vertical gradient of gravity is assumed to be given on the boundary. The potential and its derivatives on the boundary surface are linearized with respect to a reference potential and a reference surface by Taylor expansion. The Eulerian and Lagrangean concepts of a perturbation theory of the nonlinear geodetic boundary value problem are reviewed. Finally the boundary value problems are solved by Hilbert space techniques leading to new generalized Stokes and Hotine functions. Reduced Stokes and Hotine functions are recommended for numerical reasons. For the case of a boundary surface representing the topography a base representation of the solution is achieved by solving an infinite dimensional system of equations. This system of equations is obtained by means of the product-sum-formula for scalar surface spherical harmonics with Wigner 3j-coefficients.

  7. Data Assimilation on a Quantum Annealing Computer: Feasibility and Scalability

    NASA Astrophysics Data System (ADS)

    Nearing, G. S.; Halem, M.; Chapman, D. R.; Pelissier, C. S.

    2014-12-01

    Data assimilation is one of the ubiquitous and computationally hard problems in the Earth sciences. In particular, ensemble-based methods require a large number of model evaluations to estimate the prior probability density over system states, and variational methods require adjoint calculations and iteration to locate the maximum a posteriori solution in the presence of nonlinear models and observation operators. Quantum annealing computers (QACs) like the new D-Wave housed at the NASA Ames Research Center can be used for optimization and sampling, and therefore offer a new possibility for efficiently solving hard data assimilation problems. Coding on the QAC is not straightforward: a problem must be posed as a Quadratic Unconstrained Binary Optimization (QUBO) and mapped to a spherical Chimera graph. We have developed a method for compiling nonlinear 4D-Var problems on the D-Wave that consists of five steps: (1) emulating the nonlinear model and/or observation function using radial basis functions (RBFs) or Chebyshev polynomials; (2) truncating a Taylor series around each RBF kernel; (3) reducing the Taylor polynomial to a quadratic using ancilla gadgets; (4) mapping the real-valued quadratic to a fixed-precision binary quadratic; and (5) mapping the fully coupled binary quadratic to a partially coupled spherical Chimera graph using ancilla gadgets. At present the D-Wave contains 512 qubits (with 1024- and 2048-qubit machines due in the next two years); this machine size allows us to estimate only 3 state variables at each satellite overpass. However, QACs solve optimization problems using a physical (quantum) system, and therefore do not require iterations or calculation of model adjoints. This has the potential to revolutionize our ability to efficiently perform variational data assimilation, as the size of these computers grows in the coming years.
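
    Step (4) ends with a problem of the standard QUBO form: minimize x^T Q x over binary vectors x. The toy below brute-forces a 3-bit instance to show exactly what the annealer is asked to minimize; the Q matrix is an arbitrary example, not one produced by the authors' compilation chain.

        import itertools

        import numpy as np

        Q = np.array([[1.0, -2.0,  0.5],
                      [0.0,  1.0, -1.0],
                      [0.0,  0.0,  0.3]])      # upper-triangular QUBO coefficients

        best = min(((x @ Q @ x, tuple(x))
                    for x in (np.array(b) for b in itertools.product([0, 1], repeat=3))),
                   key=lambda t: t[0])
        print(best)   # lowest energy and its bit assignment: (-0.2, (1, 1, 1))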

  8. Dynamical Chaos in the Wisdom-Holman Integrator: Origins and Solutions

    NASA Technical Reports Server (NTRS)

    Rauch, Kevin P.; Holman, Matthew

    1999-01-01

    We examine the nonlinear stability of the Wisdom-Holman (WH) symplectic mapping applied to the integration of perturbed, highly eccentric (e ~ 0.9) two-body orbits. We find that the method is unstable and introduces artificial chaos into the computed trajectories for this class of problems, unless the step size chosen is small enough that periapse is always resolved, in which case the method is generically stable. This 'radial orbit instability' persists even for weakly perturbed systems. Using the Stark problem as a fiducial test case, we investigate the dynamical origin of this instability and argue that the numerical chaos results from the overlap of step-size resonances; interestingly, for the Stark problem many of these resonances appear to be absolutely stable. We similarly examine the robustness of several alternative integration methods: a time-regularized version of the WH mapping suggested by Mikkola; the potential-splitting (PS) method of Duncan, Levison, and Lee; and two original methods incorporating approximations based on Stark motion instead of Keplerian motion. The two-fixed-point problem and a related, more general problem are used to conduct a comparative test of the various methods for several types of motion. Among the algorithms tested, the time-transformed WH mapping is clearly the most efficient and stable method of integrating eccentric, nearly Keplerian orbits in the absence of close encounters. For test particles subject to both high eccentricities and very close encounters, we find an enhanced version of the PS method (incorporating time regularization, force-center switching, and an improved kernel function) to be both economical and highly versatile. We conclude that Stark-based methods are of marginal utility in N-body type integrations. Additional implications for the symplectic integration of N-body systems are discussed.
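
    The failure mode discussed here is easy to reproduce with any fixed-step splitting scheme. The sketch below integrates a highly eccentric (e = 0.9) two-body orbit with plain leapfrog (kick-drift-kick); leapfrog splits kinetic and potential terms rather than Kepler and perturbation terms as WH does, but it exhibits the same requirement that the fixed step resolve periapse. The perturbation term is a toy assumption.

        import numpy as np

        def accel(r, eps=1e-3):
            a = -r / np.linalg.norm(r) ** 3           # Keplerian part (mu = 1)
            return a + eps * np.array([r[0], 0.0])    # toy perturbation

        def leapfrog(r, v, h, steps):
            for _ in range(steps):
                v = v + 0.5 * h * accel(r)            # kick
                r = r + h * v                         # drift
                v = v + 0.5 * h * accel(r)            # kick
            return r, v

        # start at apoapsis of an a = 1, e = 0.9 orbit: r = 1.9, v = sqrt(0.1/1.9)
        r, v = np.array([1.9, 0.0]), np.array([0.0, (0.1 / 1.9) ** 0.5])
        r, v = leapfrog(r, v, h=1e-3, steps=20000)    # shrink h until periapse resolves
        print(r, 0.5 * v @ v - 1 / np.linalg.norm(r)) # energy should stay near -0.5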

  9. On the relationship between parallel computation and graph embedding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, A.K.

    1989-01-01

    The problem of efficiently simulating an algorithm designed for an n-processor parallel machine G on an m-processor parallel machine H with n > m arises when parallel algorithms designed for an ideal size machine are simulated on existing machines which are of a fixed size. The author studies this problem when every processor of H takes over the function of a number of processors in G, and he phrases the simulation problem as a graph embedding problem. New embeddings presented address relevant issues arising from the parallel computation environment. The main focus centers around embedding complete binary trees into smaller-sized binary trees, butterflies, and hypercubes. He also considers simultaneous embeddings of r source machines into a single hypercube. Constant factors play a crucial role in his embeddings since they are not only important in practice but also lead to interesting theoretical problems. All of his embeddings minimize dilation and load, which are the conventional cost measures in graph embeddings and determine the maximum amount of time required to simulate one step of G on H. His embeddings also optimize a new cost measure called (α,β)-utilization which characterizes how evenly the processors of H are used by the processors of G. Ideally, the utilization should be balanced (i.e., every processor of H simulates at most (n/m) processors of G) and the (α,β)-utilization measures how far off from a balanced utilization the embedding is. He presents embeddings for the situation when some processors of G have different capabilities (e.g. memory or I/O) than others and the processors with different capabilities are to be distributed uniformly among the processors of H. Placing such conditions on an embedding results in an increase in some of the cost measures.

  10. Integrated production and distribution scheduling problems related to fixed delivery departure dates and weights of late orders.

    PubMed

    Li, Shanlin; Li, Maoqin

    2015-01-01

    We consider an integrated production and distribution scheduling problem faced by a typical make-to-order manufacturer which relies on a third-party logistics (3PL) provider for finished product delivery to customers. In the beginning of a planning horizon, the manufacturer has received a set of orders to be processed on a single production line. Completed orders are delivered to customers by a finite number of vehicles provided by the 3PL company which follows a fixed daily or weekly shipping schedule such that the vehicles have fixed departure dates which are not part of the decisions. The problem is to find a feasible schedule that minimizes one of the following objective functions when processing times and weights are oppositely ordered: (1) the total weight of late orders and (2) the number of vehicles used subject to the condition that the total weight of late orders is minimum. We show that both problems are solvable in polynomial time.

  11. Integrated Production and Distribution Scheduling Problems Related to Fixed Delivery Departure Dates and Weights of Late Orders

    PubMed Central

    Li, Shanlin; Li, Maoqin

    2015-01-01

    We consider an integrated production and distribution scheduling problem faced by a typical make-to-order manufacturer which relies on a third-party logistics (3PL) provider for finished product delivery to customers. In the beginning of a planning horizon, the manufacturer has received a set of orders to be processed on a single production line. Completed orders are delivered to customers by a finite number of vehicles provided by the 3PL company which follows a fixed daily or weekly shipping schedule such that the vehicles have fixed departure dates which are not part of the decisions. The problem is to find a feasible schedule that minimizes one of the following objective functions when processing times and weights are oppositely ordered: (1) the total weight of late orders and (2) the number of vehicles used subject to the condition that the total weight of late orders is minimum. We show that both problems are solvable in polynomial time. PMID:25785285

  12. Teaching an Old Dog an Old Trick: FREE-FIX and Free-Boundary Axisymmetric MHD Equilibrium

    NASA Astrophysics Data System (ADS)

    Guazzotto, Luca

    2015-11-01

    A common task in plasma physics research is the calculation of an axisymmetric equilibrium for tokamak modeling. The main unknown of the problem is the magnetic poloidal flux ψ. The easiest approach is to assign the shape of the plasma and only solve the equilibrium problem in the plasma / closed-field-lines region (the "fixed-boundary approach"). Often, one may also need the vacuum fields, i.e. the equilibrium in the open-field-lines region, requiring either coil currents or ψ on some closed curve outside the plasma to be assigned (the "free-boundary approach"). Going from one approach to the other is a textbook problem, involving the calculation of Green's functions and surface integrals in the plasma. However, no tools are readily available to perform this task. Here we present a code (FREE-FIX) to compute a boundary condition for a free-boundary equilibrium given only the corresponding fixed-boundary equilibrium. An improvement to the standard solution method, allowing for much faster calculations, is presented. Applications are discussed. PPPL fund 245139 and DOE grant G00009102.

  13. Enhanced removal of sulfonamide antibiotics by KOH-activated anthracite coal: Batch and fixed-bed studies.

    PubMed

    Zuo, Linzi; Ai, Jing; Fu, Heyun; Chen, Wei; Zheng, Shourong; Xu, Zhaoyi; Zhu, Dongqiang

    2016-04-01

    The presence of sulfonamide antibiotics in aquatic environments poses potential risks to human health and ecosystems. In the present study, a highly porous activated carbon was prepared by KOH activation of an anthracite coal (Anth-KOH), and its adsorption properties toward two sulfonamides (sulfamethoxazole and sulfapyridine) and three smaller-sized monoaromatics (phenol, 4-nitrophenol and 1,3-dinitrobenzene) were examined in both batch and fixed-bed adsorption experiments to probe the interplay between adsorbate molecular size and adsorbent pore structure. A commercial powder microporous activated carbon (PAC) and a commercial mesoporous carbon (CMK-3) possessing distinct pore properties were included as comparative adsorbents. Among the three adsorbents Anth-KOH exhibited the largest adsorption capacities for all test adsorbates (especially the two sulfonamides) in both batch mode and fixed-bed mode. After being normalized by the adsorbent surface area, the batch adsorption isotherms of sulfonamides on PAC and Anth-KOH were displaced upward relative to the isotherms on CMK-3, likely due to the micropore-filling effect facilitated by the microporosity of the adsorbents. In the fixed-bed mode, the surface area-normalized adsorption capacities of Anth-KOH for sulfonamides were close to that of CMK-3, and higher than that of PAC. The irregular, closed micropores of PAC might impede the diffusion of the relatively large-sized sulfonamide molecules and in turn lead to lowered fixed-bed adsorption capacities. The overall superior adsorption of sulfonamides on Anth-KOH can be attributed to its large specific surface area (2514 m²/g), high pore volume (1.23 cm³/g) and large micropore sizes (centered at 2.0 nm). These findings imply that KOH-activated anthracite coal is a promising adsorbent for the removal of sulfonamide antibiotics from aqueous solution. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Autonomous search and surveillance with small fixed wing aircraft

    NASA Astrophysics Data System (ADS)

    McGee, Timothy Garland

    Small unmanned aerial vehicles (UAVs) have the potential to act as low cost tools in a variety of both civilian and military applications including traffic monitoring, border patrol, and search and rescue. While most current operational UAV systems require human operators, advances in autonomy will allow these systems to reach their full potential as sensor platforms. This dissertation specifically focuses on developing advanced control, path planning, search, and image processing techniques that allow small fixed wing aircraft to autonomously collect data. The problems explored were motivated by experience with the development and experimental flight testing of a fleet of small autonomous fixed wing aircraft. These issues, which have not been fully addressed in past work done on ground vehicles or autonomous helicopters, include the influence of wind and turning rate constraints, the non-negligible velocity of ground targets relative to the aircraft velocity, and limitations on sensor size and processing power on small vehicles. Several contributions for the autonomous operation of small fixed wing aircraft are presented. Several sliding surface controllers are designed which extend previous techniques to include variable sliding surface coefficients and the use of spatial vehicle dynamics. These advances eliminate potential singularities in the control laws to follow spatially defined paths and allow smooth transition between controllers. The optimal solution for the problem of path planning through an ordered set of points for an aircraft with a bounded turning rate in the presence of a constant wind is then discussed. Path planning strategies are also explored to guarantee that a searcher will travel within sensing distance of a mobile ground target. This work assumes only a maximum velocity of the target and is designed to succeed for any possible path of the target. Closed-loop approximations of both the path planning and search techniques, using the sliding surface controllers already discussed, are also studied. Finally, a novel method is presented to detect obstacles by segmenting an image into sky and non-sky regions. The feasibility of this method is demonstrated experimentally on an aircraft test bed.

  15. Fusion solution for soldier wearable gunfire detection systems

    NASA Astrophysics Data System (ADS)

    Cakiades, George; Desai, Sachi; Deligeorges, Socrates; Buckland, Bruce E.; George, Jemin

    2012-06-01

    Existing acoustic-based Gunfire Detection Systems (GDS), such as soldier-wearable, vehicle-mounted, and fixed-site devices, provide enemy detection and localization capabilities to the user. However, a solution to the portability-versus-performance tradeoff remains elusive. The Data Fusion Module (DFM), described herein, is a sensor- and platform-agnostic supplemental software tool that addresses this tradeoff by leveraging existing soldier networks to enhance GDS performance across a Tactical Combat Unit (TCU). The DFM software enhances performance by synergistically leveraging all available acoustic GDS information across the TCU to calculate highly accurate solutions more consistently than any individual GDS in the TCU. The networked sensor architecture provides additional capabilities addressing the multiple-shooter and firefight problems in addition to sniper detection and localization. The addition of the fusion solution to the overall Size, Weight, Power and Cost (SWaP&C) is zero to negligible. At the end of the first-year effort, the DFM-integrated sensor network showed impressive performance, with improvements upwards of 50% in comparison to a single-sensor solution. Further improvements are expected when the networked sensor architecture created in this effort is fully exploited.
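
    The abstract does not disclose the DFM's fusion algorithm; as a baseline illustration of networked acoustic fusion, the sketch below triangulates a single shooter position by least squares over the bearing lines reported by several sensors. The sensor geometry and noise-free bearings are assumptions of the example.

      # Bearings-only least-squares triangulation across a sensor network.
      import numpy as np

      def triangulate(positions, bearings_rad):
          """positions: (N,2) sensor xy; bearings_rad: N bearings (math convention)."""
          s, c = np.sin(bearings_rad), np.cos(bearings_rad)
          # Each bearing line through (xi, yi): s_i*x - c_i*y = s_i*xi - c_i*yi
          A = np.column_stack([s, -c])
          b = s * positions[:, 0] - c * positions[:, 1]
          xy, *_ = np.linalg.lstsq(A, b, rcond=None)
          return xy

      sensors = np.array([[0.0, 0.0], [100.0, 0.0], [50.0, 80.0]])
      truth = np.array([60.0, 40.0])
      bearings = np.arctan2(truth[1] - sensors[:, 1], truth[0] - sensors[:, 0])
      print(triangulate(sensors, bearings))   # recovers ~ [60. 40.]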

  16. Primer-Free Aptamer Selection Using A Random DNA Library

    PubMed Central

    Pan, Weihua; Xin, Ping; Patrick, Susan; Dean, Stacey; Keating, Christine; Clawson, Gary

    2010-01-01

    Aptamers are highly structured oligonucleotides (DNA or RNA) that can bind to targets with affinities comparable to antibodies 1. They are identified through an in vitro selection process called Systematic Evolution of Ligands by EXponential enrichment (SELEX) to recognize a wide variety of targets, from small molecules to proteins and other macromolecules 2-4. Aptamers have properties that are well suited for in vivo diagnostic and/or therapeutic applications: besides good specificity and affinity, they are easily synthesized, survive more rigorous processing conditions, are poorly immunogenic, and their relatively small size can result in facile penetration of tissues. Aptamers identified through the standard SELEX process usually comprise ~80 nucleotides (nt), since they are typically selected from nucleic acid libraries with ~40-nt-long randomized regions plus fixed primer sites of ~20 nt on each side. The fixed primer sequences can thus comprise nearly 50% of the library sequences and may therefore positively or negatively compromise identification of aptamers in the selection process 3, although bioinformatics approaches suggest that the fixed sequences do not contribute significantly to aptamer structure after selection 5. To address these potential problems, primer sequences have been blocked by complementary oligonucleotides or switched to different sequences midway during the rounds of SELEX 6, or they have been trimmed to 6-9 nt 7, 8. Wen and Gray 9 designed a primer-free genomic SELEX method, in which the primer sequences were completely removed from the library before selection and were then regenerated to allow amplification of the selected genomic fragments. However, to employ the technique, a unique genomic library has to be constructed, which possesses limited diversity, and regeneration after rounds of selection relies on a linear reamplification step. Alternatively, efforts to circumvent problems caused by fixed primer sequences using high-efficiency partitioning are met with problems regarding PCR amplification 10. We have developed a primer-free (PF) selection method that significantly simplifies SELEX procedures and effectively eliminates primer-interference problems 11, 12. The protocols work in a straightforward manner. The central random region of the library is purified without extraneous flanking sequences and is bound to a suitable target (for example, to a purified protein or complex mixtures such as cell lines). The bound sequences are then recovered, reunited with flanking sequences, and re-amplified to generate selected sub-libraries. As an example, here we selected aptamers to S100B, a protein marker for melanoma. Binding assays showed Kd values in the 10⁻⁷-10⁻⁸ M range after a few rounds of selection, and we demonstrate that the aptamers function effectively in a sandwich binding format. PMID:20689511

  17. Testing models of parental investment strategy and offspring size in ants.

    PubMed

    Gilboa, Smadar; Nonacs, Peter

    2006-01-01

    Parental investment strategies can be fixed or flexible. A fixed strategy predicts making all offspring a single 'optimal' size. Dynamic models predict flexible strategies with more than one optimal size of offspring. Patterns in the distribution of offspring sizes may thus reveal the investment strategy. Static strategies should produce normal distributions. Dynamic strategies should often result in non-normal distributions. Furthermore, variance in morphological traits should be positively correlated with the length of developmental time the traits are exposed to environmental influences. Finally, the type of deviation from normality (i.e., skewed left or right, or platykurtic) should be correlated with the average offspring size. To test the latter prediction, we used simulations to detect significant departures from normality and categorize distribution types. Data from three species of ants strongly support the predicted patterns for dynamic parental investment. Offspring size distributions are often significantly non-normal. Traits fixed earlier in development, such as head width, are less variable than final body weight. The type of distribution observed correlates with mean female dry weight. The overall support for a dynamic parental investment model has implications for life history theory. Predicted conflicts over parental effort, sex investment ratios, and reproductive skew in cooperative breeders follow from assumptions of static parental investment strategies and omnipresent resource limitations. By contrast, with flexible investment strategies such conflicts can be either absent or maladaptive.
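
    The authors' simulation procedure for detecting departures from normality is not given in the abstract; a minimal stand-in using SciPy's omnibus normality test and moment statistics might look like this (the 0.5 skewness cutoff is an arbitrary heuristic of the sketch):

      # Classify an offspring-size sample as normal / skewed / platykurtic.
      import numpy as np
      from scipy import stats

      def classify_distribution(sizes, alpha=0.05):
          stat, p = stats.normaltest(sizes)      # D'Agostino-Pearson omnibus test
          if p >= alpha:
              return "normal"
          skew = stats.skew(sizes)
          kurt = stats.kurtosis(sizes)           # excess kurtosis
          if abs(skew) > 0.5:                    # heuristic cutoff for this sketch
              return "skewed right" if skew > 0 else "skewed left"
          return "platykurtic" if kurt < 0 else "leptokurtic"

      rng = np.random.default_rng(1)
      print(classify_distribution(rng.lognormal(0.0, 0.4, 500)))   # skewed right
      print(classify_distribution(rng.normal(1.0, 0.2, 500)))      # normal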

  18. Schedule-induced drinking as functions of interpellet interval and draught size in the Java macaque1

    PubMed Central

    Allen, Joseph D.; Kenshalo, Dan R.

    1978-01-01

    Three Java monkeys received food pellets that were assigned by both ascending and descending series of fixed-time schedules whose values varied between 8 and 256 seconds. The draught size dispensed by a concurrently available water-delivery tube was systematically varied between 1.0 and 0.3 milliliter per lick at various fixed-time values during the second and third series determinations. Session water intake was bitonically related to the interpellet interval and was determined by the interaction of (1) the probability of initiating a drinking bout, which fell off at the highest interpellet intervals and, (2) the size of the bout, which increased directly with increases in interpellet interval. Variations in draught size had little effect on total session intakes, but reduced bout size at draught sizes of 0.5 milliliter and below. Thus, a volume-regulation process of schedule-induced drinking operated generally at the session-intake level, but was limited to higher draught sizes at the bout level. PMID:16812093

  20. Low-complexity object detection with deep convolutional neural network for embedded systems

    NASA Astrophysics Data System (ADS)

    Tripathi, Subarna; Kang, Byeongkeun; Dane, Gokce; Nguyen, Truong

    2017-09-01

    We investigate low-complexity convolutional neural networks (CNNs) for object detection for embedded vision applications. It is well known that deploying CNN-based object detection on an embedded system is more challenging than problems like image classification because of its computation and memory requirements. To meet these requirements, we design and develop an end-to-end TensorFlow (TF)-based fully convolutional deep neural network for the generic object detection task, inspired by one of the fastest frameworks, YOLO. The proposed network predicts the localization of every object by regressing the coordinates of the corresponding bounding box, as in YOLO. Hence, the network is able to detect objects without any limitation on their size. However, unlike YOLO, all the layers in the proposed network are fully convolutional, so it can take input images of any size. We pick face detection as a use case, and evaluate the proposed model on the FDDB and Widerface datasets. As another use case of generic object detection, we evaluate its performance on the PASCAL VOC dataset. The experimental results demonstrate that the proposed network can predict object instances of different sizes and poses in a single frame. Moreover, the results show that the proposed method achieves accuracy comparable to state-of-the-art CNN-based object detection methods while reducing the model size by 3× and memory bandwidth by 3-4× compared with one of the best real-time CNN-based object detectors, YOLO. Our 8-bit fixed-point TF model provides an additional 4× memory reduction while keeping the accuracy nearly as good as the floating-point model, and achieves 20× faster inference than the floating-point model. Thus, the proposed method is promising for embedded implementations.
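
    The paper's exact quantization scheme is not specified in the abstract; the sketch below shows a generic symmetric 8-bit fixed-point weight quantization of the kind that yields the ~4× memory reduction mentioned: int8 weights plus a single floating-point scale per tensor.

      # Symmetric per-tensor int8 quantization (illustrative, not the paper's scheme).
      import numpy as np

      def quantize_int8(w):
          scale = np.max(np.abs(w)) / 127.0      # map [-max, max] onto [-127, 127]
          q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
          return q, scale

      def dequantize(q, scale):
          return q.astype(np.float32) * scale

      w = np.random.randn(64, 64).astype(np.float32)
      q, s = quantize_int8(w)
      print("max abs error:", np.max(np.abs(dequantize(q, s) - w)))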

  1. Transforce lingual appliances pre-adjusted invisible appliances simplify treatment.

    PubMed

    Clark, William John

    2011-01-01

    Transforce lingual appliances are designed to be used in conjunction with conventional fixed appliances. Lingual arch development is normally followed by bonded fixed appliances to detail the occlusion. Alternatively, Transforce appliance treatment is an efficient method of preparing complex malocclusions prior to a finishing stage with invisible appliances. This approach is ideal for adult treatment, using light continuous forces for arch development with appliances that are comfortable to wear. Sagittal and Transverse appliances are designed for arch development in a range of sizes for contracted arches. They can be used to treat all classes of malocclusion and are pre-adjusted fixed/removable devices for non-compliance treatment. Force modules with nickel-titanium coil springs enclosed in a tube deliver a gentle, biocompatible continuous force with a long range of action. They are excellent for the mixed dentition and ideal for adult arch development. There are multiple sizes for upper and lower arch development, and a sizing chart may be placed over a study model for correct selection, eliminating the need for laboratory work.

  2. Strong coupling strategy for fluid-structure interaction problems in supersonic regime via fixed point iteration

    NASA Astrophysics Data System (ADS)

    Storti, Mario A.; Nigro, Norberto M.; Paz, Rodrigo R.; Dalcín, Lisandro D.

    2009-03-01

    In this paper some results on the convergence of the Gauss-Seidel iteration when solving fluid/structure interaction problems with strong coupling via fixed point iteration are presented. The flow-induced vibration of a flat plate aligned with the flow direction at supersonic Mach number is studied. The precision of different predictor schemes and the influence of the partitioned strong coupling on stability are discussed.
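
    A toy version of the strongly coupled (Gauss-Seidel) fixed-point iteration under study: alternate fluid and structure solves on the shared interface, with under-relaxation, until the interface displacement converges. The scalar "solvers" below are stand-ins, not a supersonic flow/plate model.

      # Partitioned Gauss-Seidel fixed-point iteration with under-relaxation.
      def fluid_solve(d):            # interface load given interface displacement
          return 10.0 - 3.0 * d

      def structure_solve(p):        # interface displacement given interface load
          return p / 5.0

      def gauss_seidel_fsi(d0=0.0, omega=0.6, tol=1e-10, max_iter=100):
          d = d0
          for it in range(max_iter):
              d_new = structure_solve(fluid_solve(d))    # one staggered sweep
              if abs(d_new - d) < tol:
                  return d_new, it
              d = d + omega * (d_new - d)                # under-relaxation
          raise RuntimeError("fixed-point iteration did not converge")

      print(gauss_seidel_fsi())      # converges to 10/(5+3) = 1.25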

  3. Evaluation of the Effect of Non-Current Fixed Assets on Profitability and Asset Management Efficiency

    ERIC Educational Resources Information Center

    Lubyanaya, Alexandra V.; Izmailov, Airat M.; Nikulina, Ekaterina Y.; Shaposhnikov, Vladislav A.

    2016-01-01

    The purpose of this article is to investigate the problem, which stems from non-current fixed assets affecting profitability and asset management efficiency. Tangible assets, intangible assets and financial assets are all included in non-current fixed assets. The aim of the research is to identify the impact of estimates and valuation in…

  4. User's guide to four-body and three-body trajectory optimization programs

    NASA Technical Reports Server (NTRS)

    Pu, C. L.; Edelbaum, T. N.

    1974-01-01

    A collection of computer programs and subroutines written in FORTRAN to calculate 4-body (sun-earth-moon-space) and 3-body (earth-moon-space) optimal trajectories is presented. The programs incorporate a variable-step integration technique and a quadrature formula to correct single-step errors. The programs provide the capability to solve the initial value problem; the two-point boundary value problem of a transfer from a given initial position to a given final position in fixed time; the optimal 2-impulse transfer from an earth parking orbit of given inclination to a given final position and velocity in fixed time; and the optimal 3-impulse transfer from a given position to a given final position and velocity in fixed time.
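
    A minimal shooting-method sketch of the kind of fixed-time two-point boundary value problem these programs solve, on toy 1-D dynamics rather than the 3- or 4-body equations of motion: find the initial velocity that carries the state from x0 to a target position in exactly time T.

      # Shooting method: root-find on the terminal position error.
      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import brentq

      G = 9.81
      X0, X_TARGET, T = 0.0, 100.0, 5.0

      def miss(v0):
          sol = solve_ivp(lambda t, y: [y[1], -G], (0.0, T), [X0, v0], rtol=1e-10)
          return sol.y[0, -1] - X_TARGET        # terminal position error

      v0 = brentq(miss, 0.0, 100.0)             # bracket chosen by inspection
      print(v0, (X_TARGET - X0 + 0.5 * G * T**2) / T)   # matches analytic answer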

  5. Coordinate transformations and gauges in the relativistic astronomical reference systems

    NASA Astrophysics Data System (ADS)

    Tao, J.-H.; Huang, T.-Y.; Han, C.-H.

    2000-11-01

    This paper applies a fully post-Newtonian theory (Damour et al. 1991, 1992, 1993, 1994) to the problem of gauge in relativistic reference systems. Gauge fixing is necessary when the precision of time measurement and application reaches 10⁻¹⁶ or better. We give a general procedure for fixing the gauges of gravitational potentials in both the global and local coordinate systems, and for determining the gauge functions in all the coordinate transformations. We demonstrate that gauge fixing in a gravitational N-body problem can be solved by fixing the gauge of the self-gravitational potential of each body and the gauge function in the coordinate transformation between the global and local coordinate systems. We also show that these gauge functions can be chosen to make all the coordinate systems harmonic or any as required, no matter what gauge is chosen for the self-gravitational potential of each body.

  6. International Conference on Fixed Point Theory and Applications (Colloque International Theorie Du Point Fixe et Applications)

    DTIC Science & Technology

    1989-06-09

    Theorem and the Perron-Frobenius Theorem in matrix theory. We use the Hahn-Banach theorem and do not use any fixed-point related concepts. [...] Isac, G., Fixed point theorems on convex cones, generalized pseudo-contractive mappings and the complementarity problem. [...] In conditions (I) and (II), ∂f(x)° denotes the negative polar cone of ∂f(x). These conditions are respectively called "inward" and "outward". Indeed, when X is convex...

  7. Hear, Hear!

    ERIC Educational Resources Information Center

    Rittner-Heir, Robbin

    2000-01-01

    Examines the problem of acoustics in school classrooms; the problems it creates for student learning, particularly for students with hearing problems; and the impediments to achieving acceptable acoustical levels for school classrooms. Acoustic guidelines are explored and some remedies for fixing sound problems are highlighted. (GR)

  8. Probability of identity by descent in metapopulations.

    PubMed Central

    Kaj, I; Lascoux, M

    1999-01-01

    Equilibrium probabilities of identity by descent (IBD), for pairs of genes within individuals, for genes between individuals within subpopulations, and for genes between subpopulations are calculated in metapopulation models with fixed or varying colony sizes. A continuous-time analog to the Moran model was used in either case. For fixed-colony size both propagule and migrant pool models were considered. The varying population size model is based on a birth-death-immigration (BDI) process, to which migration between colonies is added. Wright's F statistics are calculated and compared to previous results. Adding between-island migration to the BDI model can have an important effect on the equilibrium probabilities of IBD and on Wright's index. PMID:10388835
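
    For reference, the usual way such IBD probabilities translate into Wright's index (notation assumed here, not taken from the article): with f_0 the probability of identity by descent for two genes within a subpopulation and f_1 for two genes drawn from different subpopulations,

      % Wright's fixation index in terms of IBD probabilities
      F_{ST} \;=\; \frac{f_0 - f_1}{1 - f_1}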

  9. An exploratory drilling exhaustion sequence plot program

    USGS Publications Warehouse

    Schuenemeyer, J.H.; Drew, L.J.

    1977-01-01

    The exhaustion sequence plot program computes the conditional area of influence for wells in a specified rectangular region with respect to a fixed-size deposit. The deposit is represented by an ellipse whose size is chosen by the user. The area of influence may be displayed on computer printer plots consisting of a maximum of 10,000 grid points. At each point, a symbol is presented that indicates the probability of that point being exhausted by nearby wells with respect to a fixed-size ellipse. This output gives a pictorial view of the manner in which oil fields are exhausted. In addition, the exhaustion data may be used to estimate the number of deposits remaining in a basin. © 1977.

  10. Accurate Natural Trail Detection Using a Combination of a Deep Neural Network and Dynamic Programming.

    PubMed

    Adhikari, Shyam Prasad; Yang, Changju; Slot, Krzysztof; Kim, Hyongsuk

    2018-01-10

    This paper presents a vision sensor-based solution to the challenging problem of detecting and following trails in highly unstructured natural environments like forests, rural areas and mountains, using a combination of a deep neural network and dynamic programming. The deep neural network (DNN) concept has recently emerged as a very effective tool for processing vision sensor signals. A patch-based DNN is trained with supervised data to classify fixed-size image patches into "trail" and "non-trail" categories, and reshaped to a fully convolutional architecture to produce a trail segmentation map for arbitrary-sized input images. As trail and non-trail patches do not exhibit clearly defined shapes or forms, the patch-based classifier is prone to misclassification and produces sub-optimal trail segmentation maps. Dynamic programming is introduced to find an optimal trail on the sub-optimal DNN output map. Experimental results showing accurate trail detection for real-world trail datasets captured with a head-mounted vision system are presented.
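
    The authors' exact cost function is not given in the abstract; the dynamic-programming step can nevertheless be illustrated with a standard seam-style recursion over the DNN's per-pixel trail-probability map: find the highest-scoring connected path from the bottom image row to the top, moving at most one column sideways per row.

      # DP over a probability map: accumulate row-by-row, then backtrack.
      import numpy as np

      def best_trail(prob):
          H, W = prob.shape
          score = prob.astype(float)              # cumulative scores, in place
          for r in range(1, H):
              left = np.roll(score[r - 1], 1);  left[0] = -np.inf
              right = np.roll(score[r - 1], -1); right[-1] = -np.inf
              score[r] += np.maximum(score[r - 1], np.maximum(left, right))
          path = [int(np.argmax(score[-1]))]
          for r in range(H - 2, -1, -1):          # backtrack the optimal column
              c = path[-1]
              lo, hi = max(0, c - 1), min(W, c + 2)
              path.append(lo + int(np.argmax(score[r, lo:hi])))
          return path[::-1]                       # column index per row, top to bottom

      rng = np.random.default_rng(0)
      p = rng.random((6, 5)) * 0.2
      p[:, 2] += 0.8                              # plant an obvious trail in column 2
      print(best_trail(p))                        # mostly column 2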

  11. Size-confined fixed-composition and composition-dependent engineered band gap alloying induces different internal structures in L-cysteine-capped alloyed quaternary CdZnTeS quantum dots

    NASA Astrophysics Data System (ADS)

    Adegoke, Oluwasesan; Park, Enoch Y.

    2016-06-01

    The development of alloyed quantum dot (QD) nanocrystals with attractive optical properties for a wide array of chemical and biological applications is a growing research field. In this work, size-tunable engineered band gap composition-dependent alloying and fixed-composition alloying were employed to fabricate new L-cysteine-capped alloyed quaternary CdZnTeS QDs exhibiting different internal structures. Lattice parameters simulated based on powder X-ray diffraction (PXRD) revealed the internal structure of the composition-dependent alloyed CdxZnyTeS QDs to have a gradient nature, whereas the fixed-composition alloyed QDs exhibited a homogenous internal structure. Transmission electron microscopy (TEM) and dynamic light scattering (DLS) analysis confirmed the size-confined nature and monodispersity of the alloyed nanocrystals. The zeta potential values were within the accepted range of colloidal stability. Circular dichroism (CD) analysis showed that the surface-capped L-cysteine ligand induced electronic and conformational chiroptical changes in the alloyed nanocrystals. The photoluminescence (PL) quantum yield (QY) values of the gradient alloyed QDs were 27-61%, whereas for the homogenous alloyed QDs, the PL QY values were spectacularly high (72-93%). Our work demonstrates that engineered fixed alloying produces homogenous QD nanocrystals with higher PL QY than composition-dependent alloying.

  12. Particle size and morphology of UHMWPE wear debris in failed total knee arthroplasties--a comparison between mobile bearing and fixed bearing knees.

    PubMed

    Huang, Chun-Hsiung; Ho, Fang-Yuan; Ma, Hon-Ming; Yang, Chan-Tsung; Liau, Jiann-Jong; Kao, Hung-Chan; Young, Tai-Horng; Cheng, Cheng-Kung

    2002-09-01

    Osteolysis induced by ultrahigh molecular weight polyethylene wear debris has been recognized as the major cause of long-term failure in total joint arthroplasties. In a previous study, the prevalence of intraoperatively identified osteolysis during primary revision surgery was much higher in mobile bearing knee replacements (47%) than in fixed bearing knee replacements (13%). We postulated that mobile bearing knee implants tend to produce smaller sized particles. In our current study, we compared the particle size and morphology of polyethylene wear debris between failed mobile bearing and fixed bearing knees. Tissue specimens from interfacial and lytic regions were extracted during revision surgery of 10 mobile bearing knees (all of the low contact stress (LCS) design) and 17 fixed bearing knees (10 of the porous-coated anatomic (PCA) and 7 of the Miller/Galante design). Polyethylene particles were isolated from the tissue specimens and examined using both scanning electron microscopy and light-scattering analyses. The LCS mobile bearing knees produced smaller particulate debris (mean equivalent spherical diameter: 0.58 microm in LCS, 1.17 microm in PCA and 5.23 microm in M/G) and more granular debris (mean value: 93% in LCS, 77% in PCA and 15% in M/G).

  13. Anaerobic treatment of winery wastewater in fixed bed reactors.

    PubMed

    Ganesh, Rangaraj; Rajinikanth, Rajagopal; Thanikal, Joseph V; Ramanujam, Ramamoorty Alwar; Torrijos, Michel

    2010-06-01

    The treatment of winery wastewater in three upflow anaerobic fixed-bed reactors (S9, S30 and S40) with low-density floating supports of varying size and specific surface area was investigated. A maximum OLR of 42 g/l day with 80 ± 0.5% removal efficiency was attained in S9, which had the supports with the highest specific surface area. It was found that the efficiency of the reactors increased with decreasing size and increasing specific surface area of the support media. Total biomass accumulation in the reactors was also found to vary as a function of specific surface area and size of the support medium. The Stover-Kincannon kinetic model satisfactorily predicted the performance of the reactors. The maximum removal rate constant (U(max)) was 161.3, 99.0 and 77.5 g/l day and the saturation value constant (K(B)) was 162.0, 99.5 and 78.0 g/l day for S9, S30 and S40, respectively. Due to their higher biomass retention potential, the supports used in this study offer great promise as media in anaerobic fixed-bed reactors. Anaerobic fixed-bed reactors with these supports can be applied as high-rate systems for the treatment of large volumes of wastewaters typically containing readily biodegradable organics, such as winery wastewater.
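
    The (modified) Stover-Kincannon model the authors fit is usually written as below — symbols assumed here: Q flow rate, V reactor volume, S_0 and S_e influent and effluent substrate concentrations — with U_max and K_B read off a double-reciprocal plot of the data:

      % modified Stover-Kincannon model for fixed-bed reactors
      \frac{Q\,(S_0 - S_e)}{V} \;=\; \frac{U_{\max}\,(Q S_0 / V)}{K_B + Q S_0 / V}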

  14. A fully Sinc-Galerkin method for Euler-Bernoulli beam models

    NASA Technical Reports Server (NTRS)

    Smith, R. C.; Bowers, K. L.; Lund, J.

    1990-01-01

    A fully Sinc-Galerkin method in both space and time is presented for fourth-order time-dependent partial differential equations with fixed and cantilever boundary conditions. The Sinc discretizations for the second-order temporal problem and the fourth-order spatial problems are presented. Alternate formulations for variable parameter fourth-order problems are given which prove to be especially useful when applying the forward techniques to parameter recovery problems. The discrete system which corresponds to the time-dependent partial differential equations of interest are then formulated. Computational issues are discussed and a robust and efficient algorithm for solving the resulting matrix system is outlined. Numerical results which highlight the method are given for problems with both analytic and singular solutions as well as fixed and cantilever boundary conditions.
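
    For concreteness, the fourth-order model class in question is the Euler-Bernoulli beam equation; in a normalized form (the normalization is an assumption here, and the paper's variable-parameter formulations generalize the coefficients), with the two boundary-condition sets named in the abstract on 0 < x < 1:

      u_{tt} + u_{xxxx} = f(x,t)
      % fixed (clamped-clamped):  u = u_x = 0 at x = 0 and x = 1
      % cantilever (clamped-free): u = u_x = 0 at x = 0,  u_{xx} = u_{xxx} = 0 at x = 1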

  15. Computational alternatives to obtain time optimal jet engine control. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Basso, R. J.; Leake, R. J.

    1976-01-01

    Two computational methods are described for determining an open-loop time-optimal control sequence for a simple single-spool turbojet engine modeled by a set of nonlinear differential equations. Both methods are modifications of widely accepted algorithms that solve fixed-time unconstrained optimal control problems with a free right end. The constrained problems considered here have fixed right ends and free time. Dynamic programming is defined on a standard problem and yields a successive-approximation solution to the time-optimal problem of interest. A feedback control law is obtained, and it is then used to determine the corresponding open-loop control sequence. The Fletcher-Reeves conjugate gradient method has been selected for adaptation to solve a nonlinear optimal control problem with state-variable and control constraints.

  16. Laser diffraction particle sizing in STRESS

    NASA Astrophysics Data System (ADS)

    Agrawal, Y. C.; Pottsmith, H. C.

    1994-08-01

    An autonomous instrument system for measuring particle size spectra in the sea is described. The instrument records the small-angle scattering characteristics of the particulate ensemble present in water, and the small-angle scattering distribution is inverted into size spectra. The discussion of the instrument is accompanied by a review of the information content of the data. It is noted that the inverse problem is sensitive to the forward model for light scattering employed in the construction of the matrix. The instrument system is validated using monodisperse polystyrene and NIST standard distributions of glass spheres. Data from a long-term deployment on the California shelf during the field experiment Sediment Transport Events on Shelves and Slopes (STRESS) are included. The size distribution in STRESS, measured at a fixed height above bed of 1.2 m, showed significant variability over time. In particular, the volume distribution sometimes changed from mono-modal to bi-modal during the experiment. The data on particle-size distribution are combined with friction velocity measurements in the current boundary layer to produce a size-dependent estimate of the suspended mass at 10 cm above bottom. It is argued that these concentrations represent the reference concentration at the bed for the smaller size classes. The suspended mass at all sizes shows a strong correlation with wave variance. Using the size distribution, corrections in the optical transmissometry calibration factor are estimated for the duration of the experiment. The change in calibration at 1.2 m above bed (mab) is shown to have a standard error of 30% over the duration of the experiment, with a range of 1.8 to 0.8.

  17. Stresses in Implant-Supported Fixed Complete Dentures with Different Screw-Tightening Sequences and Torque Application Modes.

    PubMed

    Barcellos, Leonardo H; Palmeiro, Marina Lobato; Naconecy, Marcos M; Geremia, Tomás; Cervieri, André; Shinkai, Rosemary S

    2018-05-17

    To compare the effects of different screw-tightening sequences and torque applications on stresses in implant-supported fixed complete dentures supported by five abutments. Strain gauges fixed to the abutments were used to test the sequences 2-4-3-1-5; 1-2-3-4-5; 3-2-4-1-5; and 2-5-4-1-3 with direct 10-Ncm torque or progressive torque (5 + 10 Ncm). Data were analyzed using analysis of variance and standardized effect size. No effects of tightening sequence or torque application were found except for the sequence 3-2-4-1-5 and some small to moderate effect sizes. Screw-tightening sequences and torque application modes have only a marginal effect on residual stresses.

  18. Buyer-vendor coordination for fixed lifetime product with quantity discount under finite production rate

    NASA Astrophysics Data System (ADS)

    Zhang, Qinghong; Luo, Jianwen; Duan, Yongrui

    2016-03-01

    Buyer-vendor coordination has been widely addressed; however, the fixed lifetime of the product is seldom considered. In this paper, we study the coordination of an integrated production-inventory system with quantity discount for a fixed lifetime product under finite production rate and deterministic demand. We first derive the buyer's ordering policy and the vendor's production batch size in decentralised and centralised systems. We then compare the two systems and show the non-coordination of the ordering policies and the production batch sizes. To improve the supply chain efficiency, we propose quantity discount contract and prove that the contract can coordinate the buyer-vendor supply chain. Finally, we present analytically tractable solutions and give a numerical example to illustrate the benefits of the proposed quantity discount strategy.

  19. Fixing health care before it fixes us.

    PubMed

    Kotlikoff, Laurence J

    2009-02-01

    The current American health care system is beyond repair. The problems of the health care system are delineated in this discussion. The current health care system needs to be replaced in its entirety with a new system that provides every American with first-rate, first-tier medicine and that doesn't drive our nation broke. The author describes a 10-point Medical Security System, which he proposes will address the problems of the current health care system.

  20. An improved least cost routing approach for WDM optical network without wavelength converters

    NASA Astrophysics Data System (ADS)

    Bonani, Luiz H.; Forghani-elahabad, Majid

    2016-12-01

    Routing and wavelength assignment (RWA) has been an attractive problem in optical networks, and consequently several algorithms have been proposed in the literature to solve it. The best-known techniques for the dynamic routing subproblem are fixed routing, fixed-alternate routing, and adaptive routing. The first leads to a high blocking probability (BP), and the last involves high computational complexity and requires extensive support from the control and management protocols. The second offers a trade-off between performance and complexity, and hence we consider it for improvement in our work. Specifically, considering the RWA problem in a wavelength-routed optical network with no wavelength converters, we propose an improved technique for the routing subproblem in order to decrease the BP of the network. Following the fixed-alternate approach, the first k shortest paths (SPs) between each node pair are determined. We then rearrange the SPs according to a newly defined cost for the links and paths. Upon arrival of a connection request, the sorted paths are checked consecutively for an available wavelength according to the most-used technique. We implement our proposed algorithm and the least-hop fixed-alternate algorithm to show how the rearrangement of SPs contributes to a lower BP in the network. The numerical results demonstrate the efficiency of our proposed algorithm in comparison with the others for different numbers of available wavelengths.
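
    The paper's reordering cost is not reproduced here, but fixed-alternate routing with most-used wavelength assignment can be sketched as follows: try a node pair's k precomputed shortest paths in order, and take the first path with a wavelength free on every link. The graph and parameters in the example are illustrative.

      # Fixed-alternate routing + most-used wavelength assignment (sketch).
      from itertools import islice
      import networkx as nx

      def k_shortest_paths(G, src, dst, k):
          return list(islice(nx.shortest_simple_paths(G, src, dst), k))

      def try_connect(G, paths, usage, W):
          for path in paths:                                # fixed-alternate order
              links = list(zip(path, path[1:]))
              free = [w for w in range(W)
                      if all(w not in G.edges[u, v]["busy"] for u, v in links)]
              if free:
                  w = max(free, key=lambda w: usage[w])     # most-used first
                  for u, v in links:
                      G.edges[u, v]["busy"].add(w)
                  usage[w] += 1
                  return path, w
          return None                                       # request blocked

      G = nx.cycle_graph(6)
      nx.set_edge_attributes(G, {e: set() for e in G.edges}, "busy")
      usage = [0] * 4                                       # W = 4 wavelengths
      print(try_connect(G, k_shortest_paths(G, 0, 3, k=2), usage, W=4))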

  1. Microcanonical entropy for classical systems

    NASA Astrophysics Data System (ADS)

    Franzosi, Roberto

    2018-03-01

    The entropy definition in the microcanonical ensemble is revisited. We propose a novel definition of the microcanonical entropy that resolves the debate on the correct definition of the microcanonical entropy. In particular, we show that this entropy definition fixes the problem inherent in the exact extensivity of the caloric equation. Furthermore, this entropy reproduces results in agreement with those predicted with the standard Boltzmann entropy when applied to macroscopic systems. By contrast, the predictions obtained with the standard Boltzmann entropy and with the entropy we propose differ for small system sizes. Thus, we conclude that the Boltzmann entropy provides a correct description for macroscopic systems, whereas extremely small systems are better described by the entropy proposed here.

  2. High-performance multiprocessor architecture for a 3-D lattice gas model

    NASA Technical Reports Server (NTRS)

    Lee, F.; Flynn, M.; Morf, M.

    1991-01-01

    The lattice gas method has recently emerged as a promising discrete particle simulation method in areas such as fluid dynamics. We present a very high-performance scalable multiprocessor architecture, called ALGE, proposed for the simulation of a realistic 3-D lattice gas model, Henon's 24-bit FCHC isometric model. Each of these VLSI processors is as powerful as a CRAY-2 for this application. ALGE is scalable in the sense that it achieves linear speedup for both fixed and increasing problem sizes with more processors. The core computation of a lattice gas model consists of many repetitions of two alternating phases: particle collision and propagation. Functional decomposition by symmetry group and virtual move are the respective keys to efficient implementation of collision and propagation.

  3. On the Treatment of Fixed and Sunk Costs in the Principles Textbooks

    ERIC Educational Resources Information Center

    Colander, David

    2004-01-01

    The author argues that, although the standard principles level treatment of fixed and sunk costs has problems, it is logically consistent as long as all fixed costs are assumed to be sunk costs. As long as the instructor makes that assumption clear to students, the costs of making the changes recently suggested by X. Henry Wang and Bill Z. Yang in…

  4. A prospective randomised comparative parallel study of amniotic membrane wound graft in the management of diabetic foot ulcers.

    PubMed

    Zelen, Charles M; Serena, Thomas E; Denoziere, Guilhem; Fetterolf, Donald E

    2013-10-01

    Our purpose was to compare healing characteristics of diabetic foot ulcers treated with dehydrated human amniotic membrane allografts (EpiFix®, MiMedx, Kennesaw, GA) versus standard of care. An IRB-approved, prospective, randomised, single-centre clinical trial was performed. Included were patients with a diabetic foot ulcer of at least 4-week duration without infection having adequate arterial perfusion. Patients were randomised to receive standard care alone or standard care with the addition of EpiFix. Wound size reduction and rates of complete healing after 4 and 6 weeks were evaluated. In the standard care group (n = 12) and the EpiFix group (n = 13) wounds reduced in size by a mean of 32.0% ± 47.3% versus 97.1% ± 7.0% (P < 0.001) after 4 weeks, whereas at 6 weeks wounds were reduced by -1.8% ± 70.3% versus 98.4% ± 5.8% (P < 0.001), standard care versus EpiFix, respectively. After 4 and 6 weeks of treatment the overall healing rate with application of EpiFix was shown to be 77% and 92%, respectively, whereas standard care healed 0% and 8% of the wounds (P < 0.001), respectively. Patients treated with EpiFix achieved superior healing rates over standard treatment alone. These results show that using EpiFix in addition to standard care is efficacious for wound healing. ©2013 The Authors. International Wound Journal published by John Wiley & Sons Ltd and Medicalhelplines.com Inc.

  5. Influence maximization in complex networks through optimal percolation

    NASA Astrophysics Data System (ADS)

    Morone, Flaviano; Makse, Hernan; CUNY Collaboration

    The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. Reference: F. Morone, H. A. Makse, Nature 524, 65-68 (2015)
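
    The optimal-percolation problem described here is commonly approximated with the authors' Collective Influence heuristic, CI_l(i) = (k_i - 1) Σ_{j∈∂B(i,l)} (k_j - 1), where the sum runs over the frontier of the ball of radius l around node i; the top-CI node is removed and scores are recomputed. A minimal sketch (graph and parameters illustrative):

      # Adaptive Collective Influence (CI) removal on a random graph.
      import networkx as nx

      def collective_influence(G, node, ell=2):
          frontier = nx.descendants_at_distance(G, node, ell)
          return (G.degree(node) - 1) * sum(G.degree(j) - 1 for j in frontier)

      def top_influencers(G, n_remove, ell=2):
          H, removed = G.copy(), []
          for _ in range(n_remove):
              best = max(H.nodes, key=lambda v: collective_influence(H, v, ell))
              removed.append(best)
              H.remove_node(best)                # recompute scores adaptively
          return removed

      G = nx.barabasi_albert_graph(300, 3, seed=42)
      print(top_influencers(G, n_remove=5))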

  6. On the Fair Division of Multiple Stochastic Pies to Multiple Agents within the Nash Bargaining Solution

    PubMed Central

    Karmperis, Athanasios C.; Aravossis, Konstantinos; Tatsiopoulos, Ilias P.; Sotirchos, Anastasios

    2012-01-01

    The fair division of a surplus is one of the most widely examined problems. This paper focuses on bargaining problems with fixed disagreement payoffs where risk-neutral agents have reached an agreement that is the Nash-bargaining solution (NBS). We consider a stochastic environment, in which the overall return consists of multiple pies with uncertain sizes and we examine how these pies can be allocated with fairness among agents. Specifically, fairness is based on the Aristotle’s maxim: “equals should be treated equally and unequals unequally, in proportion to the relevant inequality”. In this context, fairness is achieved when all the individual stochastic surplus shares which are allocated to agents are distributed in proportion to the NBS. We introduce a novel algorithm, which can be used to compute the ratio of each pie that should be allocated to each agent, in order to ensure fairness within a symmetric or asymmetric NBS. PMID:23024752
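
    The paper's algorithm itself is not reproduced in the abstract; the fairness target, however, is easy to illustrate: give every agent the same fixed ratio of each stochastic pie, with ratios proportional to the agents' NBS payoffs, so that every realized surplus is split in proportion to the NBS regardless of how the pie sizes turn out. Numbers below are illustrative.

      # NBS-proportional split of multiple realized pies (sketch).
      def nbs_proportional_ratios(nbs_shares):
          total = sum(nbs_shares)
          return [s / total for s in nbs_shares]

      def allocate(pie_sizes, ratios):
          return [[r * pie for pie in pie_sizes] for r in ratios]

      nbs = [30.0, 20.0, 50.0]              # agreed NBS payoffs for three agents
      ratios = nbs_proportional_ratios(nbs)
      print(allocate([12.0, 8.0], ratios))  # per-agent slices of two realized pies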

  7. Sensitivity analysis of heliostat aiming strategies and receiver size on annual thermal production of a molten salt external receiver

    NASA Astrophysics Data System (ADS)

    Servert, Jorge; González, Ana; Gil, Javier; López, Diego; Funes, Jose Felix; Jurado, Alfonso

    2017-06-01

    Even though receiver size and aiming strategy must be jointly analyzed to optimize the thermal energy that can be extracted from a solar tower receiver, they have customarily been studied as separate problems. The main reason is the high level of detail required to define aiming strategies, which are often simplified in annual simulation models. Aiming strategies are usually focused on obtaining a homogeneous heat flux on the central receiver, with the goal of minimizing the maximum heat flux value, which may otherwise damage the receiver. Some recent studies have addressed the effect of different aiming strategies on different receiver types, but they have focused only on the optical efficiency. The receiver size is an additional parameter that has to be considered: larger receivers provide a larger aiming surface and a reduction in spillage losses, but require higher investment and penalize the thermal performance of the receiver due to greater external convection losses. The present paper presents a sensitivity analysis of both factors for a predefined solar field at a fixed location, using a central receiver and molten salts as HTF. The analysis includes design-point values and annual energy outputs, comparing the effect on optical performance (measured using a spillage factor) and thermal energy production.

  8. A comprehensive approach to reactive power scheduling in restructured power systems

    NASA Astrophysics Data System (ADS)

    Shukla, Meera

    Financial constraints, regulatory pressure, and the need for more economical power transfers have increased the loading of interconnected transmission systems. As a consequence, power systems have been operated close to their maximum power transfer capability limits, making the system more vulnerable to voltage instability events. The problem of voltage collapse, characterized by a severe local voltage depression, is generally believed to be associated with inadequate VAr support at key buses. The goal of reactive power planning is to maintain a high level of voltage security through the installation of properly sized and located reactive sources and their optimal scheduling. In vertically operated power systems, the reactive requirement of the system is normally satisfied by using all of its reactive sources. But under different scenarios of restructured power systems, one may consider a fixed amount of reactive power exchange through tie lines. Reviewed literature suggests a need for optimal scheduling of reactive power generation for fixed inter-area reactive power exchange. The present work proposes a novel approach for reactive power source placement and a novel approach for its scheduling. The VAr source placement technique was based on the property of system connectivity. This is followed by the development of an optimal reactive power dispatch formulation that facilitates fixed inter-area tie-line reactive power exchange. This formulation used a Line Flow-Based (LFB) model of power flow analysis and determined the generation schedule for fixed inter-area tie-line reactive power exchange. Different operating scenarios were studied to analyze the impact of the VAr management approach for vertically operated and restructured power systems. The system loadability, losses, generation, and cost of generation were the performance measures used to study the impact of the VAr management strategy. The novel approach was demonstrated on the IEEE 30-bus system.

  9. Inverse size scaling of the nucleolus by a concentration-dependent phase transition.

    PubMed

    Weber, Stephanie C; Brangwynne, Clifford P

    2015-03-02

    Just as organ size typically increases with body size, the size of intracellular structures changes as cells grow and divide. Indeed, many organelles, such as the nucleus [1, 2], mitochondria [3], mitotic spindle [4, 5], and centrosome [6], exhibit size scaling, a phenomenon in which organelle size depends linearly on cell size. However, the mechanisms of organelle size scaling remain unclear. Here, we show that the size of the nucleolus, a membraneless organelle important for cell-size homeostasis [7], is coupled to cell size by an intracellular phase transition. We find that nucleolar size directly scales with cell size in early C. elegans embryos. Surprisingly, however, when embryo size is altered, we observe inverse scaling: nucleolar size increases in small cells and decreases in large cells. We demonstrate that this seemingly contradictory result arises from maternal loading of a fixed number rather than a fixed concentration of nucleolar components, which condense into nucleoli only above a threshold concentration. Our results suggest that the physics of phase transitions can dictate whether an organelle assembles, and, if so, its size, providing a mechanistic link between organelle assembly and cell size. Since the nucleolus is known to play a key role in cell growth, this biophysical readout of cell size could provide a novel feedback mechanism for growth control. Copyright © 2015 Elsevier Ltd. All rights reserved.
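
    A hedged reading of the mechanism in equation form (symbols assumed, not the authors' notation): with a fixed maternal load N of nucleolar components in a cell of volume V and a saturation concentration c_sat above which components condense, only the excess ends up in the nucleolus,

      % condensed amount under a fixed component number and threshold concentration
      N_{\mathrm{nucleolus}} \;=\; N - c_{\mathrm{sat}}\,V, \qquad V < N / c_{\mathrm{sat}}

    so smaller cells (smaller V) leave a larger excess and hence a larger nucleolus, which is the inverse scaling observed.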

  10. Adolescent mental health and earnings inequalities in adulthood: evidence from the Young-HUNT Study.

    PubMed

    Evensen, Miriam; Lyngstad, Torkild Hovde; Melkevik, Ole; Reneflot, Anne; Mykletun, Arnstein

    2017-02-01

    Previous studies have shown that adolescent mental health problems are associated with lower employment probabilities and risk of unemployment. The evidence on how earnings are affected is much weaker, and few have addressed whether any association reflects unobserved characteristics and whether the consequences of mental health problems vary across the earnings distribution. A population-based Norwegian health survey linked to administrative registry data (N=7885) was used to estimate how adolescents' mental health problems (separate indicators of internalising, conduct, and attention problems and total sum scores) affect earnings (≥30 years) in young adulthood. We used linear regression with fixed-effects models comparing either students within schools or siblings within families. Unconditional quantile regressions were used to explore differentials across the earnings distribution. Mental health problems in adolescence reduce average earnings in adulthood, and associations are robust to control for observed family background and school fixed effects. For some, but not all mental health problems, associations are also robust in sibling fixed-effects models, where all stable family factors are controlled. Further, we found much larger earnings loss below the 25th centile. Adolescent mental health problems reduce adult earnings, especially among individuals in the lower tail of the earnings distribution. Preventing mental health problems in adolescence may increase future earnings. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
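
    A minimal within-transformation sketch of the sibling fixed-effects comparison (variable names and simulated data are illustrative, not the Young-HUNT data): demean outcome and exposure within each family and regress on the demeaned data, so that all stable family-level confounders drop out.

      # Fixed-effects (within) estimator via group demeaning + OLS.
      import numpy as np

      def within_ols(y, x, group):
          y, x, group = map(np.asarray, (y, x, group))
          yd, xd = y.astype(float).copy(), x.astype(float).copy()
          for g in np.unique(group):
              m = group == g
              yd[m] -= yd[m].mean()             # absorb the group fixed effect
              xd[m] -= xd[m].mean()
          return (xd @ yd) / (xd @ xd)          # slope on the demeaned data

      rng = np.random.default_rng(0)
      fam = np.repeat(np.arange(200), 2)        # 200 sibling pairs
      fam_effect = np.repeat(rng.normal(0, 2, 200), 2)
      mh = rng.normal(0, 1, 400) + fam_effect   # mental-health score, confounded
      earnings = -0.5 * mh + fam_effect + rng.normal(0, 1, 400)
      print(within_ols(earnings, mh, fam))      # ~ -0.5 despite family confounding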

  11. Permeation of Therapeutic Drugs in Different Formulations across the Airway Epithelium In Vitro

    PubMed Central

    Meindl, Claudia; Stranzinger, Sandra; Dzidic, Neira; Salar-Behzadi, Sharareh; Mohr, Stefan; Zimmer, Andreas; Fröhlich, Eleonore

    2015-01-01

    Background Pulmonary drug delivery is characterized by short onset times of the effects and an increased therapeutic ratio compared to oral drug delivery. This delivery route can be used for local as well as for systemic absorption applying drugs as single substance or as a fixed dose combination. Drugs can be delivered as nebulized aerosols or as dry powders. A screening system able to mimic delivery by the different devices might help to assess the drug effect in the different formulations and to identify potential interference between drugs in fixed dose combinations. The present study evaluates manual devices used in animal studies for their suitability for cellular studies. Methods Calu-3 cells were cultured submersed and in air-liquid interface culture and characterized regarding mucus production and transepithelial electrical resistance. The influence of pore size and material of the transwell membranes and of the duration of air-liquid interface culture was assessed. Compounds were applied in solution and as aerosols generated by MicroSprayer IA-1C Aerosolizer or by DP-4 Dry Powder Insufflator using fluorescein and rhodamine 123 as model compounds. Budesonide and formoterol, singly and in combination, served as examples for drugs relevant in pulmonary delivery. Results and Conclusions Membrane material and duration of air-liquid interface culture had no marked effect on mucus production and tightness of the cell monolayer. Co-application of budesonide and formoterol, applied in solution or as aerosol, increased permeation of formoterol across cells in air-liquid interface culture. Problems with the DP-4 Dry Powder Insufflator included compound-specific delivery rates and influence on the tightness of the cell monolayer. These problems were not encountered with the MicroSprayer IA-1C Aerosolizer. The combination of Calu-3 cells and manual aerosol generation devices appears suitable to identify interactions of drugs in fixed drug combination products on permeation. PMID:26274590

  12. PMMA Third-Body Wear after Unicondylar Knee Arthroplasty Decuples the UHMWPE Wear Particle Generation In Vitro

    PubMed Central

    Paulus, Alexander Christoph; Franke, Manja; Kraxenberger, Michael; Schröder, Christian; Jansson, Volkmar

    2015-01-01

    Introduction. Overlooked polymethylmethacrylate (PMMA) after unicondylar knee arthroplasty can be a potential problem, since it might influence the size and morphology of the generated wear particles. The aim of this study was the analysis of polyethylene wear in a knee wear simulator for changes in size, morphology, and particle number after the addition of third bodies. Material and Methods. Fixed-bearing unicondylar knee prostheses (UKA) were tested in a knee simulator for 5.0 million cycles. Bone particles were then added for 1.5 million cycles, followed by 1.5 million cycles with PMMA particles. A particle analysis by scanning electron microscopy of the lubricant after the cycles was performed. Size and morphology of the generated wear were characterized. Further, the number of particles per 1 million cycles was calculated for each group. Results. The particles of all groups were similar in size and shape. The number of particles in the PMMA group showed 10-fold higher values than in the bone and control groups (PMMA: 10.251 × 10¹²; bone: 1.145 × 10¹²; control: 1.804 × 10¹²). Conclusion. The addition of bone or PMMA particles in terms of third-body wear results in no change of particle size and morphology. PMMA third bodies generated tenfold elevated particle numbers. This could favor early aseptic loosening. PMID:25866795

  13. Are There Differences in Gait Mechanics in Patients With A Fixed Versus Mobile Bearing Total Ankle Arthroplasty? A Randomized Trial.

    PubMed

    Queen, Robin M; Franck, Christopher T; Schmitt, Daniel; Adams, Samuel B

    2017-10-01

    Total ankle arthroplasty (TAA) is an alternative to arthrodesis, but no randomized trial has examined whether a fixed-bearing or mobile-bearing implant provides improved gait mechanics. We wished to determine whether fixed- or mobile-bearing TAA results in a larger improvement in pain scores and gait mechanics from before surgery to 1 year after surgery, and to quantify differences in outcomes using statistical analysis and report the standardized effect sizes for such comparisons. Patients with end-stage ankle arthritis who were scheduled for TAA between November 2011 and June 2013 (n = 40; 16 men, 24 women; average age, 63 years; age range, 35-81 years) were prospectively recruited for this study from a single foot and ankle orthopaedic clinic. During this period, 185 patients underwent TAA, with 144 being eligible to participate in this study. Patients were eligible if they met all study inclusion criteria: no previous diagnosis of rheumatoid arthritis, contralateral TAA, bilateral ankle arthritis, previous revision TAA, or ankle fusion revision; able to walk without the use of an assistive device; weight less than 250 pounds (114 kg); sagittal or coronal plane deformity less than 15°; no avascular necrosis of the distal tibia; no current neuropathy; age older than 35 years; no history of a talar neck fracture; and no avascular talus. Of the 144 eligible patients, 40 consented to participate in our randomized trial. These 40 patients were randomly assigned to either the fixed-bearing (n = 20) or mobile-bearing implant group (n = 20). Walking speed, bilateral peak dorsiflexion angle, peak plantar flexion angle, sagittal plane ankle ROM, peak ankle inversion angle, peak plantar flexion moment, peak plantar flexion power during stance, and peak weight-acceptance and propulsive vertical ground reaction forces were analyzed during seven self-selected-speed level walking trials for 33 participants using an eight-camera motion analysis system and four force plates. Seven patients were not included in the analysis owing to cancelled surgery (one from each group) and five were lost to followup (four with fixed-bearing and one with mobile-bearing implants). A series of effect-size calculations and two-sample t-tests comparing postoperative-to-preoperative changes in outcome variables between implant types were used to determine the differences in the magnitude of improvement between the two patient cohorts from before surgery to 1 year after surgery. The sample size in this study enabled us to detect a standardized shift of 1.01 SDs between group means with 80% power and a type I error rate of 5% for all outcome variables in the study. This randomized trial did not reveal any differences in outcomes between the two implant types under study at the sample size collected. In addition, effect-size analysis suggests that changes in outcome differ between implant types by less than 1 SD. Detection of the largest change score or observed effect (propulsive vertical ground reaction force [fixed: 0.1 ± 0.1; 0.0-1.0; mobile: 0.0 ± 0.1; 0.0-0.0; p = 0.051]) in this study would require a future trial to enroll 66 patients. However, the smallest change score or observed effect (walking speed [fixed: 0.2 ± 0.3; 0.1-0.4; mobile: 0.2 ± 0.3; 0.0-0.3; p = 0.742]) requires a sample size of 2336 to detect a significant difference with 80% power at the observed effect sizes.
To our knowledge, this is the first randomized study to report the observed effect size comparing improvements in outcome measures between fixed and mobile bearing implant types. This study was statistically powered to detect large effects and descriptively analyze observed effect sizes. Based on our results there were no statistically or clinically meaningful differences between the fixed and mobile bearing implants when examining gait mechanics and pain 1 year after TAA. Level II, therapeutic study.

  14. Peninsula transportation district commission route deviation feasibility study.

    DOT National Transportation Integrated Search

    1998-11-01

    Many urban transit providers are faced with the problem of declining ridership on traditional fixed route services in low density suburban areas. As a result, many fixed route services in such areas are not economically viable for the transit provide...

  15. Fixing the fixed-point system—Applying Dynamic Renormalization Group to systems with long-range interactions

    NASA Astrophysics Data System (ADS)

    Katzav, Eytan

    2013-04-01

    In this paper, a method of using the Dynamic Renormalization Group (DRG) approach is suggested in order to cope with inconsistent results obtained when applying it to a continuous family of one-dimensional nonlocal models. The key observation is that the correct fixed-point dynamical system has to be identified during the analysis in order to account for all the relevant terms that are generated under renormalization. This is well established for static problems, but poorly implemented in dynamical ones. An application of this approach to a nonlocal extension of the Kardar-Parisi-Zhang equation resolves certain problems in one dimension; namely, obviously problematic predictions are eliminated and the existing exact analytic results are recovered.

  16. Unary probabilistic and quantum automata on promise problems

    NASA Astrophysics Data System (ADS)

    Gainutdinova, Aida; Yakaryılmaz, Abuzer

    2018-02-01

    We continue the systematic investigation of probabilistic and quantum finite automata (PFAs and QFAs) on promise problems by focusing on unary languages. We show that bounded-error unary QFAs are more powerful than bounded-error unary PFAs, and, contrary to the binary language case, the computational power of Las-Vegas QFAs and bounded-error PFAs is equivalent to the computational power of deterministic finite automata (DFAs). Then, we present a new family of unary promise problems defined with two parameters such that when fixing one parameter QFAs can be exponentially more succinct than PFAs and when fixing the other parameter PFAs can be exponentially more succinct than DFAs.

  17. 46 CFR 108.437 - Pipe sizes and discharge rates for enclosed ventilation systems for rotating electrical equipment.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    46 CFR § 108.437 (Fixed Carbon Dioxide Fire Extinguishing Systems): Pipe sizes and discharge rates for enclosed ventilation systems for rotating electrical equipment. (a) The minimum pipe size for the initial...

  18. Modeling motor vehicle crashes using Poisson-gamma models: examining the effects of low sample mean values and small sample size on the estimation of the fixed dispersion parameter.

    PubMed

    Lord, Dominique

    2006-07-01

    There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made for improving the estimation tools of statistical models, the most common probabilistic structure used for modeling motor vehicle crashes remains the traditional Poisson and Poisson-gamma (or Negative Binomial) distribution; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model of choice most favored by transportation safety modelers. Crash data collected for safety studies often have the unusual attribute of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size. Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, the weighted regression, and the maximum likelihood method. In an attempt to complement the outcome of the simulation study, Poisson-gamma models were fitted to crash data collected in Toronto, Ont., characterized by a low sample mean and small sample size. The study shows that a low sample mean combined with a small sample size can seriously affect the estimation of the dispersion parameter, no matter which estimator is used within the estimation process. The probability that the dispersion parameter is unreliably estimated increases significantly as the sample mean and sample size decrease. Consequently, the results show that an unreliably estimated dispersion parameter can significantly undermine empirical Bayes (EB) estimates as well as the estimation of confidence intervals for the gamma mean and predicted response. The paper ends with recommendations about minimizing the likelihood of producing Poisson-gamma models with an unreliable dispersion parameter for modeling motor vehicle crashes.
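    As a concrete illustration of why estimation breaks down under the LMP, the hedged C++ sketch below (illustrative parameter values, not the paper's simulation protocol) draws a small Poisson-gamma sample with a low mean and applies the method-of-moments estimator implied by Var(y) = μ + αμ².

        #include <cmath>
        #include <cstdio>
        #include <random>
        #include <vector>

        int main() {
            // Simulate Poisson-gamma (negative binomial) "crash counts" with a
            // low sample mean, then estimate the dispersion parameter alpha by
            // the method of moments: alphaHat = (s^2 - mean) / mean^2.
            const double mu = 0.5;    // low sample mean, as in LMP scenarios
            const double alpha = 1.0; // true (fixed) dispersion parameter
            const int n = 50;         // small sample size

            std::mt19937 rng(42);
            // Gamma with shape 1/alpha and scale alpha*mu has mean mu,
            // so the mixed Poisson has Var = mu + alpha*mu^2.
            std::gamma_distribution<double> gam(1.0 / alpha, alpha * mu);
            std::vector<int> y(n);
            for (int i = 0; i < n; ++i) {
                std::poisson_distribution<int> pois(gam(rng));
                y[i] = pois(rng);
            }

            double m = 0.0, s2 = 0.0;
            for (int v : y) m += v;
            m /= n;
            for (int v : y) s2 += (v - m) * (v - m);
            s2 /= (n - 1);

            // With small n and low mu, s2 often falls at or below m, leaving
            // alphaHat near zero or negative -- the unreliable estimation the
            // paper describes.
            double alphaHat = (s2 - m) / (m * m);
            std::printf("mean=%.3f var=%.3f alphaHat=%.3f (true %.1f)\n",
                        m, s2, alphaHat, alpha);
            return 0;
        }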

  19. Wall shear stress fixed points in blood flow

    NASA Astrophysics Data System (ADS)

    Arzani, Amirhossein; Shadden, Shawn

    2017-11-01

    Patient-specific computational fluid dynamics produces large datasets, and wall shear stress (WSS) is one of the most important parameters due to its close connection with the biological processes at the wall. While some studies have investigated WSS vectorial features, the WSS fixed points have not received much attention. In this talk, we will discuss the importance of WSS fixed points from three viewpoints. First, we will review how WSS fixed points relate to the flow physics away from the wall. Second, we will discuss how certain types of WSS fixed points lead to high biochemical surface concentration in cardiovascular mass transport problems. Finally, we will introduce a new measure to track the exposure of endothelial cells to WSS fixed points.

  20. Theoretical size distribution of fossil taxa: analysis of a null model.

    PubMed

    Reed, William J; Hughes, Barry D

    2007-03-22

    This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family.

  1. 7 CFR 993.503 - Size category.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    7 CFR § 993.503, Regulations of the Department of Agriculture (Continued), Agricultural Marketing Service (Marketing Agreements...): Size category. ...categories listed in § 993.515 and fixes the range or the limits of the various size counts. Effective Date...

  2. The Design of a Templated C++ Small Vector Class for Numerical Computing

    NASA Technical Reports Server (NTRS)

    Moran, Patrick J.

    2000-01-01

    We describe the design and implementation of a templated C++ class for vectors. The vector class is templated both for vector length and vector component type; the vector length is fixed at template instantiation time. The vector implementation is such that for a vector of N components of type T, the total number of bytes required by the vector is equal to N * sizeof(T), where sizeof is the built-in C operator. The property of having a size no bigger than that required by the components themselves is key in many numerical computing applications, where one may allocate very large arrays of small, fixed-length vectors. In addition to the design trade-offs motivating our fixed-length vector design choice, we review some of the C++ template features essential to an efficient, succinct implementation. In particular, we highlight some of the standard C++ features, such as partial template specialization, that are not currently supported by all compilers. This report provides an inventory listing the relevant support currently provided by some key compilers, as well as test code one can use to verify compiler capabilities.
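    The size guarantee described above is easy to reproduce. The following minimal sketch (illustrative names, not the report's actual class) shows a length- and type-templated vector whose only storage is a raw array, so the compile-time size assertion holds:

        #include <cstddef>

        // Minimal sketch of a fixed-length vector in the spirit described
        // above: length N and component type T are template parameters and
        // storage is a raw array, so sizeof(Vec<T, N>) == N * sizeof(T)
        // (no length field, no extra padding beyond T's own alignment).
        template <typename T, std::size_t N>
        class Vec {
            T v[N];
        public:
            T&       operator[](std::size_t i)       { return v[i]; }
            const T& operator[](std::size_t i) const { return v[i]; }

            // Component-wise addition; the loop bound is a compile-time
            // constant, so compilers can fully unroll it for small N.
            Vec operator+(const Vec& o) const {
                Vec r;
                for (std::size_t i = 0; i < N; ++i) r.v[i] = v[i] + o.v[i];
                return r;
            }
        };

        static_assert(sizeof(Vec<double, 3>) == 3 * sizeof(double),
                      "no per-object size overhead");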

  3. Influence maximization in complex networks through optimal percolation

    NASA Astrophysics Data System (ADS)

    Morone, Flaviano; Makse, Hernán A.

    2015-08-01

    The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Despite the vast use of heuristic strategies to identify influential spreaders, the problem remains unsolved. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. These are topologically tagged as low-degree nodes surrounded by hierarchical coronas of hubs, and are uncovered only through the optimal collective interplay of all the influencers in the network. The present theoretical framework may hold a larger degree of universality, being applicable to other hard optimization problems exhibiting a continuous transition from a known phase.

  4. Influence maximization in complex networks through optimal percolation.

    PubMed

    Morone, Flaviano; Makse, Hernán A

    2015-08-06

    The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Despite the vast use of heuristic strategies to identify influential spreaders, the problem remains unsolved. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. These are topologically tagged as low-degree nodes surrounded by hierarchical coronas of hubs, and are uncovered only through the optimal collective interplay of all the influencers in the network. The present theoretical framework may hold a larger degree of universality, being applicable to other hard optimization problems exhibiting a continuous transition from a known phase.
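    A concrete handle on this approach is the Collective Influence score used in this line of work, CI_l(i) = (k_i - 1) * sum over the frontier of the ball of radius l around node i of (k_j - 1); influencers are then removed greedily in order of decreasing score. The C++ sketch below is an illustration only (toy graph, fixed radius, no greedy removal loop), not the authors' optimized implementation:

        #include <cstdio>
        #include <queue>
        #include <vector>

        // Collective Influence: CI_l(i) = (k_i - 1) * sum_{j at distance l}
        // (k_j - 1), computed here with a breadth-first search from i.
        using Graph = std::vector<std::vector<int>>;

        long long ciScore(const Graph& g, int src, int l) {
            std::vector<int> dist(g.size(), -1);
            std::queue<int> q;
            dist[src] = 0;
            q.push(src);
            long long frontier = 0;
            while (!q.empty()) {
                int u = q.front(); q.pop();
                if (dist[u] == l) {          // on the ball's frontier
                    frontier += (long long)g[u].size() - 1;
                    continue;                // do not expand past radius l
                }
                for (int w : g[u])
                    if (dist[w] < 0) { dist[w] = dist[u] + 1; q.push(w); }
            }
            return ((long long)g[src].size() - 1) * frontier;
        }

        int main() {
            // Toy graph: hub 0 joining two small triangles; the hub should
            // receive the highest score.
            Graph g = {{1, 2, 3, 4}, {0, 2}, {0, 1}, {0, 4}, {0, 3}};
            for (int i = 0; i < (int)g.size(); ++i)
                std::printf("CI_1(%d) = %lld\n", i, ciScore(g, i, 1));
            return 0;
        }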

  5. Products of random matrices from fixed trace and induced Ginibre ensembles

    NASA Astrophysics Data System (ADS)

    Akemann, Gernot; Cikovic, Milan

    2018-05-01

    We investigate the microcanonical version of the complex induced Ginibre ensemble, by introducing a fixed trace constraint for its second moment. Like for the canonical Ginibre ensemble, its complex eigenvalues can be interpreted as a two-dimensional Coulomb gas, which are now subject to a constraint and a modified, collective confining potential. Despite the lack of determinantal structure in this fixed trace ensemble, we compute all its density correlation functions at finite matrix size and compare to a fixed trace ensemble of normal matrices, representing a different Coulomb gas. Our main tool of investigation is the Laplace transform, which maps the fixed trace ensemble back to the induced Ginibre ensemble. Products of random matrices have been used to study the Lyapunov and stability exponents for chaotic dynamical systems, where the latter are based on the complex eigenvalues of the product matrix. Because little is known about the universality of the eigenvalue distribution of such product matrices, we then study the product of m induced Ginibre matrices with a fixed trace constraint—which are clearly non-Gaussian—and M − m such Ginibre matrices without constraint. Using an m-fold inverse Laplace transform, we obtain a concise result for the spectral density of such a mixed product matrix at finite matrix size, for arbitrary fixed m and M. Very recently local and global universality was proven by the authors and their coworker for a more general, single elliptic fixed trace ensemble in the bulk of the spectrum. Here, we argue that the spectral density of mixed products is in the same universality class as the product of M independent induced Ginibre ensembles.

  6. FUJIFILM X10 white orbs and DeOrbIt

    NASA Astrophysics Data System (ADS)

    Dietz, Henry Gordon

    2013-01-01

    The FUJIFILM X10 is a high-end enthusiast compact digital camera using an unusual sensor design. Unfortunately, upon its Fall 2011 release, the camera quickly became infamous for the uniquely disturbing "white orbs" that often appeared in areas where the sensor was saturated. FUJIFILM's first attempt at a fix was firmware released on February 25, 2012, but it had little effect. In April 2012, a sensor replacement essentially solved the problem. This paper explores the "white orb" phenomenon in detail. After FUJIFILM's attempt at a firmware fix failed, the author decided to create a post-processing tool that could automatically repair existing images. DeOrbIt was released as a free tool on March 7, 2012. To better understand the problem and how to fix it, the WWW form version of the tool logs images, processing parameters, and evaluations by users. The current paper describes the technical problem, the novel computational photography methods used by DeOrbIt to repair affected images, and the public perceptions revealed by this experiment.

  7. Improved Time-Lapsed Angular Scattering Microscopy of Single Cells

    NASA Astrophysics Data System (ADS)

    Cannaday, Ashley E.

    By measuring angular scattering patterns from biological samples and fitting them with a Mie theory model, one can estimate the organelle size distribution within many cells. Quantitative organelle sizing of ensembles of cells using this method has been well established. Our goal is to develop the methodology to extend this approach to the single cell level, measuring the angular scattering at multiple time points and estimating the non-nuclear organelle size distribution parameters. The diameters of individual organelle-size beads were successfully extracted using scattering measurements with a minimum deflection angle of 20 degrees. However, the accuracy of size estimates can be limited by the angular range detected. In particular, simulations by our group suggest that, for cell organelle populations with a broader size distribution, the accuracy of size prediction improves substantially if the minimum detection angle is 15 degrees or less. The system was therefore modified to collect scattering angles down to 10 degrees. To confirm experimentally that size predictions become more stable when lower scattering angles are detected, initial validations were performed on individual polystyrene beads ranging in diameter from 1 to 5 microns. We found that the lower minimum angle enabled the width of this delta-function size distribution to be predicted more accurately. Scattering patterns were then acquired and analyzed from single mouse squamous cell carcinoma cells at multiple time points. The scattering patterns exhibit angular dependencies that look unlike those of any single sphere size, but are well fit by a broad distribution of sizes, as expected. To determine the fluctuation level in the estimated size distribution due to measurement imperfections alone, formaldehyde-fixed cells were measured. Subsequent measurements on live (non-fixed) cells revealed an order of magnitude greater fluctuation in the estimated sizes compared with fixed cells. With our improved and better-understood approach to single cell angular scattering, we are now capable of reliably detecting changes in organelle size predictions due to biological causes above our measurement error of 20 nm, which enables us to apply our system to future investigations of various single-cell biological processes.

  8. The Impact of Policies Influencing the Demography of Age-Structured Populations: Lessons from Academies of Sciences

    PubMed Central

    Riosmena, Fernando; Winkler-Dworak, Maria; Prskawetz, Alexia; Feichtinger, Gustav

    2013-01-01

    In this paper, we assess the role of policies aimed at regulating the number and age structure of elections on the size and age structure of five European Academies of Sciences. We show the recent pace of ageing and the degree of variation in policies across them and discuss the implications of different policies on the size and age structure of academies. We also illustrate the potential effect of different election regimes (fixed vs. linked) and age structures of election (younger vs. older) by contrasting the steady-state dynamics of different projections of Full Members in each academy into 2070 and measuring the size and age-compositional effect of changing a given policy relative to a status quo policy scenario. Our findings suggest that academies with linked intake (i.e., where the size of the academy below a certain age is fixed and the number of elections is set to the number of members becoming that age) may be a more efficient approach to curb growth without suffering any ageing trade-offs relative to the faster growth of academies electing a fixed number of members per year. We further discuss the implications of our results in the context of stable populations open to migration. PMID:23843677

  9. A Novel Riemannian Metric Based on Riemannian Structure and Scaling Information for Fixed Low-Rank Matrix Completion.

    PubMed

    Mao, Shasha; Xiong, Lin; Jiao, Licheng; Feng, Tian; Yeung, Sai-Kit

    2017-05-01

    Riemannian optimization has been widely used to deal with the fixed low-rank matrix completion problem, and the Riemannian metric is a crucial factor in obtaining the search direction in Riemannian optimization. This paper proposes a new Riemannian metric that simultaneously considers the Riemannian geometry structure and the scaling information, and that is smoothly varying and invariant along the equivalence class. The proposed metric can make an effective tradeoff between the Riemannian geometry structure and the scaling information. Essentially, it can be viewed as a generalization of some existing metrics. Based on the proposed Riemannian metric, we also design a Riemannian nonlinear conjugate gradient algorithm, which can efficiently solve the fixed low-rank matrix completion problem. Experiments on fixed low-rank matrix completion, collaborative filtering, and image and video recovery illustrate that the proposed method is superior to the state-of-the-art methods in convergence efficiency and numerical performance.
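    For readers unfamiliar with the problem setup, the sketch below shows fixed-rank matrix completion in its simplest factorized form, fitting X = U·Vᵀ to observed entries by plain gradient descent. This is a baseline only; it does not implement the Riemannian metric or conjugate gradient algorithm proposed in the paper, and all sizes and step values are illustrative.

        #include <cstdio>
        #include <random>
        #include <vector>

        // Fixed low-rank completion: given entries M(i,j) observed on a set
        // Omega, fit a rank-r matrix X = U * V^T by minimizing the squared
        // error on Omega with (stochastic) gradient steps on the factors.
        struct Entry { int i, j; double val; };

        int main() {
            const int m = 30, n = 30, r = 2;
            const double step = 0.02;

            // Random rank-2 ground truth; observe roughly half its entries.
            std::mt19937 rng(1);
            std::normal_distribution<double> N01(0.0, 1.0);
            std::vector<std::vector<double>> A(m, std::vector<double>(r));
            std::vector<std::vector<double>> B(n, std::vector<double>(r));
            for (auto& row : A) for (double& x : row) x = N01(rng);
            for (auto& row : B) for (double& x : row) x = N01(rng);
            std::bernoulli_distribution seen(0.5);
            std::vector<Entry> omega;
            for (int i = 0; i < m; ++i)
                for (int j = 0; j < n; ++j)
                    if (seen(rng)) {
                        double v = 0;
                        for (int k = 0; k < r; ++k) v += A[i][k] * B[j][k];
                        omega.push_back({i, j, v});
                    }

            // Gradient descent on U, V from a small random start.
            std::vector<std::vector<double>> U(m, std::vector<double>(r));
            std::vector<std::vector<double>> V(n, std::vector<double>(r));
            for (auto& row : U) for (double& x : row) x = 0.1 * N01(rng);
            for (auto& row : V) for (double& x : row) x = 0.1 * N01(rng);
            for (int it = 0; it < 500; ++it) {
                double loss = 0;
                for (const Entry& e : omega) {
                    double pred = 0;
                    for (int k = 0; k < r; ++k) pred += U[e.i][k] * V[e.j][k];
                    double res = pred - e.val;
                    loss += res * res;
                    for (int k = 0; k < r; ++k) {  // gradient of squared error
                        double u = U[e.i][k], v = V[e.j][k];
                        U[e.i][k] -= step * res * v;
                        V[e.j][k] -= step * res * u;
                    }
                }
                if (it % 100 == 0) std::printf("iter %d, loss %.4f\n", it, loss);
            }
            return 0;
        }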

  10. Trade off between variable and fixed size normalization in orthogonal polynomials based iris recognition system.

    PubMed

    Krishnamoorthi, R; Anna Poorani, G

    2016-01-01

    Iris normalization is an important stage in any iris biometric, as it tends to reduce the consequences of iris distortion. To compensate for variation in iris size, owing to pupil constriction or dilation during the acquisition process and to camera-to-eyeball distance, two normalization schemes have been proposed in this work. In the first method, the iris region of interest is normalized by converting the iris into a variable-size rectangular model in order to avoid undersampling near the limbus border. In the second method, the iris region of interest is normalized by converting the iris region into a fixed-size rectangular model in order to avoid dimensional discrepancies between eye images. The performance of the proposed normalization methods is evaluated with orthogonal-polynomials-based iris recognition in terms of FAR, FRR, GAR, CRR and EER.
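    The fixed-size scheme is essentially a rubber-sheet resampling: the annulus between the pupil boundary and the limbus is mapped onto a fixed H x W rectangle, so eyes imaged at different distances or dilations yield arrays of identical dimensions. The C++ sketch below uses a simplified concentric-circle model with illustrative parameters (a real system estimates both boundaries per image) and prints the image coordinates each rectangle cell would sample:

        #include <cmath>
        #include <cstdio>

        int main() {
            const double PI = 3.14159265358979323846;
            const int H = 8, W = 16;          // fixed radial x angular grid
            const double cx = 120, cy = 100;  // shared center (simplification)
            const double rPupil = 30, rLimbus = 80;

            for (int u = 0; u < H; ++u) {     // radial index: pupil -> limbus
                for (int v = 0; v < W; ++v) { // angular index: 0 -> 2*pi
                    double rho = (u + 0.5) / H;
                    double theta = 2.0 * PI * v / W;
                    double r = rPupil + rho * (rLimbus - rPupil);
                    double x = cx + r * std::cos(theta); // source pixel
                    double y = cy + r * std::sin(theta);
                    if (u == 0 && v < 4)      // print a few sample mappings
                        std::printf("rect(%d,%d) <- image(%.1f, %.1f)\n",
                                    u, v, x, y);
                }
            }
            return 0;
        }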

  11. Influences on Cocaine Tolerance Assessed under a Multiple Conjunctive Schedule of Reinforcement

    ERIC Educational Resources Information Center

    Yoon, Jin Ho; Branch, Marc N.

    2009-01-01

    Under multiple schedules of reinforcement, previous research has generally observed tolerance to the rate-decreasing effects of cocaine that has been dependent on schedule-parameter size in the context of fixed-ratio (FR) schedules, but not under the context of fixed-interval (FI) schedules of reinforcement. The current experiment examined the…

  12. A prospective randomised comparative parallel study of amniotic membrane wound graft in the management of diabetic foot ulcers

    PubMed Central

    Zelen, Charles M; Serena, Thomas E; Denoziere, Guilhem; Fetterolf, Donald E

    2013-01-01

    Our purpose was to compare healing characteristics of diabetic foot ulcers treated with dehydrated human amniotic membrane allografts (EpiFix®, MiMedx, Kennesaw, GA) versus standard of care. An IRB-approved, prospective, randomised, single-centre clinical trial was performed. Included were patients with a diabetic foot ulcer of at least 4-week duration without infection having adequate arterial perfusion. Patients were randomised to receive standard care alone or standard care with the addition of EpiFix. Wound size reduction and rates of complete healing after 4 and 6 weeks were evaluated. In the standard care group (n = 12) and the EpiFix group (n = 13) wounds reduced in size by a mean of 32·0% ± 47·3% versus 97·1% ± 7·0% (P < 0·001) after 4 weeks, whereas at 6 weeks wounds were reduced by −1·8% ± 70·3% versus 98·4% ± 5·8% (P < 0·001), standard care versus EpiFix, respectively. After 4 and 6 weeks of treatment the overall healing rate with application of EpiFix was shown to be 77% and 92%, respectively, whereas standard care healed 0% and 8% of the wounds (P < 0·001), respectively. Patients treated with EpiFix achieved superior healing rates over standard treatment alone. These results show that using EpiFix in addition to standard care is efficacious for wound healing. PMID:23742102

  13. Constraining response output on conjunctive fixed-ratio 1 fixed-time reinforcement schedules: Effects on the postreinforcement pause.

    PubMed

    Lopez, F; Pereira, C

    1985-03-01

    Two experiments used response-restriction procedures in order to test the independence of the factors determining response rate and the factors determining the size of the postreinforcement pause on interval schedules. Responding was restricted by response-produced blackout or by retracting the lever. In Experiment 1 with a Conjunctive FR 1 FT schedule, the blackout procedure reduced the postreinforcement pause more than the lever-retraction procedure did, and both procedures produced shorter pauses than did the schedule without response restriction. In Experiment 2 the interreinforcement interval was also manipulated, and the size of the pause was an increasing function of the interreinforcement interval, but the rate of increase was lower than that produced by fixed interval schedules of comparable interval durations. The assumption of functional independence of the postreinforcement pause and terminal rate in fixed interval schedules is questioned since data suggest that pause reductions resulted from constraining variation in response number compared to equivalent periodic schedules in which response number was allowed to vary. Copyright © 1985. Published by Elsevier B.V.

  14. Reporting Point and Interval Estimates of Effect-Size for Planned Contrasts: Fixed within Effect Analyses of Variance

    ERIC Educational Resources Information Center

    Robey, Randall R.

    2004-01-01

    The purpose of this tutorial is threefold: (a) review the state of statistical science regarding effect-sizes, (b) illustrate the importance of effect-sizes for interpreting findings in all forms of research and particularly for results of clinical-outcome research, and (c) demonstrate just how easily a criterion on reporting effect-sizes in…

  15. The scaling relationship between baryonic mass and stellar disc size in morphologically late-type galaxies

    NASA Astrophysics Data System (ADS)

    Wu, Po-Feng

    2018-02-01

    Here I report the scaling relationship between the baryonic mass and scale-length of stellar discs for ∼1000 morphologically late-type galaxies. The baryonic mass-size relationship is a single power law R_* ∝ M_b^0.38 across ∼3 orders of magnitude in baryonic mass. The scatter in size at fixed baryonic mass is nearly constant and there are no outliers. The baryonic mass-size relationship provides a more fundamental description of the structure of the disc than the stellar mass-size relationship. The slope and the scatter of the stellar mass-size relationship can be understood in the context of the baryonic mass-size relationship. For gas-rich galaxies, the stars are no longer a good tracer for the baryons. High-baryonic-mass, gas-rich galaxies appear to be much larger at fixed stellar mass because most of the baryonic content is gas. The stellar mass-size relationship thus deviates from the power-law baryonic relationship, and the scatter increases at the low-stellar-mass end. These extremely gas-rich low-mass galaxies can be classified as ultra-diffuse galaxies based on the structure.

  16. Training set size, scale, and features in Geographic Object-Based Image Analysis of very high resolution unmanned aerial vehicle imagery

    NASA Astrophysics Data System (ADS)

    Ma, Lei; Cheng, Liang; Li, Manchun; Liu, Yongxue; Ma, Xiaoxue

    2015-04-01

    Unmanned Aerial Vehicle (UAV) has been used increasingly for natural resource applications in recent years due to their greater availability and the miniaturization of sensors. In addition, Geographic Object-Based Image Analysis (GEOBIA) has received more attention as a novel paradigm for remote sensing earth observation data. However, GEOBIA generates some new problems compared with pixel-based methods. In this study, we developed a strategy for the semi-automatic optimization of object-based classification, which involves an area-based accuracy assessment that analyzes the relationship between scale and the training set size. We found that the Overall Accuracy (OA) increased as the training set ratio (proportion of the segmented objects used for training) increased when the Segmentation Scale Parameter (SSP) was fixed. The OA increased more slowly as the training set ratio became larger and a similar rule was obtained according to the pixel-based image analysis. The OA decreased as the SSP increased when the training set ratio was fixed. Consequently, the SSP should not be too large during classification using a small training set ratio. By contrast, a large training set ratio is required if classification is performed using a high SSP. In addition, we suggest that the optimal SSP for each class has a high positive correlation with the mean area obtained by manual interpretation, which can be summarized by a linear correlation equation. We expect that these results will be applicable to UAV imagery classification to determine the optimal SSP for each class.

  17. The evolution of complex life cycles when parasite mortality is size- or time-dependent.

    PubMed

    Ball, M A; Parker, G A; Chubb, J C

    2008-07-07

    In complex cycles, helminth larvae in their intermediate hosts typically grow to a fixed size. We define this cessation of growth before transmission to the next host as growth arrest at larval maturity (GALM). Where the larval parasite controls its own growth in the intermediate host, in order that growth eventually arrests, some form of size- or time-dependent increase in its death rate must apply. In contrast, the switch from growth to sexual reproduction in the definitive host can be regulated by constant (time-independent) mortality as in standard life history theory. We here develop a step-wise model for the evolution of complex helminth life cycles through trophic transmission, based on the approach of Parker et al. [2003a. Evolution of complex life cycles in helminth parasites. Nature London 425, 480-484], but which includes size- or time-dependent increase in mortality rate. We assume that the growing larval parasite has two components to its death rate: (i) a constant, size- or time-independent component, and (ii) a component that increases with size or time in the intermediate host. When growth stops at larval maturity, there is a discontinuous change in mortality to a constant (time-independent) rate. This model generates the same optimal size for the parasite larva at GALM in the intermediate host whether the evolutionary approach to the complex life cycle is by adding a new host above the original definitive host (upward incorporation), or below the original definitive host (downward incorporation). We discuss some unexplored problems for cases where complex life cycles evolve through trophic transmission.

  18. Edge detection of optical subaperture image based on improved differential box-counting method

    NASA Astrophysics Data System (ADS)

    Li, Yi; Hui, Mei; Liu, Ming; Dong, Liquan; Kong, Lingqin; Zhao, Yuejin

    2018-01-01

    Optical synthetic aperture imaging technology is an effective approach to improving imaging resolution. Compared with a monolithic mirror system, the image from an optical synthetic aperture system is often more complex at the edge, and the gaps between segments make stitching a difficult problem, so it is necessary to extract the edge of each subaperture image to achieve effective stitching. Fractal dimension, as a measurable feature, can describe image surface texture characteristics, which provides a new approach for edge detection. In our research, an improved differential box-counting method is used to calculate the fractal dimension of the image, and the obtained fractal dimension is then mapped to a grayscale image to detect edges. Compared with the original differential box-counting method, this method has two improvements: first, by modifying the box-counting mechanism, a box with a fixed height is replaced by a box with adaptive height, which solves the problem of over-counting the number of boxes covering the image intensity surface; second, an image reconstruction method based on a super-resolution convolutional neural network is used to enlarge small images, which addresses the problem that the fractal dimension cannot be calculated accurately for small images while maintaining the scale invariance of the fractal dimension. The experimental results show that the proposed algorithm can effectively eliminate noise and has a lower false-detection rate than traditional edge detection algorithms. In addition, this algorithm maintains the integrity and continuity of image edges while retaining important edge information.
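    For reference, the classical differential box-counting (DBC) estimate that the paper improves upon partitions the image into s x s blocks, counts boxes of height h = s*G/M between each block's minimum and maximum gray level, and takes the fractal dimension as the slope of log N_r against log(1/r). The self-contained C++ sketch below runs on a synthetic texture; the paper's adaptive box height and CNN upscaling are deliberately not included.

        #include <algorithm>
        #include <cmath>
        #include <cstdio>
        #include <vector>

        // Classical DBC estimate of fractal dimension for a grayscale image
        // (fixed box height h = s*G/M at each scale s).
        int main() {
            const int M = 64, G = 256;  // image size, gray levels
            std::vector<std::vector<int>> img(M, std::vector<int>(M));
            for (int i = 0; i < M; ++i)  // synthetic textured surface
                for (int j = 0; j < M; ++j)
                    img[i][j] = (int)(127 + 60 * std::sin(0.4 * i) * std::cos(0.7 * j)
                                      + 30 * ((i * 31 + j * 17) % 7 - 3) / 3.0);

            double sx = 0, sy = 0, sxx = 0, sxy = 0; int npts = 0;
            for (int s = 2; s <= M / 4; s *= 2) {  // scales 2, 4, 8, 16
                double h = (double)s * G / M;      // box height at this scale
                long long Nr = 0;
                for (int bi = 0; bi < M; bi += s)
                    for (int bj = 0; bj < M; bj += s) {
                        int lo = 255, hi = 0;      // block min/max gray level
                        for (int i = bi; i < bi + s; ++i)
                            for (int j = bj; j < bj + s; ++j) {
                                lo = std::min(lo, img[i][j]);
                                hi = std::max(hi, img[i][j]);
                            }
                        Nr += (long long)std::ceil((hi + 1) / h)
                            - (long long)std::ceil((lo + 1) / h) + 1;
                    }
                double x = std::log(1.0 / s), y = std::log((double)Nr);
                sx += x; sy += y; sxx += x * x; sxy += x * y; ++npts;
            }
            // Least-squares slope of log(Nr) vs log(1/r) = fractal dimension.
            double D = (npts * sxy - sx * sy) / (npts * sxx - sx * sx);
            std::printf("estimated fractal dimension D = %.3f\n", D);
            return 0;
        }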

  19. Suprathreshold contrast summation over area using drifting gratings.

    PubMed

    McDougall, Thomas J; Dickinson, J Edwin; Badcock, David R

    2018-04-01

    This study investigated contrast summation over area for moving targets applied to a fixed-size contrast pedestal, a technique originally developed by Meese and Summers (2007) to demonstrate strong spatial summation of contrast for static patterns at suprathreshold contrast levels. Target contrast increments (drifting gratings) were applied to either the entire 20% contrast pedestal (a full fixed-size drifting grating), or in the configuration of a checkerboard pattern in which the target increment was applied to every alternate check region. These checked stimuli are known as "Battenberg patterns" and the sizes of the checks were varied, within a fixed overall area, across conditions to measure summation behavior. Results showed that sensitivity to an increment covering the full pedestal was significantly higher than that for the Battenberg patterns (areal summation). Two observers showed strong summation across all check sizes (0.71°-3.33°), and for two other observers the summation ratio dropped to levels consistent with probability summation once check size reached 2.00°. Therefore, areal summation with moving targets does operate at high contrast, and is subserved by relatively large receptive fields covering a square area extending up to at least 3.33° × 3.33° for some observers. Previous studies in which the spatial structure of the pedestal and target covaried were unable to demonstrate spatial summation, potentially due to increasing amounts of suppression from gain-control mechanisms which increases as pedestal size increases. This study shows that when this is controlled, by keeping the pedestal the same across all conditions, extensive summation can be demonstrated.

  20. Optimizing trial design in pharmacogenetics research: comparing a fixed parallel group, group sequential, and adaptive selection design on sample size requirements.

    PubMed

    Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit

    2013-01-01

    Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow early stopping for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.

  1. Medical Malpractice Reform: A Fix for a Problem Long out of Fashion.

    PubMed

    Kirkner, Richard Mark

    2017-10-01

    State tort reforms have all but relegated the malpractice crisis to the history books. But there's good news for those of you into all things retro: The House of Representatives just voted to fix the malpractice crisis by a 222-197 margin.

  2. The more the heavier? Family size and childhood obesity in the U.S.

    PubMed

    Datar, Ashlesha

    2017-05-01

    Childhood obesity remains a top public health concern and understanding its drivers is important for combating this epidemic. Contemporaneous trends of declining family size and increasing childhood obesity in the U.S. suggest that family size may be a potential contributor, but limited evidence exists. Using data from a national sample of children in the U.S., this study examines whether family size, measured by the number of siblings a child has, is associated with child BMI and obesity, and the possible mechanisms at work. The potential endogeneity of family size is addressed by several complementary approaches, including sequentially introducing a rich set of controls, conducting subgroup analyses, and estimating school fixed-effects and child fixed-effects models. Results suggest that having more siblings is associated with significantly lower BMI and lower likelihood of obesity. Children with siblings have healthier diets and watch less television. Family mealtimes, less eating out, reduced maternal work, and increased adult supervision of children are potential mechanisms through which family size is protective against childhood obesity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Onsite Calibration of a Precision IPRT Based on Gallium and Gallium-Based Small-Size Eutectic Points

    NASA Astrophysics Data System (ADS)

    Sun, Jianping; Hao, Xiaopeng; Zeng, Fanchao; Zhang, Lin; Fang, Xinyun

    2017-04-01

    Onsite thermometer calibration with temperature scale transfer technology based on fixed points can effectively improve the level of industrial temperature measurement and calibration. The present work performs an onsite calibration of a precision industrial platinum resistance thermometer near room temperature. The calibration is based on a series of small-size eutectic points, including Ga-In (15.7°C), Ga-Sn (20.5°C), Ga-Zn (25.2°C), and a Ga fixed point (29.7°C), developed in a portable multi-point automatic realization apparatus. The temperature plateaus of the Ga-In, Ga-Sn, and Ga-Zn eutectic points and the Ga fixed point last for longer than 2 h, and their reproducibility was better than 5 mK. The device is suitable for calibrating non-detachable temperature sensors in advanced environmental laboratories and industrial fields.

  4. High-Throughput Amplicon-Based Copy Number Detection of 11 Genes in Formalin-Fixed Paraffin-Embedded Ovarian Tumour Samples by MLPA-Seq

    PubMed Central

    Kondrashova, Olga; Love, Clare J.; Lunke, Sebastian; Hsu, Arthur L.; Waring, Paul M.; Taylor, Graham R.

    2015-01-01

    Whilst next generation sequencing can report point mutations in fixed tissue tumour samples reliably, the accurate determination of copy number is more challenging. The conventional Multiplex Ligation-dependent Probe Amplification (MLPA) assay is an effective tool for measurement of gene dosage, but is restricted to around 50 targets due to the size resolution of the MLPA probes. By switching from a size-resolved format to a sequence-resolved format, we developed a scalable, high-throughput, quantitative assay. MLPA-seq is capable of detecting deletions, duplications, and amplifications in as little as 5 ng of genomic DNA, including from formalin-fixed paraffin-embedded (FFPE) tumour samples. We show that this method can detect BRCA1, BRCA2, ERBB2 and CCNE1 copy number changes in DNA extracted from snap-frozen and FFPE tumour tissue, with 100% sensitivity and >99.5% specificity. PMID:26569395

  5. New horizons in orthodontics & dentofacial orthopedics: fixed Twin Blocks & TransForce lingual appliances.

    PubMed

    Clark, William John

    2011-01-01

    During the 20th century functional appliances evolved from night time wear to more flexible appliances for increased day time wear to full time wear with Twin Block appliances. The current trend is towards fixed functional appliances and this paper introduces the Fixed Twin Block, bonded to the teeth to eliminate problems of compliance in functional therapy. TransForce lingual appliances are pre-activated and may be used in first phase treatment for sagittal and transverse arch development. Alternatively they may be integrated with fixed appliances at any stage of treatment.

  6. Finite-time and fixed-time synchronization analysis of inertial memristive neural networks with time-varying delays.

    PubMed

    Wei, Ruoyu; Cao, Jinde; Alsaedi, Ahmed

    2018-02-01

    This paper investigates the finite-time synchronization and fixed-time synchronization problems of inertial memristive neural networks with time-varying delays. By utilizing the Filippov discontinuous theory and Lyapunov stability theory, several sufficient conditions are derived to ensure finite-time synchronization of inertial memristive neural networks. Then, for the purpose of making the settling time independent of the initial conditions, we consider fixed-time synchronization. A novel criterion guaranteeing the fixed-time synchronization of inertial memristive neural networks is derived. Finally, three examples are provided to demonstrate the effectiveness of our main results.

  7. Efficient FPT Algorithms for (Strict) Compatibility of Unrooted Phylogenetic Trees.

    PubMed

    Baste, Julien; Paul, Christophe; Sau, Ignasi; Scornavacca, Celine

    2017-04-01

    In phylogenetics, a central problem is to infer the evolutionary relationships between a set of species X; these relationships are often depicted via a phylogenetic tree (a tree having its leaves labeled bijectively by elements of X and without degree-2 nodes) called the "species tree." One common approach for reconstructing a species tree consists in first constructing several phylogenetic trees from primary data (e.g., DNA sequences originating from some species in X), and then constructing a single phylogenetic tree maximizing the "concordance" with the input trees. The obtained tree is our estimation of the species tree and, when the input trees are defined on overlapping, but not identical, sets of labels, is called a "supertree." In this paper, we focus on two problems that are central when combining phylogenetic trees into a supertree: the compatibility and the strict compatibility problems for unrooted phylogenetic trees. These problems are strongly related, respectively, to the notions of "containing as a minor" and "containing as a topological minor" in the graph community. Both problems are known to be fixed parameter tractable in the number of input trees k, by using their expressibility in monadic second-order logic and a reduction to graphs of bounded treewidth. Motivated by the fact that the dependency on k of these algorithms is prohibitively large, we give the first explicit dynamic programming algorithms for solving these problems, both running in time [Formula: see text], where n is the total size of the input.

  8. Implicit solution of Navier-Stokes equations on staggered curvilinear grids using a Newton-Krylov method with a novel analytical Jacobian.

    NASA Astrophysics Data System (ADS)

    Borazjani, Iman; Asgharzadeh, Hafez

    2015-11-01

    Flow simulations involving complex geometries and moving boundaries suffer from time-step size restrictions and low convergence rates with explicit and semi-implicit schemes. Implicit schemes can be used to overcome these restrictions, but implementing an implicit solver for nonlinear equations such as the Navier-Stokes equations is not straightforward. Newton-Krylov subspace methods (NKMs) are among the most advanced iterative methods for solving nonlinear equations such as the implicit discretization of the Navier-Stokes equations. The efficiency of NKMs depends heavily on the Jacobian formation method: for example, automatic differentiation is very expensive, and matrix-free methods slow down as the mesh is refined. An analytical Jacobian is inexpensive, but deriving it for the Navier-Stokes equations on a staggered grid is challenging. An NKM with a novel analytical Jacobian was developed and validated against the Taylor-Green vortex and pulsatile flow in a 90-degree bend. The developed method successfully handled complex geometries such as an intracranial aneurysm with multiple overset grids and immersed boundaries. It is shown that the NKM with an analytical Jacobian is 3 to 25 times faster than the fixed-point implicit Runge-Kutta method, and more than 100 times faster than automatic differentiation, depending on the grid (size) and the flow problem. The developed methods are fully parallelized, with parallel efficiency of 80-90% on the problems tested.
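    The role of the analytical Jacobian is easiest to see at toy scale: Newton's method solves J·Δx = −f at each step, and supplying J in closed form avoids both automatic differentiation and matrix-free approximation. The sketch below uses an illustrative 2 x 2 system with a direct solve standing in for the Krylov iteration used on large grids; it is not the paper's solver.

        #include <cmath>
        #include <cstdio>

        // Newton iteration with an analytical Jacobian for the toy system
        // f1 = x^2 + y^2 - 4 = 0, f2 = x*y - 1 = 0.
        int main() {
            double x = 2.0, y = 0.2;
            for (int it = 0; it < 20; ++it) {
                double f1 = x * x + y * y - 4.0;
                double f2 = x * y - 1.0;
                // Analytical Jacobian J = [[2x, 2y], [y, x]];
                // solve J * [dx, dy]^T = -[f1, f2]^T by Cramer's rule.
                double a = 2 * x, b = 2 * y, c = y, d = x;
                double det = a * d - b * c;
                double dx = (-f1 * d + f2 * b) / det;
                double dy = (-f2 * a + f1 * c) / det;
                x += dx; y += dy;
                double res = std::sqrt(f1 * f1 + f2 * f2);
                std::printf("iter %d: x=%.6f y=%.6f |f|=%.2e\n", it, x, y, res);
                if (res < 1e-12) break;  // quadratic convergence near the root
            }
            return 0;
        }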

  9. Price elasticities of alcohol demand: evidence from Russia.

    PubMed

    Goryakin, Yevgeniy; Roberts, Bayard; McKee, Martin

    2015-03-01

    In this paper, we estimate price elasticities of demand for several types of alcoholic drinks, using 14 rounds of data from the Russia Longitudinal Monitoring Survey-HSE, collected from 1994 until 2009. We deal with potential confounding problems by taking advantage of a large number of control variables, as well as by estimating community fixed-effect models. All in all, although alcohol prices do appear to influence consumption behaviour in Russia, in most cases the size of the effect is modest. The finding that two particularly problematic drinks, cheap vodka and fortified wine, are substitute goods also suggests that increasing their prices may not lead to smaller alcohol consumption. Therefore, any alcohol pricing policies in Russia must be supplemented with other measures, such as restrictions on the number of sales outlets or their opening times.

  10. Theoretical size distribution of fossil taxa: analysis of a null model

    PubMed Central

    Reed, William J; Hughes, Barry D

    2007-01-01

    Background This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. Model New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. Conclusion The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family. PMID:17376249

  11. Effects of sample size on KERNEL home range estimates

    USGS Publications Warehouse

    Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.

    1999-01-01

    Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
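    The fixed-kernel estimate referred to above places a Gaussian kernel of a single bandwidth h on every relocation and takes the 95% home range as the smallest region holding 95% of the utilization distribution. The C++ sketch below computes that area on a grid for simulated points; the bandwidth is fixed at an illustrative value, and the LSCV selection the authors recommend is omitted.

        #include <algorithm>
        #include <cmath>
        #include <cstdio>
        #include <random>
        #include <vector>

        int main() {
            // Simulated relocations from a bivariate normal home range.
            std::mt19937 rng(7);
            std::normal_distribution<double> N01(0.0, 1.0);
            std::vector<double> px, py;
            for (int i = 0; i < 50; ++i) {
                px.push_back(N01(rng));
                py.push_back(N01(rng));
            }

            // Evaluate the fixed-kernel utilization distribution on a grid.
            const int G = 80;
            const double lo = -4, hi = 4, cell = (hi - lo) / G, h = 0.5;
            std::vector<double> mass;
            double total = 0;
            for (int i = 0; i < G; ++i)
                for (int j = 0; j < G; ++j) {
                    double x = lo + (i + 0.5) * cell;
                    double y = lo + (j + 0.5) * cell;
                    double f = 0;
                    for (size_t k = 0; k < px.size(); ++k) {
                        double dx = (x - px[k]) / h, dy = (y - py[k]) / h;
                        f += std::exp(-0.5 * (dx * dx + dy * dy));
                    }
                    mass.push_back(f);  // unnormalized mass in this cell
                    total += f;
                }

            // 95% home range = fewest highest-density cells holding 95% mass.
            std::sort(mass.begin(), mass.end(),
                      [](double a, double b) { return a > b; });
            double acc = 0; int cells = 0;
            for (double mcell : mass) {
                acc += mcell; ++cells;
                if (acc >= 0.95 * total) break;
            }
            std::printf("95%% home-range area ~ %.2f (units^2)\n",
                        cells * cell * cell);
            return 0;
        }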

  12. Effect of high-pressure homogenization preparation on mean globule size and large-diameter tail of oil-in-water injectable emulsions.

    PubMed

    Peng, Jie; Dong, Wu-Jun; Li, Ling; Xu, Jia-Ming; Jin, Du-Jia; Xia, Xue-Jun; Liu, Yu-Ling

    2015-12-01

    The effects of different high-pressure homogenization energy-input parameters on the mean droplet size (MDS) and on droplets larger than 5 μm in lipid injectable emulsions were evaluated. All emulsions were prepared at different water bath temperatures or at different rotation speeds and rotor-stator system times, and using different homogenization pressures and numbers of high-pressure system recirculations. The MDS and polydispersity index (PI) value of the emulsions were determined using the dynamic light scattering (DLS) method, and large-diameter tail assessments were performed using the light-obscuration/single particle optical sensing (LO/SPOS) method. Using 1000 bar homogenization pressure and seven recirculations, the energy-input parameters related to the rotor-stator system did not affect the final particle size results. When the rotor-stator system energy-input parameters are fixed, homogenization pressure and recirculation affect mean particle size and the large-diameter droplets. Particle size decreases with increasing homogenization pressure from 400 bar to 1300 bar when the number of homogenization recirculations is fixed; when the homogenization pressure is fixed at 1000 bar, both the MDS and the percent of fat droplets exceeding 5 μm (PFAT5) decrease with increasing homogenization recirculations. The MDS dropped to 173 nm after five cycles and maintained this level, while volume-weighted PFAT5 dropped to 0.038% after three cycles, so the "plateau" of the MDS appears later than that of PFAT5, and the optimal particle size is produced when both remain at their plateaus. Excess homogenization recirculation, such as nine passes at 1000 bar, may cause PFAT5 to increase to 0.060% rather than decrease; therefore, the high-pressure homogenization procedure is the key factor affecting the particle size distribution of emulsions. Varying storage conditions (4-25°C) also influenced particle size, especially PFAT5. Copyright © 2015. Published by Elsevier B.V.

  13. Characterizing the size distribution of particles in urban stormwater by use of fixed-point sample-collection methods

    USGS Publications Warehouse

    Selbig, William R.; Bannerman, Roger T.

    2011-01-01

    The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample-collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 μm, respectively), followed by the collector street study area (70 μm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 μm. Finally, the feeder street study area showed the largest median particle size of nearly 200 μm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four out of six source areas was silt and clay particles that are less than 32 μm in size. Distributions of particles ranging up to 500 μm were highly variable both within and between source areas. Results of this study suggest substantial variability in data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent with a fixed-point sampler.

  14. Evaluation of aerial survey methods for Dall's sheep

    USGS Publications Warehouse

    Udevitz, Mark S.; Shults, Brad S.; Adams, Layne G.; Kleckner, Christopher

    2006-01-01

    Most Dall's sheep (Ovis dalli dalli) population-monitoring efforts use intensive aerial surveys with no attempt to estimate variance or adjust for potential sightability bias. We used radiocollared sheep to assess factors that could affect sightability of Dall's sheep in standard fixed-wing and helicopter surveys and to evaluate feasibility of methods that might account for sightability bias. Work was conducted in conjunction with annual aerial surveys of Dall's sheep in the western Baird Mountains, Alaska, USA, in 2000–2003. Overall sightability was relatively high compared with other aerial wildlife surveys, with 88% of the available, marked sheep detected in our fixed-wing surveys. Total counts from helicopter surveys were not consistently larger than counts from fixed-wing surveys of the same units, and detection probabilities did not differ for the 2 aircraft types. Our results suggest that total counts from helicopter surveys cannot be used to obtain reliable estimates of detection probabilities for fixed-wing surveys. Groups containing radiocollared sheep often changed in size and composition before they could be observed by a second crew in units that were double-surveyed. Double-observer methods that require determination of which groups were detected by each observer will be infeasible unless survey procedures can be modified so that groups remain more stable between observations. Mean group sizes increased during our study period, and our logistic regression sightability model indicated that detection probabilities increased with group size. Mark–resight estimates of annual population sizes were similar to sightability-model estimates, and confidence intervals overlapped broadly. We recommend the sightability-model approach as the most effective and feasible of the alternatives we considered for monitoring Dall's sheep populations.

  15. Particles size distribution in diluted magnetic fluids

    NASA Astrophysics Data System (ADS)

    Yerin, Constantine V.

    2017-06-01

    Changes in the particle- and aggregate-size distribution of diluted kerosene-based magnetic fluids are studied by the dynamic light scattering method. It has been found that immediately after dilution, a system of aggregates with sizes ranging from 100 to 250-1000 nm forms in the magnetic fluid. Within 50-100 h after dilution, the large aggregates are peptized and a stationary particle- and aggregate-size distribution is established in the sample.

  16. Dose-Response Analysis of RNA-Seq Profiles in Archival Formalin-Fixed Paraffin-Embedded (FFPE) Samples.

    EPA Science Inventory

    Use of archival resources has been limited to date by inconsistent methods for genomic profiling of degraded RNA from formalin-fixed paraffin-embedded (FFPE) samples. RNA-sequencing offers a promising way to address this problem. Here we evaluated transcriptomic dose responses us...

  17. Nonlinear Resonance and Duffing's Spring Equation

    ERIC Educational Resources Information Center

    Fay, Temple H.

    2006-01-01

    This note discusses the boundary in the frequency--amplitude plane for boundedness of solutions to the forced spring Duffing type equation. For fixed initial conditions and fixed parameter [epsilon] results are reported of a systematic numerical investigation on the global stability of solutions to the initial value problem as the parameters F and…

  18. Aircraft Pitch Control With Fixed Order LQ Compensators

    NASA Technical Reports Server (NTRS)

    Green, James; Ashokkumar, C. R.; Homaifar, Abdollah

    1997-01-01

    This paper considers a given set of fixed order compensators for the aircraft pitch control problem. By augmenting compensator variables to the original state equations of the aircraft, a new dynamic model is considered to seek an LQ controller. While the fixed order compensators can achieve a set of desired poles in a specified region, the LQ formulation provides inherent robustness properties. The time response for ride quality is significantly improved with a set of dynamic compensators.

  20. Fluorine-fixing efficiency on calcium-based briquette: pilot experiment, demonstration and promotion.

    PubMed

    Yang, Jiao-lan; Chen, Dong-qing; Li, Shu-min; Yue, Yin-ling; Jin, Xin; Zhao, Bing-cheng; Ying, Bo

    2010-02-05

    Fluorosis derived from coal burning is a very serious problem in China. By using fluorine-fixing technology during coal burning, the release of fluorides from coal can be reduced at the source, reducing pollution of the surrounding environment by coal-burning pollutants as well as decreasing the intake and accumulation of fluorine in the human body. The aim of this study was to conduct a pilot experiment on the efficiency of calcium-based fluorine-fixing material during coal burning and to demonstrate and promote the technology based on laboratory research. A proper amount of calcium-based fluorine sorbent was added to high-fluorine coal to form briquettes, so that the fluorine in high-fluorine coal is fixed in the coal slag and its release into the atmosphere reduced. We determined the various components in the briquettes and the fluorine in the coal slag, as well as the concentrations of indoor air pollutants, including fluoride, sulfur dioxide and respirable particulate matter (RPM), and evaluated the fluorine-fixing efficiency of the calcium-based fluorine sorbents and the levels of indoor air pollutants. Pilot experiments on fluorine-fixing efficiency during coal burning, as well as its demonstration and promotion, were carried out separately in Guiding and Longli Counties of Guizhou Province, two areas with coal-burning fluorosis problems. When the calcium-based fluorine sorbent mixed coal was made into honeycomb briquettes, the average fluorine-fixing ratio in the pilot experiment was 71.8%. When the calcium-based fluorine-fixing bituminous coal was made into coal balls, the average fluorine-fixing ratio was 77.3%. The indoor air concentrations of fluoride, sulfur dioxide and PM10 decreased significantly. There was a 10% increase in the cost of briquettes due to the addition of the calcium-based fluorine sorbent. The preparation process of the calcium-based fluorine-fixing briquette is simple, the briquettes are highly flammable, and the method is applicable to regions with abundant bituminous coal. As a small-scale application, villagers may make fluorine-fixing coal balls or briquettes themselves, achieving optimum fluorine-fixing efficiency and reducing indoor air pollutants, providing environmental and social benefits.

  1. On the complexity and approximability of some Euclidean optimal summing problems

    NASA Astrophysics Data System (ADS)

    Eremeev, A. V.; Kel'manov, A. V.; Pyatkin, A. V.

    2016-10-01

    The complexity status of several well-known discrete optimization problems with the direction of optimization switching from maximum to minimum is analyzed. The task is to find a subset of a finite set of Euclidean points (vectors). In these problems, the objective functions depend either only on the norm of the sum of the elements from the subset or on this norm and the cardinality of the subset. It is proved that, if the dimension of the space is a part of the input, then all these problems are strongly NP-hard. Additionally, it is shown that, if the space dimension is fixed, then all the problems are NP-hard even for dimension 2 (on a plane) and there are no approximation algorithms with a guaranteed accuracy bound for them unless P = NP. It is shown that, if the coordinates of the input points are integer, then all the problems can be solved in pseudopolynomial time in the case of a fixed space dimension.
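
    A hedged illustration of the pseudopolynomial claim (our sketch, not the authors' algorithm; the norm-maximizing subset-sum variant and the data are invented): with integer coordinates bounded by B and fixed dimension 2, at most O((nB)^2) distinct subset sums are reachable, so a sweep over them runs in pseudopolynomial time.

      import math

      # Maximize the Euclidean norm of a subset sum of integer points in the
      # plane; the reachable-sum set is confined to the integer grid, which is
      # what makes this pseudopolynomial for fixed dimension.
      def max_norm_subset_sum(points):
          reachable = {(0, 0)}              # sums achievable by subsets so far
          for (x, y) in points:
              reachable |= {(sx + x, sy + y) for (sx, sy) in reachable}
          return max(reachable, key=lambda s: math.hypot(*s))

      print(max_norm_subset_sum([(3, -1), (-2, 4), (1, 1)]))   # -> (-1, 5)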

  2. 76 FR 57677 - Defense Federal Acquisition Regulation Supplement; Increase the Use of Fixed-Price Incentive...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-16

    ... Under Secretary of Defense for Acquisition, Technology, & Logistics (USD(AT&L)), dated November 3, 2010... cost, share lines, and ceiling price. This regulation is not a ``one-size-fits-all'' mandate. However.../optimistic weighted average and ensure that their cost curves do not mirror cost-plus-fixed-fee cost curves...

  3. Bias and Precision of Measures of Association for a Fixed-Effect Multivariate Analysis of Variance Model

    ERIC Educational Resources Information Center

    Kim, Soyoung; Olejnik, Stephen

    2005-01-01

    The sampling distributions of five popular measures of association with and without two bias adjusting methods were examined for the single factor fixed-effects multivariate analysis of variance model. The number of groups, sample sizes, number of outcomes, and the strength of association were manipulated. The results indicate that all five…

  4. The Thinnest Path Problem

    DTIC Science & Technology

    2016-07-22

    their corresponding transmission powers. At first glance, one may wonder whether the thinnest path problem is simply a shortest path problem with the...nature of the shortest path problem. Another aspect that complicates the problem is the choice of the transmission power at each node (within a maximum...fixed transmission power at each node (in this case, the resulting hypergraph degenerates to a standard graph), the thinnest path problem is NP

  5. Selection of floating-point or fixed-point for adaptive noise canceller in somatosensory evoked potential measurement.

    PubMed

    Shen, Chongfei; Liu, Hongtao; Xie, Xb; Luk, Keith Dk; Hu, Yong

    2007-01-01

    The adaptive noise canceller (ANC) has been used to improve the signal-to-noise ratio (SNR) of the somatosensory evoked potential (SEP). For efficient hardware implementation, a fixed-point ANC allows fast, cost-efficient construction and low power consumption in an FPGA design. However, it remains questionable whether the SNR improvement achieved by a fixed-point algorithm is as good as that achieved by a floating-point algorithm. This study compares the outputs of floating-point and fixed-point ANC algorithms applied to SEP signals. The selection of the step-size parameter (µ) was found to differ between the fixed-point and floating-point algorithms. In this simulation study, the outputs of the fixed-point ANC showed higher distortion from the real SEP signals than those of the floating-point ANC. However, the difference decreased with increasing µ. With an optimal selection of µ, the fixed-point ANC can achieve results as good as the floating-point algorithm.
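
    As a hedged sketch of the comparison the study performs (the filter length, step size, and signals are illustrative placeholders, not the authors' configuration), the same LMS noise canceller can be run in floating point and in a crude Q15 fixed-point arithmetic and the cleaned outputs compared:

      import numpy as np

      Q = 15                                       # Q15: values scaled by 2**15
      def to_fix(x): return np.round(np.asarray(x) * 2**Q).astype(np.int64)

      def lms_float(d, x, taps=8, mu=0.01):
          w, out = np.zeros(taps), np.zeros(len(d))
          for n in range(taps, len(d)):
              u = x[n - taps:n][::-1]
              e = d[n] - w @ u                     # error = primary - filter out
              w = w + 2 * mu * e * u               # LMS weight update
              out[n] = e
          return out

      def lms_fixed(d, x, taps=8, mu=0.01):
          w = np.zeros(taps, dtype=np.int64)
          df, xf, muf = to_fix(d), to_fix(x), to_fix(mu)
          out = np.zeros(len(d))
          for n in range(taps, len(d)):
              u = xf[n - taps:n][::-1]
              e = df[n] - ((w @ u) >> Q)           # Q15 multiply-accumulate
              w = w + ((((2 * muf * e) >> Q) * u) >> Q)
              out[n] = e / 2**Q
          return out

      rng = np.random.default_rng(0)
      t = np.arange(2000)
      noise = rng.normal(0, 1, t.size)
      d = np.sin(2 * np.pi * t / 50) + 0.5 * noise # primary: SEP stand-in + noise
      x = noise                                    # reference: noise only
      diff = lms_float(d, x) - lms_fixed(d, x)
      print("mean |float - fixed| difference:", np.abs(diff).mean())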

  6. [Carbon sequestration in soil particle-sized fractions during reversion of desertification at Mu Us Sand Land].

    PubMed

    Ma, Jian Ye; Tong, Xiao Gang; Li, Zhan Bin; Fu, Guang Jun; Li, Jiao; Hasier

    2016-11-18

    The aim of this study was to investigate the effects of carbon sequestration in soil particle-sized fractions during reversion of desertification at Mu Us Sand Land. Soil samples were collected from quicksand land and from semi-fixed and fixed sand lands that had been established with shrubs for 20-55 years and with arbors for 20-50 years at the sand control region of Yulin in northern Shaanxi Province. The dynamics and sequestration rate of soil organic carbon (SOC) associated with sand, silt and clay were measured by a physical fractionation method. The results indicated that, compared with the quicksand area, the carbon content in total SOC and in all soil particle-sized fractions at both sand-fixing forest lands showed a significant increasing trend, and the maximum carbon content was observed in the top layer of soils. From quicksand to fixed sand land with 55-year-old shrubs and 50-year-old arbors, the annual sequestration rate of carbon stock in the 0-5 cm soil depth was the same in silt, at 0.05 Mg·hm^-2·a^-1. The increase rate of carbon sequestration in sand was 0.05 and 0.08 Mg·hm^-2·a^-1, and in clay 0.02 and 0.03 Mg·hm^-2·a^-1, at the shrub and arbor lands, respectively. The increase rate of carbon sequestration in the 0-20 cm soil layer for all soil particles was on average 2.1 times that of the 0-5 cm layer. At these annual rates, the carbon stocks in sand, silt and clay at the two fixed sand lands increased by 6.7, 18.1 and 4.4 times after 50-55 years of reversion from quicksand to fixed sand. In addition, the average contributions of the different particles to the accumulation of total SOC in the 0-20 cm soil were in the order silt carbon (39.7%) ≈ sand carbon (34.6%) > clay carbon (25.6%). Generally, the soil particle-sized fractions had great carbon sequestration potential during reversion of desertification in the Mu Us Sand Land, and silt and sand were the main fractions for carbon sequestration at both fixed sand lands.

  7. Adaptive fixed-time trajectory tracking control of a stratospheric airship.

    PubMed

    Zheng, Zewei; Feroskhan, Mir; Sun, Liang

    2018-05-01

    This paper addresses the fixed-time trajectory tracking control problem of a stratospheric airship. By extending the method of adding a power integrator to a novel adaptive fixed-time control method, the convergence of a stratospheric airship to its reference trajectory is guaranteed to be achieved within a fixed time. The control algorithm is first formulated without consideration of external disturbances, to establish the stability of the closed-loop system in fixed time and to demonstrate that the convergence time of the airship is essentially independent of its initial conditions. Subsequently, a smooth adaptive law is incorporated into the proposed fixed-time control framework to provide the system with robustness to external disturbances. Theoretical analyses demonstrate that under the adaptive fixed-time controller, the tracking errors converge towards a residual set in fixed time. The results of a comparative simulation study with other recent methods illustrate the remarkable performance and superiority of the proposed control method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Application of Decomposition to Transportation Network Analysis

    DOT National Transportation Integrated Search

    1976-10-01

    This document reports preliminary results of five potential applications of the decomposition techniques from mathematical programming to transportation network problems. The five application areas are (1) the traffic assignment problem with fixed de...

  9. Post-stratified estimation: with-in strata and total sample size recommendations

    Treesearch

    James A. Westfall; Paul L. Patterson; John W. Coulston

    2011-01-01

    Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...
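
    As a worked toy example of the estimator in question (the strata, weights, and values are made up), the post-stratified mean weights each within-stratum sample mean by its known population proportion; because units fall into strata only after sampling, the within-stratum sizes are random and can be very small, which is the instability the recommendations address.

      # W[h]: known population proportion of stratum h (from, e.g., a map);
      # samples are classified after selection, so n_h is random and small.
      W = {"forest": 0.6, "nonforest": 0.4}
      samples = {"forest": [3.1, 2.7, 3.5, 2.9], "nonforest": [0.4, 0.6]}

      ps_mean = sum(W[h] * sum(y) / len(y) for h, y in samples.items())
      print(ps_mean)   # 0.6 * 3.05 + 0.4 * 0.5 = 2.03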

  10. 46 CFR 108.437 - Pipe sizes and discharge rates for enclosed ventilation systems for rotating electrical equipment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Pipe sizes and discharge rates for enclosed ventilation systems for rotating electrical equipment. 108.437 Section 108.437 Shipping COAST GUARD, DEPARTMENT OF... Systems Fixed Carbon Dioxide Fire Extinguishing Systems § 108.437 Pipe sizes and discharge rates for...

  11. The Effect of Structural Curvings on the Stress Distribution in a Rigidly Fixed Composite Plate under Forced Vibration

    NASA Astrophysics Data System (ADS)

    Zamanov, A. D.

    2002-01-01

    Based on the exact three-dimensional equations of continuum mechanics and the Akbarov-Guz' continuum theory, the problem of forced vibrations of a rectangular plate made of a composite material with a periodically curved structure is formulated. The plate is rigidly fixed along the Ox1 axis. Using the semi-analytic finite element method, a numerical procedure is elaborated for investigating this problem. The numerical results on the effect of structural curvings on the stress distribution in the plate under forced vibrations are analyzed. It is shown that the disturbances of the stress σ22 in a hinge-supported plate are greater than in a rigidly fixed one. It is also found that the structural curvings considerably affect the stress distribution in plates under both static and dynamic loading.

  12. The systemic exposure to inhaled beclometasone/formoterol pMDI with valved holding chamber is independent of age and body size.

    PubMed

    Govoni, Mirco; Piccinno, Annalisa; Lucci, Germano; Poli, Gianluigi; Acerbi, Daniela; Baronio, Roberta; Singh, Dave; Kuna, Piotr; Chawes, Bo L K; Bisgaard, Hans

    2015-02-01

    Asthma guidelines recommend prescription of inhaled corticosteroids at a reduced dosage in children compared to older patients in order to minimize the systemic exposure and risk of unwanted side effects. In children, pressurized metered dose inhalers (pMDI) are recommended in combination with a valved holding chamber (VHC) to overcome the problem of coordinating inhalation with actuation. However, the influence of age and body size on the systemic exposure of drugs to be administered via a pMDI with VHC is still not fully elucidated. Therefore, we aimed to compare the systemic exposure to the active ingredients of a fixed combination of beclometasone-dipropionate/formoterol-fumarate administered via pMDI with VHC in children, adolescents and adults. The pharmacokinetics of formoterol and beclometasone-17-monopropionate (active metabolite of beclometasone-dipropionate) was evaluated over 8 h from three studies, each performed in a different age and body size group. Children (7-11 years, n = 20), adolescents (12-17 years, n = 29) and adults (≥18 years, n = 24) received a single dose of beclometasone/formoterol (children: 200 μg/24 μg, adolescents and adults: 400 μg/24 μg) via pMDI with AeroChamber Plus™. The systemic exposure in children in comparison to adolescents was equivalent for formoterol while it was halved for beclometasone-17-monopropionate in accordance with the halved dose of beclometasone administered in children (90% CIs within 0.8-1.25 for formoterol and 0.4-0.625 for beclometasone-17-monopropionate). The systemic exposure to beclometasone-17-monopropionate and formoterol was equivalent between adolescents and adults. The systemic exposure to the active ingredients of a fixed dose combination of beclometasone/formoterol administered via pMDI with AeroChamber Plus™ correlates with the nominal dose independently of patient age and body size. Thus, dose reduction in relation to age when using a pMDI with VHC may be unnecessary for reducing the systemic exposure in children. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Slot angle detecting method for fiber fixed chip

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaquan; Wang, Jiliang; Zhou, Chaochao

    2018-04-01

    The slot angle of a fiber fixed chip has a significant impact on the performance of photoelectric devices. To solve this practical engineering problem, this paper puts forward a detection method based on image processing. Because the images have very low contrast and are hard to segment, image segmentation methods based on edge characteristics are proposed. The chip edge line slope k2 is then extracted and the fiber fixed slot line slope k1 is calculated, from which the slot angle is obtained. Lastly, tests of the repeatability and accuracy of the system show that the method is fast and robust, and it satisfies the practical requirements of fiber fixed chip slot angle detection.
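
    The angle computation itself reduces to the standard formula for the angle between two lines with slopes k1 and k2; a minimal sketch (the example slopes are invented, and the perpendicular case 1 + k1*k2 = 0 is ignored):

      import math

      def slot_angle_deg(k1, k2):
          # Angle between the fiber slot line (slope k1) and the chip edge
          # line (slope k2): tan(theta) = |(k1 - k2) / (1 + k1 * k2)|.
          return math.degrees(math.atan(abs((k1 - k2) / (1.0 + k1 * k2))))

      print(slot_angle_deg(0.02, -0.01))   # ~1.72 degrees, nearly parallel lines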

  14. The Fourier Imaging X-ray Spectrometer (FIXS) for the Argentinian, Scout-launched Satélite de Aplicaciones Científicas-1 (SAC-1)

    NASA Technical Reports Server (NTRS)

    Dennis, Brian R.; Crannell, Carol JO; Desai, Upendra D.; Orwig, Larry E.; Kiplinger, Alan L.; Schwartz, Richard A.; Hurford, Gordon J.; Emslie, A. Gordon; Machado, Marcos; Wood, Kent

    1988-01-01

    The Fourier Imaging X-ray Spectrometer (FIXS) is one of four instruments on SAC-1, the Argentinian satellite being proposed for launch by NASA on a Scout rocket in 1992/3. The FIXS is designed to provide solar flare images at X-ray energies between 5 and 35 keV. Observations will be made on arcsecond size scales and subsecond time scales of the processes that modify the electron spectrum and the thermal distribution in flaring magnetic structures.

  15. Formaldehyde substitute fixatives: effects on nucleic acid preservation.

    PubMed

    Moelans, Cathy B; Oostenrijk, Daphne; Moons, Michiel J; van Diest, Paul J

    2011-11-01

    In surgical pathology, formalin-fixed paraffin-embedded tissues are increasingly being used as a source of DNA and RNA for molecular assays in addition to histopathological evaluation. However, the commonly used formalin fixative is carcinogenic, and its crosslinking impairs DNA and RNA quality. The suitability of three new presumably less toxic, crosslinking (F-Solv) and non-crosslinking (FineFIX, RCL2) alcohol-based fixatives was tested for routine molecular pathology in comparison with neutral buffered formalin (NBF) as gold standard. Size ladder PCR, epidermal growth factor receptor sequence analysis, microsatellite instability (MSI), chromogenic (CISH), fluorescence in situ hybridisation (FISH) and qPCR were performed. The alcohol-based non-crosslinking fixatives (FineFIX and RCL2) resulted in a higher DNA yield and quality compared with crosslinking fixatives (NBF and F-Solv). Size ladder PCR resulted in a shorter amplicon size (300 bp) for both crosslinking fixatives compared with the non-crosslinking fixatives (400 bp). All four fixatives were directly applicable for MSI and epidermal growth factor receptor sequence analysis. All fixatives except F-Solv showed clear signals in CISH and FISH. RNA yield and quality were superior after non-crosslinking fixation. qPCR resulted in lower Ct values for RCL2 and FineFIX. The alcohol-based non-crosslinking fixatives performed better than crosslinking fixatives with regard to DNA and RNA yield, quality and applicability in molecular diagnostics. Given the higher yield, less starting material may be necessary, thereby increasing the applicability of biopsies for molecular studies.

  16. Water ring-bouncing on repellent singularities.

    PubMed

    Chantelot, Pierre; Mazloomi Moqaddam, Ali; Gauthier, Anaïs; Chikatamarla, Shyam S; Clanet, Christophe; Karlin, Ilya V; Quéré, David

    2018-03-28

    Texturing a flat superhydrophobic substrate with point-like superhydrophobic macrotextures of the same repellency makes impacting water droplets take off as rings, which leads to shorter bouncing times than on a flat substrate. We investigate the contact time reduction on such elementary macrotextures through experiment and simulations. We understand the observations by decomposing the impacting drop reshaped by the defect into sub-units (or blobs) whose size is fixed by the liquid ring width. We test the blob picture by looking at the reduction of contact time for off-centered impacts and for impacts in grooves that produce liquid ribbons where the blob size is fixed by the width of the channel.

  17. Method for correcting imperfections on a surface

    DOEpatents

    Sweatt, William C.; Weed, John W.

    1999-09-07

    A process for producing near perfect optical surfaces. A previously polished optical surface is measured to determine its deviations from the desired perfect surface. A multi-aperture mask is designed based on this measurement and fabricated such that deposition through the mask will correct the deviations in the surface to an acceptable level. Various mask geometries can be used: variable individual aperture sizes using a fixed grid for the apertures or fixed aperture sizes using a variable aperture spacing. The imperfections are filled in using a vacuum deposition process with a very thin thickness of material such as silicon monoxide to produce an amorphous surface that bonds well to a glass substrate.

  18. Technique for fixing a temporalis muscle using a titanium plate to the implanted hydroxyapatite ceramics for bone defects.

    PubMed

    Ono, I; Tateshita, T; Sasaki, T; Matsumoto, M; Kodama, N

    2001-05-01

    We devised a technique to fix the temporalis muscle to the transplanted hydroxyapatite implant by using a titanium plate, which is fixed to the hydroxyapatite ceramic implant by screws and achieves good clinical results. The size, shape, and curvature of the hydroxyapatite ceramic implants were determined according to full-scale models fabricated using the laser lithographic modeling method from computed tomography data. A titanium plate was then fixed with screws on the implant before implantation, and then the temporalis muscle was refixed to the holes at both ends of the plate. The application of this technique reduced the hospitalization time and achieved good results esthetically.

  19. Fixing Stellarator Magnetic Surfaces

    NASA Astrophysics Data System (ADS)

    Hanson, James D.

    1999-11-01

    Magnetic surfaces are a perennial issue for stellarators. The design heuristic of finding a magnetic field with zero perpendicular component on a specified outer surface often yields inner magnetic surfaces with very small resonant islands. However, magnetic fields in the laboratory are not design fields. Island-causing errors can arise from coil placement errors, stray external fields, and design inadequacies such as ignoring coil leads and incomplete characterization of current distributions within the coil pack. The problem addressed is how to eliminate such error-caused islands. I take a perturbation approach, where the zero order field is assumed to have good magnetic surfaces, and comes from a VMEC equilibrium. The perturbation field consists of error and correction pieces. The error correction method is to determine the correction field so that the sum of the error and correction fields gives zero island size at specified rational surfaces. It is particularly important to correctly calculate the island size for a given perturbation field. The method works well with many correction knobs, and a Singular Value Decomposition (SVD) technique is used to determine minimal corrections necessary to eliminate islands.
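
    A hedged sketch of the linear-algebra core of such a correction scheme (the sensitivity matrix, its size, and the data are invented for illustration): if island sizes respond approximately linearly to the correction knobs, the SVD-based pseudoinverse gives the minimal-norm knob settings that cancel the error-field islands.

      import numpy as np

      # A[i, j]: linearized island size at rational surface i per unit of
      # correction knob j; b: island sizes produced by the error field.
      rng = np.random.default_rng(1)
      A = rng.normal(size=(3, 6))        # 3 resonant surfaces, 6 knobs
      b = rng.normal(size=3)

      x = -np.linalg.pinv(A) @ b         # minimal-norm solution of A x = -b
      print(np.allclose(A @ x, -b))      # islands cancelled in the linear model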

  20. Global nitrogen overload problem grows critical

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moffat, A.S.

    1998-02-13

    This article discusses a global problem due to man's intervention in the biosphere resulting from the increased production and use of products that generate nitrogen compounds which can be fixed in ecosystems. This problem was recognized on small scales even in the 1960s, but recent studies on a more global scale show that the amount of nitrogen compounds in river runoff is strongly related to the use of synthetic fertilizers, fossil-fuel power plants, and automobile emissions. The increased fixed nitrogen load is exceeding the ability of some ecosystems to use or break the compounds down, resulting in a change in the types of flora and fauna found to inhabit the ecosystems and leading to decreased biodiversity.

  1. Does food insecurity affect parental characteristics and child behavior? Testing mediation effects.

    PubMed

    Huang, Jin; Oshima, Karen M Matta; Kim, Youngmi

    2010-01-01

    Using two waves of data from the Child Development Supplement in the Panel Study of Income Dynamics, this study investigates whether parental characteristics (parenting stress, parental warmth, psychological distress, and parent's self-esteem) mediate household food insecurity's relations with child behavior problems. Fixed-effects analyses examine data from a low-income sample of 416 children from 249 households. This study finds that parenting stress mediates the effects of food insecurity on child behavior problems. However, two robustness tests produce different results from those of the fixed-effects models. This inconsistency suggests that household food insecurity's relations to the two types of child behavior problems need to be investigated further with a different methodology and other measures.
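
    A minimal sketch of the within (demeaning) fixed-effects estimator used in such analyses, on simulated two-wave data (the variable names and data-generating process are illustrative, not the study's specification):

      import numpy as np

      rng = np.random.default_rng(0)
      n, waves = 200, 2
      alpha = rng.normal(size=n)                        # unobserved fixed effects
      x = rng.normal(size=(n, waves)) + alpha[:, None]  # regressor correlated with alpha
      y = 0.5 * x + alpha[:, None] + rng.normal(scale=0.1, size=(n, waves))

      xd = x - x.mean(axis=1, keepdims=True)            # demean within child:
      yd = y - y.mean(axis=1, keepdims=True)            # removes alpha entirely
      beta = (xd * yd).sum() / (xd ** 2).sum()
      print(round(beta, 3))                             # close to the true 0.5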

  2. Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin, E-mail: nzcho@kaist.ac.kr

    2015-12-31

    The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used and a linear approximation of fission source distributions during a macro-time step is introduced to provide the delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Then, the results on a continuous-energy problem are presented.

  3. Optimal trajectories for the Aeroassisted Flight Experiment. Part 1: Equations of motion in an Earth-fixed system

    NASA Technical Reports Server (NTRS)

    Miele, A.; Zhao, Z. G.; Lee, W. Y.

    1989-01-01

    The determination of optimal trajectories for the aeroassisted flight experiment (AFE) is discussed. The AFE refers to the study of the free flight of an autonomous spacecraft, shuttle-launched and shuttle-recovered. Its purpose is to gather atmospheric entry environmental data for use in designing aeroassisted orbital transfer vehicles (AOTV). It is assumed that: (1) the spacecraft is a particle of constant mass; (2) the Earth is rotating with constant angular velocity; (3) the Earth is an oblate planet, and the gravitational potential depends on both the radial distance and the latitude (harmonics of order higher than four are ignored); and (4) the atmosphere is at rest with respect to the Earth. Under these assumptions, the equations of motion for hypervelocity atmospheric flight (which can be used not only for AFE problems, but also for AOT problems and space shuttle problems) are derived in an Earth-fixed system. Transformation relations are supplied which allow one to pass from quantities computed in an Earth-fixed system to quantities computed in an inertial system, and vice versa.
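
    The frame transformation referred to here follows the standard kinematics of a frame rotating at constant angular velocity about the polar axis; a minimal sketch (the axis convention and sample state are illustrative, not the paper's full equation set):

      import numpy as np

      OMEGA_E = 7.2921159e-5          # Earth rotation rate, rad/s

      def rz(a):
          c, s = np.cos(a), np.sin(a)
          return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

      def earth_fixed_to_inertial(r_e, v_e, t):
          # r_I = R_z(wt) r_E ;  v_I = R_z(wt) (v_E + w x r_E)
          w = np.array([0.0, 0.0, OMEGA_E])
          R = rz(OMEGA_E * t)
          return R @ r_e, R @ (v_e + np.cross(w, r_e))

      r_i, v_i = earth_fixed_to_inertial(np.array([6.6e6, 0.0, 0.0]),
                                         np.array([0.0, 7.5e3, 0.0]), t=100.0)
      print(v_i)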

  4. Cyclic public goods games: Compensated coexistence among mutual cheaters stabilized by optimized penalty taxation

    NASA Astrophysics Data System (ADS)

    Griffin, Christopher; Belmonte, Andrew

    2017-05-01

    We study the problem of stabilized coexistence in a three-species public goods game in which each species simultaneously contributes to one public good while freeloading off another public good ("cheating"). The proportional population growth is governed by an appropriately modified replicator equation, depending on the returns from the public goods and the cost. We show that the replicator dynamic has at most one interior unstable fixed point and that the population becomes dominated by a single species. We then show that by applying an externally imposed penalty, or "tax" on success can stabilize the interior fixed point, allowing for the symbiotic coexistence of all species. We show that the interior fixed point is the point of globally minimal total population growth in both the taxed and untaxed cases. We then formulate an optimal taxation problem and show that it admits a quasilinearization, resulting in novel necessary conditions for the optimal control. In particular, the optimal control problem governing the tax rate must solve a certain second-order ordinary differential equation.
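
    For readers unfamiliar with the underlying dynamics, a generic replicator-equation integration is sketched below (the cyclic payoff matrix is a placeholder, not the paper's public-goods-with-tax model; for this payoff choice the interior fixed point happens to be stable):

      import numpy as np

      # Replicator dynamics for three species: dx_i/dt = x_i ((A x)_i - x.A x).
      A = np.array([[ 0.0, -1.0,  2.0],
                    [ 2.0,  0.0, -1.0],
                    [-1.0,  2.0,  0.0]])   # cyclic contribute/freeload structure
      x = np.array([0.5, 0.3, 0.2])
      dt = 0.01
      for _ in range(20000):
          f = A @ x                         # per-species fitness
          x += dt * x * (f - x @ f)         # Euler step; preserves sum(x) = 1
      print(x)                              # spirals in toward (1/3, 1/3, 1/3)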

  5. Cyclic public goods games: Compensated coexistence among mutual cheaters stabilized by optimized penalty taxation.

    PubMed

    Griffin, Christopher; Belmonte, Andrew

    2017-05-01

    We study the problem of stabilized coexistence in a three-species public goods game in which each species simultaneously contributes to one public good while freeloading off another public good ("cheating"). The proportional population growth is governed by an appropriately modified replicator equation, depending on the returns from the public goods and the cost. We show that the replicator dynamic has at most one interior unstable fixed point and that the population becomes dominated by a single species. We then show that by applying an externally imposed penalty, or "tax" on success can stabilize the interior fixed point, allowing for the symbiotic coexistence of all species. We show that the interior fixed point is the point of globally minimal total population growth in both the taxed and untaxed cases. We then formulate an optimal taxation problem and show that it admits a quasilinearization, resulting in novel necessary conditions for the optimal control. In particular, the optimal control problem governing the tax rate must solve a certain second-order ordinary differential equation.

  6. SPH for impact force and ricochet behavior of water-entry bodies

    NASA Astrophysics Data System (ADS)

    Omidvar, Pourya; Farghadani, Omid; Nikeghbali, Pooyan

    The numerical modeling of fluid interaction with a bouncing body has many applications in science and engineering. In this paper, the water impact of a body on a free surface is investigated, where the fixed ghost boundary condition is added to the open source code SPHysics2D to rectify the oscillations in pressure distributions obtained with the repulsive boundary condition. First, after introducing the SPH methodology and the boundary condition options, the still water problem is simulated using the two types of boundary conditions. It is shown that the fixed ghost boundary condition gives a better result for the hydrostatic pressure. Then the dam-break problem, a benchmark test case in SPH, is simulated and compared with available data. To show the behavior of the hydrostatic forces on bodies, fixed and floating cylinders are placed on the free surface, and the force and heave profiles are examined. Finally, the impact of a body on the free surface is successfully simulated for different impact angles and velocities.

  7. Parameterized Algorithmics for Finding Exact Solutions of NP-Hard Biological Problems.

    PubMed

    Hüffner, Falk; Komusiewicz, Christian; Niedermeier, Rolf; Wernicke, Sebastian

    2017-01-01

    Fixed-parameter algorithms are designed to efficiently find optimal solutions to some computationally hard (NP-hard) problems by identifying and exploiting "small" problem-specific parameters. We survey practical techniques to develop such algorithms. Each technique is introduced and supported by case studies of applications to biological problems, with additional pointers to experimental results.
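
    A standard textbook instance of the technique (our illustration, not drawn from the survey) is the bounded search tree for k-Vertex Cover: some endpoint of any edge must be in the cover, so branching on both endpoints to depth at most k gives an O(2^k m)-time algorithm.

      def vertex_cover(edges, k):
          # Return a vertex cover of size <= k, or None if none exists.
          if not edges:
              return set()
          if k == 0:
              return None                   # budget exhausted, edges remain
          u, v = edges[0]
          for pick in (u, v):               # branch on the two endpoints
              rest = [e for e in edges if pick not in e]
              sub = vertex_cover(rest, k - 1)
              if sub is not None:
                  return sub | {pick}
          return None

      print(vertex_cover([(1, 2), (2, 3), (3, 4), (4, 1)], 2))   # {1, 3}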

  8. Introduction to the IWA task group on biofilm modeling.

    PubMed

    Noguera, D R; Morgenroth, E

    2004-01-01

    An International Water Association (IWA) Task Group on Biofilm Modeling was created with the purpose of comparatively evaluating different biofilm modeling approaches. The task group developed three benchmark problems for this comparison, and used a diversity of modeling techniques that included analytical, pseudo-analytical, and numerical solutions to the biofilm problems. Models in one, two, and three dimensional domains were also compared. The first benchmark problem (BM1) described a monospecies biofilm growing in a completely mixed reactor environment and had the purpose of comparing the ability of the models to predict substrate fluxes and concentrations for a biofilm system of fixed total biomass and fixed biomass density. The second problem (BM2) represented a situation in which substrate mass transport by convection was influenced by the hydrodynamic conditions of the liquid in contact with the biofilm. The third problem (BM3) was designed to compare the ability of the models to simulate multispecies and multisubstrate biofilms. These three benchmark problems allowed identification of the specific advantages and disadvantages of each modeling approach. A detailed presentation of the comparative analyses for each problem is provided elsewhere in these proceedings.

  9. Thermodynamics fundamentals of energy conversion

    NASA Astrophysics Data System (ADS)

    Dan, Nicolae

    The work reported in the chapters 1-5 focuses on the fundamentals of heat transfer, fluid dynamics, thermodynamics and electrical phenomena related to the conversion of one form of energy to another. Chapter 6 is a re-examination of the fundamental heat transfer problem of how to connect a finite-size heat generating volume to a concentrated sink. Chapter 1 extends to electrical machines the combined thermodynamics and heat transfer optimization approach that has been developed for heat engines. The conversion efficiency at maximum power is 1/2. When, as in specific applications, the operating temperature of windings must not exceed a specified level, the power output is lower and efficiency higher. Chapter 2 addresses the fundamental problem of determining the optimal history (regime of operation) of a battery so that the work output is maximum. Chapters 3 and 4 report the energy conversion aspects of an expanding mixture of hot particles, steam and liquid water. At the elemental level, steam annuli develop around the spherical drops as time increases. At the mixture level, the density decreases while the pressure and velocity increases. Chapter 4 describes numerically, based on the finite element method, the time evolution of the expanding mixture of hot spherical particles, steam and water. The fluid particles are moved in time in a Lagrangian manner to simulate the change of the domain configuration. Chapter 5 describes the process of thermal interaction between the molten material and water. In the second part of the chapter the model accounts for the irreversibility due to the flow of the mixture through the cracks of the mixing vessel. The approach presented in this chapter is based on exergy analysis and represents a departure from the line of inquiry that was followed in chapters 3-4. Chapter 6 shows that the geometry of the heat flow path between a volume and one point can be optimized in two fundamentally different ways. In the "growth" method the structure is optimized starting from the smallest volume element of fixed size. In "design" method the overall volume is fixed, and the designer works "inward" by increasing the internal complexity of the paths for heat flow.

  11. Marginal adaptation of mineral trioxide aggregate (MTA) compared with amalgam as a root-end filling material: a low-vacuum (LV) versus high-vacuum (HV) SEM study.

    PubMed

    Shipper, G; Grossman, E S; Botha, A J; Cleaton-Jones, P E

    2004-05-01

    To compare the marginal adaptation of mineral trioxide aggregate (MTA) or amalgam root-end fillings in extracted teeth under low-vacuum (LV) versus high-vacuum (HV) scanning electron microscope (SEM) viewing conditions. Root-end fillings were placed in 20 extracted single-rooted maxillary teeth. Ten root ends were filled with MTA and the other 10 root ends were filled with amalgam. Two 1 mm thick transverse sections of each root-end filling were cut 0.50 mm (top) and 1.50 mm (bottom) from the apex. Gap size was recorded at eight fixed points along the dentine-filling material interface on each section when uncoated wet (LV wet (LVW)) and dry under LV (0.3 Torr) in a JEOL JSM-5800 SEM and backscatter emission (LV dry uncoated (LVDU)). The sections were then air-dried, gold-coated and gap size was recorded once again at the fixed points under HV (10(-6) Torr; HV dry coated (HVDC)). Specimen cracking, and the size and extent of the crack were noted. Gap sizes at fixed points were smallest under LVW and largest under HVDC SEM conditions. Gaps were smallest in MTA root-end fillings. A General Linear Models Analysis, with gap size as the dependent variable, showed significant effects for extent of crack in dentine, material and viewing condition (P = 0.0001). This study showed that MTA produced a superior marginal adaptation to amalgam, and that LVW conditions showed the lowest gap size. Gap size was influenced by the method of SEM viewing. If only HV SEM viewing conditions are used for MTA and amalgam root-end fillings, a correction factor of 3.5 and 2.2, respectively, may be used to enable relative comparisons of gap size to LVW conditions.

  12. Nonparametric estimation and testing of fixed effects panel data models

    PubMed Central

    Henderson, Daniel J.; Carroll, Raymond J.; Li, Qi

    2009-01-01

    In this paper we consider the problem of estimating nonparametric panel data models with fixed effects. We introduce an iterative nonparametric kernel estimator. We also extend the estimation method to the case of a semiparametric partially linear fixed effects model. To determine whether a parametric, semiparametric or nonparametric model is appropriate, we propose test statistics to test between the three alternatives in practice. We further propose a test statistic for testing the null hypothesis of random effects against fixed effects in a nonparametric panel data regression model. Simulations are used to examine the finite sample performance of the proposed estimators and the test statistics. PMID:19444335

  13. Adaptive fixed-time control for cluster synchronisation of coupled complex networks with uncertain disturbances

    NASA Astrophysics Data System (ADS)

    Jiang, Shengqin; Lu, Xiaobo; Cai, Guoliang; Cai, Shuiming

    2017-12-01

    This paper focuses on the cluster synchronisation problem of coupled complex networks with uncertain disturbances under an adaptive fixed-time control strategy. To begin with, complex dynamical networks with community structure that are subject to uncertain disturbances are taken into account. Then, a novel adaptive control strategy combined with fixed-time techniques is proposed to guarantee that the nodes in the communities reach the desired states within a settling time. In addition, the stability of the complex error systems is theoretically proved based on the Lyapunov stability theorem. At last, two examples are presented to verify the effectiveness of the proposed adaptive fixed-time control.

  14. Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness

    NASA Astrophysics Data System (ADS)

    Kusuma, K. K.; Maruf, A.

    2016-02-01

    Scheduling problems with non-identical machines, low utilization, and fixed delivery times are frequent in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identical machines in a job shop environment. The model is an integer linear programming model solved with a branch-and-bound algorithm. Fixed delivery times are the main constraint, and jobs have different processing times. The results of the proposed model show that the utilization of production machines can be increased with minimal tardiness when fixed delivery times are used as the constraint.
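
    To make the tardiness objective concrete, here is a deliberately tiny single-machine brute force (the jobs and times are invented; the paper's actual model is an integer linear program over non-identical machines solved by branch and bound):

      from itertools import permutations

      jobs = {"A": (4, 6), "B": (2, 5), "C": (3, 12)}   # job: (processing, due)

      def total_tardiness(order):
          t, tardy = 0, 0
          for j in order:
              p, due = jobs[j]
              t += p                         # completion time of job j
              tardy += max(0, t - due)       # lateness past the fixed delivery
          return tardy

      best = min(permutations(jobs), key=total_tardiness)
      print(best, total_tardiness(best))     # ('B', 'A', 'C') 0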

  15. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging, because it is a best unbiased estimator. Unfortunately, this interpolant is computationally very expensive to compute exactly. For n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n. This is infeasible for large n. In practice, kriging is solved approximately by local approaches that are based on considering only a relatively small number of points that lie close to the query point. There are many problems with this local approach, however. The first is that determining the proper neighborhood size is tricky, and is usually solved by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on the local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous. Meyer showed that while kriging produces smooth continuous surfaces, it has zero order continuity along its borders. Thus, at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical. Recently a more principled approach to approximating kriging has been proposed, based on a technique called covariance tapering. The problems arise from the fact that the covariance functions used in kriging have global support. Our implementations combine, utilize, and enhance a number of different approaches that have been introduced in the literature for solving large linear systems for interpolation of scattered data points. For very large systems, exact methods such as Gaussian elimination are impractical since they require O(n^3) time and O(n^2) storage. As Billings et al. suggested, we use an iterative approach. In particular, we use the SYMMLQ method for solving the large but sparse ordinary kriging systems that result from tapering. The main technical issue that needs to be overcome in our algorithmic solution is that the points' covariance matrix for kriging should be symmetric positive definite. The goal of tapering is to obtain a sparse approximate representation of the covariance matrix while maintaining its positive definiteness. Furrer et al. used tapering to obtain a sparse linear system of the form Ax = b, where A is the tapered symmetric positive definite covariance matrix; thus, Cholesky factorization could be used to solve their linear systems. They implemented an efficient sparse Cholesky decomposition method. They also showed that if these tapers are used for a limited class of covariance models, the solution of the system converges to the solution of the original system. Matrix A in the ordinary kriging system, while symmetric, is not positive definite, so their approach is not applicable to the ordinary kriging system. Therefore, we use tapering only to obtain a sparse linear system, and then use SYMMLQ to solve the ordinary kriging system.
    We show that solving large kriging systems becomes practical via tapering and iterative methods, resulting in lower estimation errors compared to traditional local approaches and significant memory savings compared to the original global system. We also developed a more efficient variant of the sparse SYMMLQ method for large ordinary kriging systems. This approach adaptively finds the correct local neighborhood for each query point in the interpolation process.
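
    A hedged sketch of the sparse pipeline described above (SciPy ships MINRES, a Krylov solver closely related to SYMMLQ for symmetric indefinite systems, so it stands in here; the compactly supported spherical model supplies the sparsity that tapering a global model would otherwise provide, and all sizes and parameters are illustrative):

      import numpy as np
      from scipy.sparse import csr_matrix, bmat
      from scipy.sparse.linalg import minres

      rng = np.random.default_rng(0)
      pts = rng.uniform(0, 100, size=(500, 2))        # scattered sample sites
      d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)

      def spherical(h, r=20.0):
          # Compactly supported model: exact zeros beyond range r => sparsity.
          return np.where(h < r, 1 - 1.5 * h / r + 0.5 * (h / r) ** 3, 0.0)

      C = csr_matrix(spherical(d))                    # sparse covariance block
      one = csr_matrix(np.ones((C.shape[0], 1)))
      K = bmat([[C, one], [one.T, None]]).tocsr()     # ordinary kriging matrix:
                                                      # symmetric but indefinite
      q = np.array([50.0, 50.0])                      # query location
      rhs = np.append(spherical(np.linalg.norm(pts - q, axis=1)), 1.0)
      sol, info = minres(K, rhs)
      weights = sol[:-1]                              # sol[-1] is the Lagrange multiplier
      print(info == 0, weights.sum())                 # converged; sum(w) ~= 1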

  16. New, Novice or Nervous? The "Quick" Guide to the "No-Quick-Fix"

    ERIC Educational Resources Information Center

    Teaching History, 2016

    2016-01-01

    "Teaching History" presents "New, Novice or Nervous (NNN)" for those new to the published writings of history teachers. Each problem newcomers wrestle with is one other teachers have wrestled with too. Quick fixes do not exist. But in others' writing, there is something better: "conversations in which other history…

  17. Fixed and equilibrium endpoint problems in uneven-aged stand management

    Treesearch

    Robert G. Haight; Wayne M. Getz

    1987-01-01

    Studies in uneven-aged management have concentrated on the determination of optimal steady-state diameter distribution harvest policies for single and mixed species stands. To find optimal transition harvests for irregular stands, either fixed endpoint or equilibrium endpoint constraints can be imposed after finite transition periods. Penalty function and gradient...

  18. Performance Problems in Service Contracting

    DTIC Science & Technology

    1988-01-01

    ...in a National Forest to producing a technical manual for the U.S. Army. Contract types have run the gamut from firm fixed price to various forms of cost plus arrangements, and award has been...

  19. Differential games.

    NASA Technical Reports Server (NTRS)

    Varaiya, P. P.

    1972-01-01

    General discussion of the theory of differential games with two players and zero sum. Games starting at a fixed initial state and ending at a fixed final time are analyzed. Strategies for the games are defined. The existence of saddle values and saddle points is considered. A stochastic version of a differential game is used to examine the synthesis problem.

  20. Development of an optimized protocol for the detection of classical swine fever virus in formalin-fixed, paraffin-embedded tissues by seminested reverse transcription-polymerase chain reaction and comparison with in situ hybridization.

    PubMed

    Ha, S-K; Choi, C; Chae, C

    2004-10-01

    An optimized protocol was developed for the detection of classical swine fever virus (CSFV) in formalin-fixed, paraffin-embedded tissues obtained from experimentally and naturally infected pigs by seminested reverse transcription-polymerase chain reaction (RT-PCR). The results of seminested RT-PCR were compared with those determined by in situ hybridization. The results obtained show that deparaffinization with xylene, digestion with proteinase K, and extraction with Trizol LS, followed by seminested RT-PCR, is a reliable detection method. An increase in sensitivity was observed as amplicon size decreased. The highest sensitivity for RT-PCR on RNA from formalin-fixed, paraffin-embedded tissues was obtained with amplicon sizes less than approximately 200 base pairs. A hybridization signal for CSFV was detected in lymph nodes from 12 experimentally and 12 naturally infected pigs. The sensitivity of seminested RT-PCR compared with in situ hybridization was 100% for CSFV. When only formalin-fixed tissues are available, seminested RT-PCR and in situ hybridization are useful diagnostic methods for the detection of CSFV nucleic acid.

  1. Stochastic oscillations in models of epidemics on a network of cities

    NASA Astrophysics Data System (ADS)

    Rozhnova, G.; Nunes, A.; McKane, A. J.

    2011-11-01

    We carry out an analytic investigation of stochastic oscillations in a susceptible-infected-recovered model of disease spread on a network of n cities. In the model a fraction f_jk of individuals from city k commute to city j, where they may infect, or be infected by, others. Starting from a continuous-time Markov description of the model, the deterministic equations, which are valid in the limit when the population of each city is infinite, are recovered. The stochastic fluctuations about the fixed point of these equations are derived by use of the van Kampen system-size expansion. The fixed point structure of the deterministic equations is remarkably simple: a unique nontrivial fixed point always exists and has the feature that the fraction of susceptible, infected, and recovered individuals is the same for each city irrespective of its size. We find that the stochastic fluctuations have an analogously simple dynamics: all oscillations have a single frequency, equal to that found in the one-city case. We interpret this phenomenon in terms of the properties of the spectrum of the matrix of the linear approximation of the deterministic equations at the fixed point.
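
    As a hedged illustration of the single-frequency result (a one-city SIR model with demography and invented parameters, rather than the paper's n-city network), the oscillation frequency predicted by the system-size expansion is the imaginary part of the Jacobian eigenvalues at the endemic fixed point:

      import numpy as np

      # dS/dt = mu - beta*S*I - mu*S ;  dI/dt = beta*S*I - (gamma + mu)*I
      beta, gamma, mu = 1.0, 0.1, 0.01
      S = (gamma + mu) / beta                  # endemic fixed point
      I = mu * (1 - S) / (beta * S)
      J = np.array([[-beta * I - mu, -beta * S],
                    [ beta * I,       beta * S - (gamma + mu)]])
      lam = np.linalg.eigvals(J)
      print("frequency (rad/time):", abs(lam[0].imag))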

  2. Young Women’s Dynamic Family Size Preferences in the Context of Transitioning Fertility

    PubMed Central

    Yeatman, Sara; Sennott, Christie; Culpepper, Steven

    2013-01-01

    Dynamic theories of family size preferences posit that they are not a fixed and stable goal but rather are akin to a moving target that changes within individuals over time. Nonetheless, in high-fertility contexts, changes in family size preferences tend to be attributed to low construct validity and measurement error instead of genuine revisions in preferences. To address the appropriateness of this incongruity, the present study examines evidence for the sequential model of fertility among a sample of young Malawian women living in a context of transitioning fertility. Using eight waves of closely spaced data and fixed-effects models, we find that these women frequently change their reported family size preferences and that these changes are often associated with changes in their relationship and reproductive circumstances. The predictability of change gives credence to the argument that ideal family size is a meaningful construct, even in this higher-fertility setting. Changes are not equally predictable across all women, however, and gamma regression results demonstrate that women for whom reproduction is a more distant goal change their fertility preferences in less-predictable ways. PMID:23619999

  3. Young women's dynamic family size preferences in the context of transitioning fertility.

    PubMed

    Yeatman, Sara; Sennott, Christie; Culpepper, Steven

    2013-10-01

    Dynamic theories of family size preferences posit that they are not a fixed and stable goal but rather are akin to a moving target that changes within individuals over time. Nonetheless, in high-fertility contexts, changes in family size preferences tend to be attributed to low construct validity and measurement error instead of genuine revisions in preferences. To address the appropriateness of this incongruity, the present study examines evidence for the sequential model of fertility among a sample of young Malawian women living in a context of transitioning fertility. Using eight waves of closely spaced data and fixed-effects models, we find that these women frequently change their reported family size preferences and that these changes are often associated with changes in their relationship and reproductive circumstances. The predictability of change gives credence to the argument that ideal family size is a meaningful construct, even in this higher-fertility setting. Changes are not equally predictable across all women, however, and gamma regression results demonstrate that women for whom reproduction is a more distant goal change their fertility preferences in less-predictable ways.

  4. Introducing TreeCollapse: a novel greedy algorithm to solve the cophylogeny reconstruction problem.

    PubMed

    Drinkwater, Benjamin; Charleston, Michael A

    2014-01-01

    Cophylogeny mapping is used to uncover deep coevolutionary associations between two or more phylogenetic histories at a macro coevolutionary scale. As cophylogeny mapping is NP-Hard, this technique relies heavily on heuristics to solve all but the most trivial cases. One notable approach utilises a metaheuristic to search only a subset of the exponential number of fixed node orderings possible for the phylogenetic histories in question. This is of particular interest as it is the only known heuristic that guarantees biologically feasible solutions. This has enabled research to focus on larger coevolutionary systems, such as coevolutionary associations between figs and their pollinator wasps, including over 200 taxa. Although able to converge on solutions for problem instances of this size, a reduction from the current cubic running time is required to handle larger systems, such as Wolbachia and their insect hosts. Rather than solving this underlying problem optimally this work presents a greedy algorithm called TreeCollapse, which uses common topological patterns to recover an approximation of the coevolutionary history where the internal node ordering is fixed. This approach offers a significant speed-up compared to previous methods, running in linear time. This algorithm has been applied to over 100 well-known coevolutionary systems converging on Pareto optimal solutions in over 68% of test cases, even where in some cases the Pareto optimal solution has not previously been recoverable. Further, while TreeCollapse applies a local search technique, it can guarantee solutions are biologically feasible, making this the fastest method that can provide such a guarantee. As a result, we argue that the newly proposed algorithm is a valuable addition to the field of coevolutionary research. Not only does it offer a significantly faster method to estimate the cost of cophylogeny mappings but by using this approach, in conjunction with existing heuristics, it can assist in recovering a larger subset of the Pareto front than has previously been possible.

  5. Accuracy of self-reported versus actual online gambling wins and losses.

    PubMed

    Braverman, Julia; Tom, Matthew A; Shaffer, Howard J

    2014-09-01

    This study is the first to compare the accuracy of self-reported with actual monetary outcomes of online fixed odds sports betting, live action sports betting, and online casino gambling at the individual level of analysis. Subscribers to bwin.party digital entertainment's online gambling service volunteered to respond to the Brief Bio-Social Gambling Screen and questions about their estimated gambling results on specific games for the last 3 or 12 months. We compared the estimated results of each subscriber with his or her actual betting results data. On average, between 34% and 40% of the participants expressed a favorable distortion of their gambling outcomes (i.e., they underestimated losses or overestimated gains) depending on the time period and game. The size of the discrepancy between actual and self-reported results was consistently associated with the self-reported presence of gambling-related problems. However, the specific direction of the reported discrepancy (i.e., favorable vs. unfavorable bias) was not associated with gambling-related problems. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  6. Chaos in a restricted problem of rotation of a rigid body with a fixed point

    NASA Astrophysics Data System (ADS)

    Borisov, A. V.; Kilin, A. A.; Mamaev, I. S.

    2008-06-01

    In this paper, we consider the transition to chaos in the phase portrait of a restricted problem of rotation of a rigid body with a fixed point. Two interrelated mechanisms responsible for chaotization are indicated: (1) the growth of the homoclinic structure and (2) the development of cascades of period doubling bifurcations. On the zero level of the area integral, an adiabatic behavior of the system (as the energy tends to zero) is noted. Meander tori induced by the break of the torsion property of the mapping are found.

  7. Visual Acuity Using Head-fixed Displays During Passive Self and Surround Motion

    NASA Technical Reports Server (NTRS)

    Wood, Scott J.; Black, F. Owen; Stallings, Valerie; Peters, Brian

    2007-01-01

    The ability to read head-fixed displays on various motion platforms requires the suppression of vestibulo-ocular reflexes. This study examined dynamic visual acuity while viewing a head-fixed display during different self and surround rotation conditions. Twelve healthy subjects were asked to report the orientation of Landolt C optotypes presented on a micro-display fixed to a rotating chair at 50 cm distance. Acuity thresholds were determined by the lowest size at which the subjects correctly identified 3 of 5 optotype orientations at peak velocity. Visual acuity was compared across four different conditions, each tested at 0.05 and 0.4 Hz (peak amplitude of 57 deg/s). The four conditions included: subject rotated in semi-darkness (i.e., limited to background illumination of the display), subject stationary while visual scene rotated, subject rotated around a stationary visual background, and both subject and visual scene rotated together. Visual acuity performance was greatest when the subject rotated around a stationary visual background; i.e., when both vestibular and visual inputs provided concordant information about the motion. Visual acuity performance was most reduced when the subject and visual scene rotated together; i.e., when the visual scene provided discordant information about the motion. Ranges of 4-5 logMAR step sizes across the conditions indicated the acuity task was sufficient to discriminate visual performance levels. The background visual scene can influence the ability to read head-fixed displays during passive motion disturbances. Dynamic visual acuity using head-fixed displays can provide an operationally relevant screening tool for visual performance during exposure to novel acceleration environments.

  8. Extension of the SIESTA MHD equilibrium code to free-plasma-boundary problems

    DOE PAGES

    Peraza-Rodriguez, Hugo; Reynolds-Barredo, J. M.; Sanchez, Raul; ...

    2017-08-28

    Here, SIESTA is a recently developed MHD equilibrium code designed to perform fast and accurate calculations of ideal MHD equilibria for three-dimensional magnetic configurations. Since SIESTA does not assume closed magnetic surfaces, the solution can exhibit magnetic islands and stochastic regions. In its original implementation SIESTA addressed only fixed-boundary problems. That is, the shape of the plasma edge, assumed to be a magnetic surface, was kept fixed as the solution iteratively converged to equilibrium. This condition somewhat restricts the possible applications of SIESTA. In this paper we discuss an extension that will enable SIESTA to address free-plasma-boundary problems, opening up the possibility of investigating problems in which the plasma boundary is perturbed either externally or internally. As an illustration, SIESTA is applied to a configuration of the W7-X stellarator.

  9. Extension of the SIESTA MHD equilibrium code to free-plasma-boundary problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peraza-Rodriguez, Hugo; Reynolds-Barredo, J. M.; Sanchez, Raul

    Here, SIESTA is a recently developed MHD equilibrium code designed to perform fast and accurate calculations of ideal MHD equilibria for three-dimensional magnetic configurations. Since SIESTA does not assume closed magnetic surfaces, the solution can exhibit magnetic islands and stochastic regions. In its original implementation SIESTA addressed only fixed-boundary problems. That is, the shape of the plasma edge, assumed to be a magnetic surface, was kept fixed as the solution iteratively converged to equilibrium. This condition somewhat restricts the possible applications of SIESTA. In this paper we discuss an extension that will enable SIESTA to address free-plasma-boundary problems, opening up the possibility of investigating problems in which the plasma boundary is perturbed either externally or internally. As an illustration, SIESTA is applied to a configuration of the W7-X stellarator.

  10. Accuracy of six elastic impression materials used for complete-arch fixed partial dentures.

    PubMed

    Stauffer, J P; Meyer, J M; Nally, J N

    1976-04-01

    1. The accuracy of four types of impression materials used to make a complete-arch fixed partial denture was evaluated by visual comparison and indirect measurement methods. 2. None of the tested materials allows safe finishing of a complete-arch fixed partial denture on a cast poured from one single master impression. 3. All of the tested materials can be used for impressions for a complete-arch fixed partial denture provided it is not finished on one single cast. Errors can be avoided by making a new impression with the fitted castings in place. Assembly and soldering should be done on the second cast. 4. In making the master fixed partial denture for this study, inaccurate soldering was a problem that was overcome with the use of epoxy glue. Hence, soldering seems to be a major source of inaccuracy for every fixed partial denture.

  11. Statistical estimation via convex optimization for trending and performance monitoring

    NASA Astrophysics Data System (ADS)

    Samar, Sikandar

    This thesis presents an optimization-based statistical estimation approach to find unknown trends in noisy data. A Bayesian framework is used to explicitly take into account prior information about the trends via trend models and constraints. The main focus is on convex formulation of the Bayesian estimation problem, which allows efficient computation of (globally) optimal estimates. There are two main parts of this thesis. The first part formulates trend estimation in systems described by known detailed models as a convex optimization problem. Statistically optimal estimates are then obtained by maximizing a concave log-likelihood function subject to convex constraints. We address the growth in problem dimension as more measurements become available, and introduce a moving horizon framework that enables recursive estimation of the unknown trend by solving a fixed-size convex optimization problem at each horizon. We also present a distributed estimation framework, based on the dual decomposition method, for a system formed by a network of complex sensors with local (convex) estimation. Two specific applications of the convex optimization-based Bayesian estimation approach are described in the second part of the thesis. Batch estimation for parametric diagnostics in a flight control simulation of a space launch vehicle is shown to detect incipient fault trends despite the natural masking properties of feedback in the guidance and control loops. The moving horizon approach is used to estimate time-varying fault parameters in a detailed nonlinear simulation model of an unmanned aerial vehicle. Excellent performance is demonstrated in the presence of winds and turbulence.
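
    The moving horizon idea can be illustrated with a small sketch (the model, window length, and regularizer below are illustrative assumptions, not the thesis's formulation): each arriving measurement triggers the solution of a fixed-size regularized least-squares problem over the latest H samples, so the per-step cost stays constant as data accumulate.

        # Sketch: moving-horizon trend estimation. Each step solves a
        # fixed-size convex least-squares problem over the latest H samples.
        import numpy as np

        def smooth_window(y, lam=50.0):
            """Solve min_x ||y - x||^2 + lam * ||D2 x||^2 over one horizon."""
            H = len(y)
            D2 = np.diff(np.eye(H), n=2, axis=0)        # second-difference operator
            A = np.eye(H) + lam * D2.T @ D2             # normal-equations matrix
            return np.linalg.solve(A, y)

        def moving_horizon(y_stream, H=50, lam=50.0):
            """Recursively re-estimate the trend as measurements arrive."""
            est = []
            for t in range(len(y_stream)):
                w = y_stream[max(0, t - H + 1): t + 1]  # fixed-size window
                est.append(smooth_window(np.asarray(w), lam)[-1])  # keep newest point
            return np.array(est)

        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 300)
        y = 2 * t**2 + 0.1 * rng.standard_normal(300)   # trend plus noise
        trend_hat = moving_horizon(y)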

  12. Algebraic multigrid preconditioners for two-phase flow in porous media with phase transitions

    NASA Astrophysics Data System (ADS)

    Bui, Quan M.; Wang, Lu; Osei-Kuffuor, Daniel

    2018-04-01

    Multiphase flow is a critical process in a wide range of applications, including oil and gas recovery, carbon sequestration, and contaminant remediation. Numerical simulation of multiphase flow requires solving a large, sparse linear system resulting from the discretization of the partial differential equations modeling the flow. In the case of multiphase, multicomponent flow with miscibility effects, this is a very challenging task. The problem becomes even more difficult if phase transitions are taken into account. A new approach to handling phase transitions is to formulate the system as a nonlinear complementarity problem (NCP). Unlike in the primary variable switching technique, the set of primary variables in this approach is fixed even when there is a phase transition. Not only does this improve the robustness of the nonlinear solver, but it also opens up the possibility of using multigrid methods to solve the resulting linear system. The disadvantage of the complementarity approach, however, is that when a phase disappears, the linear system has the structure of a saddle point problem and becomes indefinite, and current algebraic multigrid (AMG) algorithms cannot be applied directly. In this study, we explore the effectiveness of a new multilevel strategy, based on the multigrid reduction technique, to deal with problems of this type. We demonstrate the effectiveness of the method through numerical results for the case of two-phase, two-component flow with phase appearance/disappearance. We also show that the strategy is efficient and scales optimally with problem size.
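
    As a hedged illustration of the NCP idea (a standard device, not necessarily the authors' discretization), a complementarity condition 0 <= x perp F(x) >= 0 can be recast as a root-finding problem via the Fischer-Burmeister function, which is what allows Newton-type solvers to work with a fixed set of primary variables:

        # Illustrative semismooth NCP reformulation; the paper's own
        # discretization and solver are more involved.
        import numpy as np

        def fischer_burmeister(a, b):
            """phi(a, b) = 0  <=>  a >= 0, b >= 0, a*b = 0 (componentwise)."""
            return a + b - np.sqrt(a**2 + b**2)

        def ncp_residual(x, F):
            """Residual whose root solves the NCP: 0 <= x perp F(x) >= 0."""
            return fischer_burmeister(x, F(x))

        # Toy check: F(x) = x - 1 has the NCP solution x = 1.
        F = lambda x: x - 1.0
        print(ncp_residual(np.array([1.0]), F))   # ~[0.0]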

  13. Spectrum efficient distance-adaptive paths for fixed and fixed-alternate routing in elastic optical networks

    NASA Astrophysics Data System (ADS)

    Agrawal, Anuj; Bhatia, Vimal; Prakash, Shashi

    2018-01-01

    Efficient utilization of spectrum is a key concern in the soon to be deployed elastic optical networks (EONs). To perform routing in EONs, various fixed routing (FR) and fixed-alternate routing (FAR) schemes are ubiquitously used. FR and FAR schemes calculate, respectively, a fixed route and a prioritized list of alternate routes between different pairs of origin o and target t nodes in the network. The route calculation performed using FR and FAR schemes is predominantly based either on the physical distance, known as k-shortest paths (KSP), or on the hop count (HC). For survivable optical networks, FAR usually calculates link-disjoint (LD) paths. These conventional routing schemes have been used efficiently for decades in communication networks. In this paper, however, it is demonstrated that these commonly used routing schemes cannot utilize the network spectral resources optimally in the newly introduced EONs. Thus, we propose a new routing scheme for EONs, namely k-distance-adaptive paths (KDAP), which efficiently exploits the distance-adaptive modulation and bit-rate-adaptive superchannel capability inherent to EONs to improve spectrum utilization. In the proposed KDAP, routes are found and prioritized on the basis of bit rate, distance, spectrum granularity, and the number of links used for a particular route. To evaluate the performance of KSP, HC, LD, and the proposed KDAP, simulations have been performed for three different-sized networks, namely a 7-node test network (TEST7), NSFNET, and a 24-node US backbone network (UBN24). We comprehensively assess the performance of the conventional schemes and the proposed routing scheme by solving both the RSA and the dual RSA problems under homogeneous and heterogeneous traffic requirements. Simulation results demonstrate that the relative performance of KSP, HC, and LD varies with the o-t pair and with the network topology and its connectivity. However, the proposed KDAP always performs better than the conventional routing schemes (KSP, HC, and LD) for all the considered networks and traffic scenarios. The proposed KDAP achieves up to 60% and 10.46% improvement in spectrum utilization and resource utilization ratio, respectively, over the conventional routing schemes.
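
    The core of distance-adaptive routing can be sketched as follows (all reach and slot-capacity numbers below are illustrative assumptions, not values from the paper): longer paths force more robust, lower-order modulation, which in turn occupies more frequency slots for the same bit rate, so the shortest path is not always the most spectrum-efficient.

        # Sketch of distance-adaptive slot assignment.
        import math

        # (modulation, bits per symbol, assumed maximum transparent reach in km)
        FORMATS = [("64QAM", 6, 250), ("16QAM", 4, 500), ("QPSK", 2, 1000), ("BPSK", 1, 4000)]
        SLOT_GBPS_PER_BIT = 12.5   # assumed capacity of one 12.5 GHz slot per bit/symbol

        def slots_needed(path_km, bitrate_gbps):
            """Pick the highest-order format whose reach covers the path,
            then count the frequency slots the demand occupies."""
            for name, bits, reach in FORMATS:
                if path_km <= reach:
                    return name, math.ceil(bitrate_gbps / (SLOT_GBPS_PER_BIT * bits))
            raise ValueError("path exceeds reach of most robust format; needs regeneration")

        print(slots_needed(400, 200))   # ('16QAM', 4)
        print(slots_needed(900, 200))   # ('QPSK', 8) -- longer route, double the slots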

  14. Behaviorism: part of the problem or part of the solution.

    PubMed Central

    Holland, J G

    1978-01-01

    The form frequently taken by behavior-modification programs is analyzed in terms of the parent science, Behaviorism. Whereas Behaviorism assumes that behavior is the result of contingencies, and that lasting behavior change involves changing the contingencies that give rise to and support the behavior, most behavior-modification programs merely arrange special contingencies in a special environment to eliminate the "problem" behavior. Even when the problem behavior is as widespread as alcoholism and crime, behavior modifiers focus on "fixing" the alcoholic and the criminal, not on changing the societal contingencies that prevail outside the therapeutic environment and continue to produce alcoholics and criminals. The contingencies that shape this method of dealing with behavioral problems are also analyzed, and this analysis leads to a criticism of the current social structure as a behavior control system. Although applied behaviorists have frequently focused on fixing individuals, the science of Behaviorism provides the means to analyze the structures, the system, and the forms of societal control that produce the "problems". PMID:649524

  15. An approximation algorithm for the Noah's Ark problem with random feature loss.

    PubMed

    Hickey, Glenn; Blanchette, Mathieu; Carmi, Paz; Maheshwari, Anil; Zeh, Norbert

    2011-01-01

    The phylogenetic diversity (PD) of a set of species is a measure of their evolutionary distinctness based on a phylogenetic tree. PD is increasingly being adopted as an index of biodiversity in ecological conservation projects. The Noah's Ark Problem (NAP) is an NP-hard optimization problem that abstracts a fundamental conservation challenge: maximize the expected PD of a set of taxa given a fixed budget, where each taxon is associated with a cost of conservation and a probability of extinction. Only simplified instances of the problem, where one or more parameters are fixed as constants, have yet been addressed in the literature. Furthermore, it has been argued that PD is not an appropriate metric for models that allow information to be lost along paths in the tree. We therefore generalize the NAP to incorporate a proposed model of feature loss according to an exponential distribution and term this problem NAP with Loss (NAPL). In this paper, we present a pseudopolynomial time approximation scheme for NAPL.
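
    For intuition, on a star phylogeny (every taxon on its own branch) each taxon contributes an independent expected-PD gain, and the NAP reduces to a 0/1 knapsack solved exactly by the classic pseudopolynomial dynamic program; this is a simplified sketch in the spirit of, but not identical to, the paper's scheme.

        # Star-tree special case of the Noah's Ark Problem: taxon i has an
        # integer conservation cost and an expected-PD gain (e.g., branch
        # length times the reduction in extinction probability).
        def nap_star(costs, gains, budget):
            best = [0.0] * (budget + 1)
            for cost, gain in zip(costs, gains):
                for b in range(budget, cost - 1, -1):   # descending: each taxon used once
                    best[b] = max(best[b], best[b - cost] + gain)
            return best[budget]

        print(nap_star(costs=[3, 4, 5], gains=[2.0, 3.1, 3.9], budget=7))   # 5.1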

  16. Variational algorithms for nonlinear smoothing applications

    NASA Technical Reports Server (NTRS)

    Bach, R. E., Jr.

    1977-01-01

    A variational approach is presented for solving a nonlinear, fixed-interval smoothing problem with application to offline processing of noisy data for trajectory reconstruction and parameter estimation. The nonlinear problem is solved as a sequence of linear two-point boundary value problems. Second-order convergence properties are demonstrated. Algorithms for both continuous and discrete versions of the problem are given, and example solutions are provided.
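
    For a concrete reference point, the linearized subproblems behave like a standard discrete-time fixed-interval smoother. The sketch below implements the classic Rauch-Tung-Striebel smoother (a textbook algorithm shown for intuition, not the paper's continuous-time variational method):

        # Discrete-time linear fixed-interval (RTS) smoother.
        import numpy as np

        def rts_smoother(y, A, C, Q, R, x0, P0):
            n = len(y)
            xf, Pf, xp, Pp = [], [], [], []
            x, P = x0, P0
            for t in range(n):                      # forward Kalman filter
                xpred, Ppred = A @ x, A @ P @ A.T + Q
                K = Ppred @ C.T @ np.linalg.inv(C @ Ppred @ C.T + R)
                x = xpred + K @ (y[t] - C @ xpred)
                P = (np.eye(len(x0)) - K @ C) @ Ppred
                xp.append(xpred); Pp.append(Ppred); xf.append(x); Pf.append(P)
            xs, Ps = [None] * n, [None] * n
            xs[-1], Ps[-1] = xf[-1], Pf[-1]
            for t in range(n - 2, -1, -1):          # backward smoothing pass
                G = Pf[t] @ A.T @ np.linalg.inv(Pp[t + 1])
                xs[t] = xf[t] + G @ (xs[t + 1] - xp[t + 1])
                Ps[t] = Pf[t] + G @ (Ps[t + 1] - Pp[t + 1]) @ G.T
            return np.array(xs)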

  17. Simulations of solid-fluid coupling with application to crystal entrainment in vigorous convection

    NASA Astrophysics Data System (ADS)

    Suckale, J.; Elkins-Tanton, L. T.; Sethian, J.; Yu, J.

    2009-12-01

    Many problems in computational geophysics require the accurate coupling of a solid body to viscous flow. Examples range from understanding the role of highly crystalline magma for the dynamic of volcanic eruptions to crystal entrainment in magmatic flow and the emplacement of xenoliths. In this paper, we present and validate a numerical method for solid-fluid coupling. The algorithm relies on a two-step projection scheme: In the first step, we solve the multiple-phase Navier-Stokes or Stokes equation in both domains. In the second step, we project the velocity field in the solid domain onto a rigid-body motion by enforcing that the deformation tensor in the respective domain is zero. This procedure is also used to enforce the no-slip boundary condition on the solid-fluid interface. We perform several benchmark computations to validate our computations. More precisely, we investigate the formation of a wake behind both fixed and mobile cylinders and cuboids with and without imposed velocity fields in the fluid. These preliminary tests indicate that our code is able to simulate solid-fluid coupling for Reynolds numbers of up to 1000. Finally, we apply our method to the problem of crystal entrainment in vigorous convection. The interplay between sedimentation and re-entrainment of crystals in convective flow is of fundamental importance for understanding the compositional evolution of magmatic reservoirs of various sizes from small lava ponds to magma oceans at the planetary scale. Previous studies of this problem have focused primarily on laboratory experiments, often with conflicting conclusions. Our work is complementary to these prior studies as we model the competing processes of gravitational sedimentation and entrainment of crystals at the length scale of the size of the crystals.
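
    The projection step admits a compact closed form in 2D: minimizing the squared mismatch between the computed velocities and a rigid motion V + omega * perp(r) gives V as the mean velocity and omega from a moment ratio. A minimal sketch (hypothetical helper, assuming velocities sampled at points of the solid domain):

        # 2D sketch of the rigid-body projection step: least-squares fit of a
        # rigid motion (translation V, angular rate omega) to a velocity field.
        import numpy as np

        def project_rigid(points, velocities):
            c = points.mean(axis=0)
            r = points - c                       # positions relative to centroid
            V = velocities.mean(axis=0)          # optimal translation
            cross = r[:, 0] * velocities[:, 1] - r[:, 1] * velocities[:, 0]
            omega = cross.sum() / (r**2).sum()   # optimal angular velocity
            perp = np.stack([-r[:, 1], r[:, 0]], axis=1)
            return V + omega * perp              # projected (rigid) velocity field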

  18. The mass media destabilizes the cultural homogenous regime in Axelrod's model

    NASA Astrophysics Data System (ADS)

    Peres, Lucas R.; Fontanari, José F.

    2010-02-01

    An important feature of Axelrod's model for culture dissemination or social influence is the emergence of many multicultural absorbing states, despite the fact that the local rules that specify the agents' interactions are explicitly designed to decrease the cultural differences between agents. Here we re-examine the problem of introducing an external, global interaction—the mass media—into the rules of Axelrod's model: in addition to their nearest neighbors, each agent has a certain probability p of interacting with a virtual neighbor whose cultural features are fixed from the outset. Most surprisingly, this apparently homogenizing effect actually increases the cultural diversity of the population. We show that, contrary to previous claims in the literature, even a vanishingly small value of p is sufficient to destabilize the homogeneous regime for very large lattice sizes.
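
    A minimal sketch of the modified update rule (parameter names are illustrative; the lattice size L, feature count F, and media vector M are assumptions consistent with the abstract):

        # One Axelrod update with a fixed "mass media" culture vector M.
        import random

        def axelrod_media_step(culture, M, L, F, p):
            """Agent (i, j) interacts with M with probability p,
            otherwise with a random lattice neighbor."""
            i, j = random.randrange(L), random.randrange(L)
            if random.random() < p:
                other = M                                   # virtual media neighbor
            else:
                di, dj = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
                other = culture[(i + di) % L][(j + dj) % L]
            me = culture[i][j]
            overlap = sum(a == b for a, b in zip(me, other)) / F
            if 0 < overlap < 1 and random.random() < overlap:
                k = random.choice([f for f in range(F) if me[f] != other[f]])
                me[k] = other[k]                            # copy one differing feature

        # Usage: L, F, q, p = 20, 5, 10, 0.01
        # culture = [[[random.randrange(q) for _ in range(F)] for _ in range(L)] for _ in range(L)]
        # M = [0] * F
        # for _ in range(10**5): axelrod_media_step(culture, M, L, F, p)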

  19. Recent Enhancements To The FUN3D Flow Solver For Moving-Mesh Applications

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Thomas, James L.

    2009-01-01

    An unsteady Reynolds-averaged Navier-Stokes solver for unstructured grids has been extended to handle general mesh movement involving rigid, deforming, and overset meshes. Mesh deformation is achieved through analogy to elastic media by solving the linear elasticity equations. A general method for specifying the motion of moving bodies within the mesh has been implemented that allows for inherited motion through parent-child relationships, enabling simulations involving multiple moving bodies. Several example calculations are shown to illustrate the range of potential applications. For problems in which an isolated body is rotating at a fixed rate, a noninertial reference-frame formulation is available. An example calculation for a tilt-wing rotor is used to demonstrate that the time-dependent moving grid and noninertial formulations produce the same results in the limit of zero time-step size.

  20. A framework for inference about carnivore density from unstructured spatial sampling of scat using detector dogs

    USGS Publications Warehouse

    Thompson, Craig M.; Royle, J. Andrew; Garner, James D.

    2012-01-01

    Wildlife management often hinges upon an accurate assessment of population density. Although undeniably useful, many of the traditional approaches to density estimation such as visual counts, livetrapping, or mark–recapture suffer from a suite of methodological and analytical weaknesses. Rare, secretive, or highly mobile species exacerbate these problems through the reality of small sample sizes and movement on and off study sites. In response to these difficulties, there is growing interest in the use of non-invasive survey techniques, which provide the opportunity to collect larger samples with minimal increases in effort, as well as the application of analytical frameworks that are not reliant on large sample size arguments. One promising survey technique, the use of scat detecting dogs, offers a greatly enhanced probability of detection while at the same time generating new difficulties with respect to non-standard survey routes, variable search intensity, and the lack of a fixed survey point for characterizing non-detection. In order to account for these issues, we modified an existing spatially explicit, capture–recapture model for camera trap data to account for variable search intensity and the lack of fixed, georeferenced trap locations. We applied this modified model to a fisher (Martes pennanti) dataset from the Sierra National Forest, California, and compared the results (12.3 fishers/100 km2) to more traditional density estimates. We then evaluated model performance using simulations at 3 levels of population density. Simulation results indicated that estimates based on the posterior mode were relatively unbiased. We believe that this approach provides a flexible analytical framework for reconciling the inconsistencies between detector dog survey data and density estimation procedures.

  1. Evaluation of Fixed Momentary DRO Schedules under Signaled and Unsignaled Arrangements

    ERIC Educational Resources Information Center

    Hammond, Jennifer L.; Iwata, Brian A.; Fritz, Jennifer N.; Dempsey, Carrie M.

    2011-01-01

    Fixed momentary schedules of differential reinforcement of other behavior (FM DRO) generally have been ineffective as treatment for problem behavior. Because most early research on FM DRO included presentation of a signal at the end of the DRO interval, it is unclear whether the limited effects of FM DRO were due to (a) the momentary response…

  2. Effects of Fixed-Time Reinforcement Delivered by Teachers for Reducing Problem Behavior in Special Education Classrooms

    ERIC Educational Resources Information Center

    Tomlin, Michelle; Reed, Phil

    2012-01-01

    The effects of fixed-time (FT) reinforcement schedules on the disruptive behavior of 4 students in special education classrooms were studied. Attention provided on FT schedules in the context of a multiple-baseline design across participants substantially decreased all students' challenging behavior. Disruptive behavior was maintained at levels…

  3. An Efficient MCMC Algorithm to Sample Binary Matrices with Fixed Marginals

    ERIC Educational Resources Information Center

    Verhelst, Norman D.

    2008-01-01

    Uniform sampling of binary matrices with fixed margins is known as a difficult problem. Two classes of algorithms to sample from a distribution not too different from the uniform are studied in the literature: importance sampling and Markov chain Monte Carlo (MCMC). Existing MCMC algorithms converge slowly, require a long burn-in period and yield…
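
    For context, the basic MCMC move for this problem is the well-known checkerboard swap, which preserves both row and column sums; the article's contribution is a chain that improves on this kind of slowly converging baseline. A sketch of the baseline move:

        # One move of the standard "checkerboard swap" chain for 0/1 matrices
        # with fixed margins (illustration only; the article refines this).
        import random

        def checkerboard_swap(M):
            r1, r2 = random.sample(range(len(M)), 2)
            c1, c2 = random.sample(range(len(M[0])), 2)
            a, b = M[r1][c1], M[r1][c2]
            cc, d = M[r2][c1], M[r2][c2]
            if a == d and b == cc and a != b:   # 2x2 checkerboard submatrix
                M[r1][c1], M[r1][c2] = b, a     # flip it; row/column sums unchanged
                M[r2][c1], M[r2][c2] = d, cc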

  4. Fixed-Tuition Pricing: A Solution that May Be Worse than the Problem

    ERIC Educational Resources Information Center

    Morphew, Christopher C.

    2007-01-01

    Fixed-tuition plans, which vary in specifics from institution to institution, rely on a common principle: Students pay the same annual tuition costs over a pre-determined length of time, ostensibly the time required to earn an undergraduate degree. Students, parents, and policymakers are demonstrating growing interest in such plans. At face value,…

  5. Perceived beauty of random texture patterns: A preference for complexity.

    PubMed

    Friedenberg, Jay; Liby, Bruce

    2016-07-01

    We report two experiments on the perceived aesthetic quality of random density texture patterns. In each experiment a square grid was filled with a progressively larger number of elements. Grid size in Experiment 1 was 10×10, with elements added to create a variety of textures ranging from 10%-100% fill levels. Participants rated the beauty of the patterns. Average judgments across all observers showed an inverted U-shaped function that peaked near middle densities. In Experiment 2 grid size was increased to 15×15 to see if observers preferred patterns with a fixed density or a fixed number of elements. The results of the second experiment were nearly identical to those of the first, showing a preference for density over fixed element number. Ratings in both studies correlated positively with a GIF compression metric of complexity and with edge length. Within the range of stimuli used, observers judge more complex patterns to be more beautiful. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Renormalization-group theory for finite-size scaling in extreme statistics

    NASA Astrophysics Data System (ADS)

    Györgyi, G.; Moloney, N. R.; Ozogány, K.; Rácz, Z.; Droz, M.

    2010-04-01

    We present a renormalization-group (RG) approach to explain universal features of extreme statistics applied here to independent identically distributed variables. The outlines of the theory have been described in a previous paper, the main result being that finite-size shape corrections to the limit distribution can be obtained from a linearization of the RG transformation near a fixed point, leading to the computation of stable perturbations as eigenfunctions. Here we show details of the RG theory which exhibit remarkable similarities to the RG known in statistical physics. Besides the fixed points explaining universality, and the least stable eigendirections accounting for convergence rates and shape corrections, the similarities include marginally stable perturbations which turn out to be generic for the Fisher-Tippett-Gumbel class. Distribution functions containing unstable perturbations are also considered. We find that, after a transitory divergence, they return to the universal fixed line at the same or at a different point depending on the type of perturbation.

  7. Energy at Stony Brook.

    ERIC Educational Resources Information Center

    Visich, Marian, Jr.

    1984-01-01

    Discusses strategies used in a course for nonengineering students which consists of case studies of such sociotechnological problems as automobile safety, water pollution, and energy. Solutions to the problems are classified according to three approaches: education, government regulation, and technological fix. (BC)

  8. The effects of geography on domestic fixed and broadcasting satellite systems in ITU Region 2

    NASA Technical Reports Server (NTRS)

    Sawitz, P. H.

    1980-01-01

    The paper discusses the effects of geography on service arcs and on the various techniques used to achieve frequency reuse, and applies the results to the domestic fixed and broadcasting satellite systems of International Telecommunication Union (ITU) Region 2. The effects of arc latitude, size, and shape are considered. Earth-station and satellite antenna discrimination is outlined.

  9. Higher-Order Thinking Development through Adaptive Problem-Based Learning

    ERIC Educational Resources Information Center

    Raiyn, Jamal; Tilchin, Oleg

    2015-01-01

    In this paper we propose an approach to organizing Adaptive Problem-Based Learning (PBL) leading to the development of Higher-Order Thinking (HOT) skills and collaborative skills in students. Adaptability of PBL is expressed by changes in fixed instructor assessments caused by the dynamics of developing HOT skills needed for problem solving,…

  10. Optimization in First Semester Calculus: A Look at a Classic Problem

    ERIC Educational Resources Information Center

    LaRue, Renee; Infante, Nicole Engelke

    2015-01-01

    Optimization problems in first semester calculus have historically been a challenge for students. Focusing on the classic optimization problem of finding the minimum amount of fencing required to enclose a fixed area, we examine students' activity through the lens of Tall and Vinner's concept image and Carlson and Bloom's multidimensional…
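
    For reference, the classic problem mentioned above has a one-line calculus solution. For a rectangle of sides x and y enclosing a fixed area A:

        P(x) = 2x + \frac{2A}{x}, \qquad
        P'(x) = 2 - \frac{2A}{x^{2}} = 0 \;\Rightarrow\; x = \sqrt{A},\quad
        y = \frac{A}{x} = \sqrt{A}, \qquad P_{\min} = 4\sqrt{A},

    so the optimal rectangle is a square. (With fencing required on only three sides, the same argument gives x = \sqrt{2A}.)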

  11. Neither fixed nor random: weighted least squares meta-analysis.

    PubMed

    Stanley, T D; Doucouliagos, Hristos

    2015-06-15

    This study challenges two core conventional meta-analysis methods: fixed effect and random effects. We show how and explain why an unrestricted weighted least squares estimator is superior to conventional random-effects meta-analysis when there is publication (or small-sample) bias and better than a fixed-effect weighted average if there is heterogeneity. Statistical theory and simulations of effect sizes, log odds ratios and regression coefficients demonstrate that this unrestricted weighted least squares estimator provides satisfactory estimates and confidence intervals that are comparable to random effects when there is no publication (or small-sample) bias and identical to fixed-effect meta-analysis when there is no heterogeneity. When there is publication selection bias, the unrestricted weighted least squares approach dominates random effects; when there is excess heterogeneity, it is clearly superior to fixed-effect meta-analysis. In practical applications, an unrestricted weighted least squares weighted average will often provide superior estimates to both conventional fixed and random effects. Copyright © 2015 John Wiley & Sons, Ltd.
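
    One common formulation of the unrestricted weighted least squares average is sketched below (a hedged illustration; consult the paper for the exact estimator): it keeps the fixed-effect point estimate but estimates the multiplicative variance constant from the data instead of restricting it to 1, so the standard error inflates under heterogeneity.

        # Sketch of an unrestricted WLS weighted average of study effect sizes.
        import numpy as np

        def uwls(effects, variances):
            y = np.asarray(effects, dtype=float)
            w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
            beta = np.sum(w * y) / np.sum(w)               # same point estimate as fixed effect
            k = len(y)
            scale = np.sum(w * (y - beta) ** 2) / (k - 1)  # variance constant estimated, not fixed at 1
            se = np.sqrt(scale / np.sum(w))                # inflates when studies are heterogeneous
            return beta, se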

  12. A pressurized cylindrical shell with a fixed end which contains an axial part-through or through crack

    NASA Technical Reports Server (NTRS)

    Yahsi, O. S.; Erdogan, F.

    1983-01-01

    A cylindrical shell having a very stiff end plate or a flange is considered. It is assumed that near this end the cylinder contains an axial flaw which may be modeled as a part-through surface crack or a through crack. The effect of the end constraint on the stress intensity factor, which is the main fracture mechanics parameter, is studied. The applied loads acting on the cylinder are assumed to be axisymmetric. Thus, the crack problem under consideration is symmetric with respect to the plane of the crack, and consequently only the Mode I stress intensity factors are nonzero. With this limitation, the general perturbation problem for a cylinder with a built-in end containing an axial crack is considered. Reissner's shell theory is used to formulate the problem. The part-through crack problem is treated using a line spring model. In the case of a crack tip terminating at the fixed end, it is shown that the integral equations of the shell problem have the same generalized Cauchy kernel as the corresponding plane stress elasticity problem.

  13. Mixed Integer Programming Model and Incremental Optimization for Delivery and Storage Planning Using Truck Terminals

    NASA Astrophysics Data System (ADS)

    Sakakibara, Kazutoshi; Tian, Yajie; Nishikawa, Ikuko

    We discuss the planning of transportation by trucks over a multi-day period. Each truck collects loads from suppliers and delivers them to assembly plants or a truck terminal. By exploiting the truck terminal as a temporal storage, we aim to increase the load ratio of each truck and to minimize the lead time for transportation. In this paper, we show a mixed integer programming model which represents each product explicitly, and discuss the decomposition of the problem into a problem of delivery and storage, and a problem of vehicle routing. Based on this model, we propose a relax-and-fix type heuristic in which decision variables are fixed one by one by mathematical programming techniques such as branch-and-bound methods.

  14. Diameter-Constrained Steiner Tree

    NASA Astrophysics Data System (ADS)

    Ding, Wei; Lin, Guohui; Xue, Guoliang

    Given an edge-weighted undirected graph G = (V,E,c,w), where each edge e ∈ E has a cost c(e) and a weight w(e), a set S ⊆ V of terminals, and a positive constant D_0, we seek a minimum cost Steiner tree in which all terminals appear as leaves and whose diameter is bounded by D_0. Note that the diameter of a tree is the maximum weight of a path connecting two different leaves in the tree. This problem is called the minimum cost diameter-constrained Steiner tree problem. The problem is NP-hard even when the topology of the Steiner tree is fixed. In the present paper we focus on this restricted version and present a fully polynomial time approximation scheme (FPTAS) for computing a minimum cost diameter-constrained Steiner tree under a fixed topology.

  15. Night shift and rotating shift in association with sleep problems, burnout and minor mental disorder in male and female employees.

    PubMed

    Cheng, Wan-Ju; Cheng, Yawen

    2017-07-01

    Shift work is associated with adverse physical and psychological health outcomes. However, the independent health effects of night work and rotating shift on workers' sleep and mental health risks and the potential gender differences have not been fully evaluated. We used data from a nationwide survey of representative employees of Taiwan in 2013, consisting of 16 440 employees. Participants reported their work shift patterns 1 week prior to the survey, which were classified into the four following shift types: fixed day, rotating day, fixed night and rotating night shifts. Also obtained were self-reported sleep duration, presence of insomnia, burnout and mental disorder assessed by the Brief Symptom Rating Scale. Among all shift types, workers with fixed night shifts were found to have the shortest duration of sleep, highest level of burnout score, and highest prevalence of insomnia and minor mental disorders. Gender-stratified regression analyses with adjustment of age, education and psychosocial work conditions showed that both in male and female workers, fixed night shifts were associated with greater risks for short sleep duration (<7 hours per day) and insomnia. In female workers, fixed night shifts were also associated with increased risks for burnout and mental disorders, but after adjusting for insomnia, the associations between fixed night shifts and poor mental health were no longer significant. The findings of this study suggested that a fixed night shift was associated with greater risks for sleep and mental health problems, and the associations might be mediated by sleep disturbance. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  16. Well-fixed acetabular component retention or replacement: the whys and the wherefores.

    PubMed

    Blaha, J David

    2002-06-01

    Occasionally the adult reconstructive surgeon is faced with a well-fixed acetabular component that is associated with an arthroplasty problem that ordinarily would require removal and replacement of the cup. Removal of a well-fixed cup is associated with considerable morbidity in bone loss, particularly in the medial wall of the acetabulum. In such a situation, retention of the cup with exchange only of the polyethylene liner may be possible. As preparation for a prospective study, I informally reviewed my experience of cup retention or replacement in revision total hip arthroplasty. An algorithm for retaining or revising a well-fixed acetabular component is presented here. Copyright 2002, Elsevier Science (USA).

  17. Scaling behavior of knotted random polygons and self-avoiding polygons: Topological swelling with enhanced exponent.

    PubMed

    Uehara, Erica; Deguchi, Tetsuo

    2017-12-07

    We show that the average size of self-avoiding polygons (SAPs) with a fixed knot is much larger than that with no topological constraint if the excluded volume is small and the number of segments is large. We call this topological swelling. We argue for an "enhancement" of the scaling exponent for random polygons with a fixed knot. We study them systematically through SAPs consisting of hard cylindrical segments with various different values of the segment radius. Here we mean by the average size the mean-square radius of gyration. Furthermore, we show numerically that the topological balance length of a composite knot is given by the sum of those of all constituent prime knots. Here we define the topological balance length of a knot as the number of segments at which topological entropic repulsion is balanced by the knot complexity in the average size. The additivity suggests the local knot picture.

  18. Analysis of bulk arrival queueing system with batch size dependent service and working vacation

    NASA Astrophysics Data System (ADS)

    Niranjan, S. P.; Indhira, K.; Chandrasekaran, V. M.

    2018-04-01

    This paper concentrates on a single-server bulk-arrival queueing system with batch-size-dependent service and working vacation. The server provides service in two modes depending upon the queue length: single service if the queue length is at least 'a', and fixed batch service, with batch size 'k', if the queue length is at least 'k' (k > a). After completion of a service, if the queue length is less than 'a', the server leaves for a working vacation. During the working vacation, customers are served at a lower rate than the regular service rate; service during the working vacation likewise comprises two modes. For the proposed model, the probability generating function of the queue length at an arbitrary time is obtained using the supplementary variable technique. Some performance measures are also presented, with suitable numerical illustrations.

  19. Scaling behavior of knotted random polygons and self-avoiding polygons: Topological swelling with enhanced exponent

    NASA Astrophysics Data System (ADS)

    Uehara, Erica; Deguchi, Tetsuo

    2017-12-01

    We show that the average size of self-avoiding polygons (SAPs) with a fixed knot is much larger than that with no topological constraint if the excluded volume is small and the number of segments is large. We call this topological swelling. We argue for an "enhancement" of the scaling exponent for random polygons with a fixed knot. We study them systematically through SAPs consisting of hard cylindrical segments with various different values of the segment radius. Here we mean by the average size the mean-square radius of gyration. Furthermore, we show numerically that the topological balance length of a composite knot is given by the sum of those of all constituent prime knots. Here we define the topological balance length of a knot as the number of segments at which topological entropic repulsion is balanced by the knot complexity in the average size. The additivity suggests the local knot picture.

  20. Gauge fixing and BFV quantization

    NASA Astrophysics Data System (ADS)

    Rogers, Alice

    2000-01-01

    Non-singularity conditions are established for the Batalin-Fradkin-Vilkovisky (BFV) gauge-fixing fermion which are sufficient for it to lead to the correct path integral for a theory with constraints canonically quantized in the BFV approach. The conditions ensure that the anticommutator of this fermion with the BRST charge regularizes the path integral by regularizing the trace over non-physical states in each ghost sector. The results are applied to the quantization of a system which has a Gribov problem, using a non-standard form of the gauge-fixing fermion.

  1. Analysis of Phoenix Anomalies and IV and V Findings Applied to the GRAIL Mission

    NASA Technical Reports Server (NTRS)

    Larson, Steve

    2012-01-01

    Analysis of patterns in IV&V findings and their correlation with post-launch anomalies allowed GRAIL to make more efficient use of IV&V services: fewer issues, a higher fix rate, better communication, and an increased volume of potential issues vetted at lower cost. It remains hard to make predictions of post-launch performance based on IV&V findings. Phoenix made sound fix/use-as-is decisions; the items that were fixed eliminated some problems, although the benefit is hard to quantify. There was broad predictive success in one area, but an inverse relationship in others.

  2. On the size of sports fields

    NASA Astrophysics Data System (ADS)

    Darbois Texier, Baptiste; Cohen, Caroline; Dupeux, Guillaume; Quéré, David; Clanet, Christophe

    2014-03-01

    The size of sports fields considerably varies from a few meters for table tennis to hundreds of meters for golf. We first show that this size is mainly fixed by the range of the projectile, that is, by the aerodynamic properties of the ball (mass, surface, drag coefficient) and its maximal velocity in the game. This allows us to propose general classifications for sports played with a ball.
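
    Two textbook scales make the claim concrete (these are standard estimates, not formulas quoted from the paper). Without air drag, a projectile launched at speed U and angle θ travels

        R_{0} = \frac{U^{2}\sin 2\theta}{g};

    with quadratic drag, the ball's speed decays over the aerodynamic length

        \ell = \frac{2m}{\rho\, C_{D}\, S},

    where m is the ball's mass, S its cross-sectional area, ρ the air density, and C_D the drag coefficient. The field size is then set by whichever of the two scales limits the range for the sport's fastest shots.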

  3. Effects of Class Size on Alternative Educational Outcomes across Disciplines

    ERIC Educational Resources Information Center

    Cheng, Dorothy A.

    2011-01-01

    This is the first study to use self-reported ratings of student learning, instructor recommendations, and course recommendations as the outcome measure to estimate class size effects, doing so across 24 disciplines. Fixed-effects models controlling for heterogeneous courses and instructors reveal that increasing enrollment has negative and…

  4. A Bayesian Nonparametric Meta-Analysis Model

    ERIC Educational Resources Information Center

    Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G.

    2015-01-01

    In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall…

  5. Preliminary CFD study of Pebble Size and its Effect on Heat Transfer in a Pebble Bed Reactor

    NASA Astrophysics Data System (ADS)

    Jones, Andrew; Enriquez, Christian; Spangler, Julian; Yee, Tein; Park, Jungkyu; Farfan, Eduardo

    2017-11-01

    In pebble bed reactors, the typical pebble diameter is 6 cm, and within each pebble are thousands of nuclear fuel kernels. However, the efficiency of the reactor does not depend solely on the number of fuel kernels within each graphite sphere, but also on the type and motion of the coolant within the voids between the spheres and the reactor itself. In this work, a physical analysis of the pebble bed nuclear reactor's fluid dynamics is undertaken using computational fluid dynamics software. The primary goal of this work is to observe the relationship between pebble diameter, in an idealized alignment, and the thermal transport efficiency of the reactor. The idealized model consists of stacked eight-pebble columns fixed at the reactor inlet. Two pebble sizes, 4 cm and 6 cm, are studied; helium is supplied as the coolant at a fixed flow rate of 96 kg/s, and fixed pebble surface temperatures are used. Comparisons are then made to evaluate how pebble size affects the coolant's ability to transport heat.

  6. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Jaehyung; Wagner, Lucas K.; Ertekin, Elif, E-mail: ertekin@illinois.edu

    2015-12-14

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both the cohesive energy and the gap of these systems.

  7. A Novel Coarsening Method for Scalable and Efficient Mesh Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, A; Hysom, D; Gunney, B

    2010-12-02

    In this paper, we propose a novel mesh coarsening method called the brick coarsening method. The proposed method can be used in conjunction with any graph partitioner and scales to very large meshes. This method reduces the problem space by decomposing the original mesh into fixed-size blocks of nodes called bricks, layered in a similar way to conventional brick laying, and then assigning each node of the original mesh to the appropriate brick. Our experiments indicate that the proposed method scales to very large meshes while allowing a simple RCB partitioner to produce higher-quality partitions with significantly fewer edge cuts. Our results further indicate that the proposed brick-coarsening method allows more complicated partitioners like PT-Scotch to scale to very large problem sizes while still maintaining good partitioning performance with a relatively good edge-cut metric. Graph partitioning is an important problem that has many scientific and engineering applications in such areas as VLSI design, scientific computing, and resource management. Given a graph G = (V,E), where V is the set of vertices and E is the set of edges, the (k-way) graph partitioning problem is to partition the vertices of the graph into k disjoint groups such that each group contains a roughly equal number of vertices and the number of edges connecting vertices in different groups is minimized. Graph partitioning plays a key role in large-scale scientific computing, especially in mesh-based computations, as it is used as a tool to minimize the volume of communication and to ensure a well-balanced load across computing nodes. The impact of graph partitioning on the reduction of communication can be easily seen, for example, in iterative methods to solve a sparse system of linear equations. Here, a graph partitioning technique is applied to the matrix, which is basically a graph in which each edge is a non-zero entry in the matrix, to allocate groups of vertices to processors in such a way that much of the matrix-vector multiplication can be performed locally on each processor, and hence to minimize communication. Furthermore, a good graph partitioning scheme ensures an equal amount of computation on each processor. Graph partitioning is a well-known NP-complete problem, and thus the most commonly used graph partitioning algorithms employ some form of heuristics. These algorithms vary in terms of their complexity, partition generation time, and the quality of partitions, and they tend to trade off these factors. A significant challenge we are currently facing at the Lawrence Livermore National Laboratory is how to partition very large meshes on massive-size distributed memory machines like IBM BlueGene/P, where scalability becomes a big issue. For example, we have found that ParMetis, a very popular graph partitioning tool, can only scale to 16K processors. An ideal graph partitioning method in such an environment should be fast and scale to very large meshes, while producing high-quality partitions. This is an extremely challenging task, as to scale to that level, the partitioning algorithm should be simple and be able to produce partitions that minimize inter-processor communication and balance the load imposed on the processors. Our goals in this work are two-fold: (1) to develop a new scalable graph partitioning method with good load balancing and communication reduction capability; and (2) to study the performance of the proposed partitioning method on very large parallel machines using actual data sets and compare the performance to that of existing methods. The proposed method achieves the desired scalability by reducing the mesh size. For this, it coarsens an input mesh into a smaller mesh by coalescing the vertices and edges of the original mesh into a set of mega-vertices and mega-edges. A new coarsening method called the brick algorithm is developed in this research. In the brick algorithm, the zones in a given mesh are first grouped into fixed-size blocks called bricks. These bricks are then laid in a way similar to the conventional brick-laying technique, which reduces the number of neighboring blocks each block needs to communicate with. The contributions of this research are as follows: (1) we have developed a novel method that scales to very large problem sizes while producing high-quality mesh partitions; (2) we measured the performance and scalability of the proposed method on a machine of massive size using a set of actual large complex data sets, scaling to a mesh with 110 million zones (to the best of our knowledge, the largest complex mesh to which a partitioning method has been successfully applied); and (3) we have shown that the proposed method can reduce the number of edge cuts by as much as 65%.
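
    A toy version of the brick idea (coordinates and brick sizes below are illustrative; real meshes are unstructured and the production algorithm is more involved): group nodes into fixed-size bricks and offset alternate layers by half a brick, so each brick interlocks with, and must communicate with, fewer neighboring bricks than in a simple grid blocking.

        # Sketch of brick-style coarsening on a structured grid of nodes.
        def brick_id(ix, iy, bx=4, by=2):
            layer = iy // by
            offset = (bx // 2) if layer % 2 else 0   # shift alternate layers
            return ((ix + offset) // bx, layer)

        # Example: map a 12x4 grid of nodes to bricks.
        bricks = {}
        for iy in range(4):
            for ix in range(12):
                bricks.setdefault(brick_id(ix, iy), []).append((ix, iy))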

  8. An Evaluation of the Gap Sizes of 3-Unit Fixed Dental Prostheses Milled from Sintering Metal Blocks.

    PubMed

    Jung, Jae-Kwan

    2017-01-01

    This study assessed the clinical acceptability of sintering metal-fabricated 3-unit fixed dental prostheses (FDPs) based on gap sizes. Ten specimens were prepared on research models by milling sintering metal blocks (SMB group) or by the lost-wax technique (LWC group). Gap sizes were assessed at 12 points per abutment (premolar and molar), i.e., 24 points per specimen (480 points in total across 20 specimens). The measured points were categorized as marginal, axial wall, and occlusal for assessment in a silicone replica. The silicone replica was cut through the mesiodistal and buccolingual center. The four sections were magnified at 160x, the thickness of the light body silicone was measured to determine the gap size, and gap size means were compared. For the premolar part, the mean (standard deviation) gap size was nonsignificantly (p = 0.139) smaller in the SMB group (68.6 ± 35.6 μm) than in the LWC group (69.6 ± 16.9 μm). The mean molar gap was nonsignificantly (p = 0.852) smaller in the LWC (73.9 ± 25.6 μm) than in the SMB (78.1 ± 37.4 μm) group. The gap sizes were similar between the two groups. Because the gap sizes were within the previously proposed clinically accepted limit, FDPs prepared by sintered metal block milling are clinically acceptable.

  9. Robust organelle size extractions from elastic scattering measurements of single cells (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Cannaday, Ashley E.; Draham, Robert; Berger, Andrew J.

    2016-04-01

    The goal of this project is to estimate non-nuclear organelle size distributions in single cells by measuring angular scattering patterns and fitting them with Mie theory. Simulations have indicated that the large relative size distribution of organelles (mean:width≈2) leads to unstable Mie fits unless scattering is collected at polar angles less than 20 degrees. Our optical system has therefore been modified to collect angles down to 10 degrees. Initial validations will be performed on polystyrene bead populations whose size distributions resemble those of cell organelles. Unlike with the narrow bead distributions that are often used for calibration, we expect to see an order-of-magnitude improvement in the stability of the size estimates as the minimum angle decreases from 20 to 10 degrees. Scattering patterns will then be acquired and analyzed from single cells (EMT6 mouse cancer cells), both fixed and live, at multiple time points. Fixed cells, with no changes in organelle sizes over time, will be measured to determine the fluctuation level in estimated size distribution due to measurement imperfections alone. Subsequent measurements on live cells will determine whether there is a higher level of fluctuation that could be attributed to dynamic changes in organelle size. Studies on unperturbed cells are precursors to ones in which the effects of exogenous agents are monitored over time.

  10. An Evaluation of the Gap Sizes of 3-Unit Fixed Dental Prostheses Milled from Sintering Metal Blocks

    PubMed Central

    2017-01-01

    This study assessed the clinical acceptability of sintering metal-fabricated 3-unit fixed dental prostheses (FDPs) based on gap sizes. Ten specimens were prepared on research models by milling sintering metal blocks (SMB group) or by the lost-wax technique (LWC group). Gap sizes were assessed at 12 points per abutment (premolar and molar), i.e., 24 points per specimen (480 points in total across 20 specimens). The measured points were categorized as marginal, axial wall, and occlusal for assessment in a silicone replica. The silicone replica was cut through the mesiodistal and buccolingual center. The four sections were magnified at 160x, the thickness of the light body silicone was measured to determine the gap size, and gap size means were compared. For the premolar part, the mean (standard deviation) gap size was nonsignificantly (p = 0.139) smaller in the SMB group (68.6 ± 35.6 μm) than in the LWC group (69.6 ± 16.9 μm). The mean molar gap was nonsignificantly (p = 0.852) smaller in the LWC (73.9 ± 25.6 μm) than in the SMB (78.1 ± 37.4 μm) group. The gap sizes were similar between the two groups. Because the gap sizes were within the previously proposed clinically accepted limit, FDPs prepared by sintered metal block milling are clinically acceptable. PMID:28246605

  11. Not Just Hats Anymore: Binomial Inversion and the Problem of Multiple Coincidences

    ERIC Educational Resources Information Center

    Hathout, Leith

    2007-01-01

    The well-known "hats" problem, in which a number of people enter a restaurant and check their hats, and then receive them back at random, is often used to illustrate the concept of derangements, that is, permutations with no fixed points. In this paper, the problem is extended to multiple items of clothing, and a general solution to the problem of…
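
    For reference, the derangement count that the classic hats problem illustrates follows from inclusion-exclusion:

        D_{n} = n!\sum_{k=0}^{n}\frac{(-1)^{k}}{k!},

    so the probability that nobody receives their own hat is D_n/n!, which tends to 1/e ≈ 0.3679 as n grows.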

  12. Determination of the lowest concentrations of aldehyde fixatives for completely fixing various cellular structures by real-time imaging and quantification.

    PubMed

    Zeng, Fangfa; Yang, Wen; Huang, Jie; Chen, Yuan; Chen, Yong

    2013-05-01

    The effectiveness of fixatives for fixing biological specimens has long been widely investigated. However, the lowest concentrations of fixatives needed to completely fix whole cells or various cellular structures remain unclear. Using real-time imaging and quantification, we determined the lowest concentrations of glutaraldehyde (0.001-0.005, ~0.005, 0.01-0.05, 0.01-0.05, and 0.01-0.1 %) and formaldehyde/paraformaldehyde (0.01-0.05, ~0.05, 0.5-1, 1-1.5, and 0.5-1 %) required to completely fix focal adhesions, cell-surface particles, stress fibers, the cell cortex, and the inner structures of human umbilical vein endothelial cells within 20 min. With prolonged fixation times (>20 min), the concentration of fixative required to completely fix these structures shifts to even lower values. These data may help us understand and optimize fixation protocols and understand the potential effects of the small quantities of endogenously generated aldehydes in human cells. We also determined the lowest concentrations of glutaraldehyde (0.5 %) and formaldehyde/paraformaldehyde (2 %) required to induce cell blebbing. We found that the average number and size of the fixation-induced blebs per cell were dependent on both fixative concentration and cell spread area, but were independent of temperature. These data provide important information for understanding cell blebbing, and may help optimize the vesiculation-based technique used to isolate plasma membrane by suggesting ways of controlling the number or size of fixation-induced cell blebs.

  13. Some estimation formulae for continuous time-invariant linear systems

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Sidhu, G. S.

    1975-01-01

    In this brief paper we examine a Riccati equation decomposition due to Reid and Lainiotis and apply the result to the continuous time-invariant linear filtering problem. Exploitation of the time-invariant structure leads to integration-free covariance recursions which are of use in covariance analyses and in filter implementations. A super-linearly convergent iterative solution to the algebraic Riccati equation (ARE) is developed. The resulting algorithm, arranged in a square-root form, is thought to be numerically stable and competitive with other ARE solution methods. Certain covariance relations that are relevant to the fixed-point and fixed-lag smoothing problems are also discussed.
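
    The paper's square-root algorithm is not reproduced here, but the flavor of a super-linearly convergent iterative ARE solution can be illustrated with the classical Newton-Kleinman scheme, in which each step solves a Lyapunov equation and convergence is quadratic. A minimal sketch (all matrices hypothetical), checked against SciPy's direct solver:

        import numpy as np
        from scipy.linalg import solve_lyapunov, solve_continuous_are

        def newton_kleinman(A, B, Q, R, K0, tol=1e-10, max_iter=50):
            """Newton-Kleinman iteration for A'P + PA - P B R^{-1} B'P + Q = 0.
            K0 must stabilize (A - B K0)."""
            K, P_prev = K0, None
            for _ in range(max_iter):
                Acl = A - B @ K
                # Lyapunov step: Acl' P + P Acl = -(Q + K' R K)
                P = solve_lyapunov(Acl.T, -(Q + K.T @ R @ K))
                K = np.linalg.solve(R, B.T @ P)
                if P_prev is not None and np.linalg.norm(P - P_prev) < tol:
                    break
                P_prev = P
            return P

        A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # stable, so K0 = 0 works
        B = np.array([[0.0], [1.0]])
        Q, R = np.eye(2), np.eye(1)
        P = newton_kleinman(A, B, Q, R, K0=np.zeros((1, 2)))
        print(np.allclose(P, solve_continuous_are(A, B, Q, R), atol=1e-8))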

  14. Optimal fixed-finite-dimensional compensator for Burgers' equation with unbounded input/output operators

    NASA Technical Reports Server (NTRS)

    Burns, John A.; Marrekchi, Hamadi

    1993-01-01

    The problem of using reduced-order dynamic compensators to control a class of nonlinear parabolic distributed parameter systems was considered. The focus was on a system with unbounded input and output operators governed by Burgers' equation. A linearized model was used to compute low-order finite-dimensional control laws by minimizing certain energy functionals, and these laws were then applied to the nonlinear model. Standard approaches to this problem employ model/controller reduction techniques in conjunction with linear quadratic Gaussian (LQG) theory. The approach used here is based on the finite-dimensional Bernstein/Hyland optimal projection theory, which yields a fixed-finite-order controller.

  15. Modelling low Reynolds number vortex-induced vibration problems with a fixed mesh fluid-solid interaction formulation

    NASA Astrophysics Data System (ADS)

    González Cornejo, Felipe A.; Cruchaga, Marcela A.; Celentano, Diego J.

    2017-11-01

    The present work reports a fluid-rigid solid interaction formulation described within the framework of a fixed-mesh technique. The numerical analysis is focussed on the study of vortex-induced vibration (VIV) of a circular cylinder at low Reynolds number. The proposed numerical scheme encompasses the fluid dynamics computation in an Eulerian domain, where the body is embedded using a collection of markers to describe its shape, and the rigid solid's motion is obtained from Newton's second law. The body's velocity is imposed on the fluid domain through a penalty technique on the embedded fluid-solid interface. The fluid tractions acting on the solid are computed from the fluid dynamic solution of the flow around the body, and the resulting forces are used to solve the solid motion. The numerical code is validated by contrasting the obtained results with those reported in the literature using different approaches for simulating the flow past a fixed circular cylinder as a benchmark problem. Moreover, a mesh convergence analysis is performed with satisfactory results. In particular, a VIV problem is analyzed, emphasizing the description of the synchronization phenomenon.

  16. A Study of Fixed-Order Mixed Norm Designs for a Benchmark Problem in Structural Control

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.; Calise, Anthony J.; Hsu, C. C.

    1998-01-01

    This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full-order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of H2 design to unmodelled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full-order compensators that are robust to both unmodelled dynamics and to parametric uncertainty. Finally, a set of mixed H2/mu compensators are designed which are optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.

  17. A semi-flexible model prediction for the polymerization force exerted by a living F-actin filament on a fixed wall

    NASA Astrophysics Data System (ADS)

    Pierleoni, Carlo; Ciccotti, Giovanni; Ryckaert, Jean-Paul

    2015-10-01

    We consider a single living semi-flexible filament with persistence length ℓp in chemical equilibrium with a solution of free monomers at fixed monomer chemical potential μ1 and fixed temperature T. While one end of the filament is chemically active with single monomer (de)polymerization steps, the other end is grafted normally to a rigid wall to mimic a rigid network from which the filament under consideration emerges. A second rigid wall, parallel to the grafting wall, is fixed at distance L << ℓp from the filament seed. In supercritical conditions where the monomer density ρ1 is higher than the critical density ρ1c, the filament tends to polymerize and impinges onto the second surface which, in suitable conditions (non-escaping filament regime), stops the filament growth. We first establish the grand potential Ω(μ1, T, L) of this system treated as an ideal reactive mixture, and derive some general properties, in particular the filament size distribution and the force exerted by the living filament on the obstacle wall. We apply this formalism to the semi-flexible, living, discrete wormlike chain model with step size d and persistence length ℓp, hitting a hard wall. Explicit properties require the computation of the mean force f̄i(L) exerted by the wall at L on a filament of fixed size i, and of the associated potential Wi(L), with f̄i(L) = -dWi(L)/dL. By original Monte Carlo calculations for a few filament lengths in a wide range of compression, we justify the use of the weak-bending universal expressions of Gholami et al. [Phys. Rev. E 74, 041803 (2006)] over the whole non-escaping filament regime. For a filament of size i with contour length Lc = (i - 1)d, this universal form rises rapidly from zero (non-compressed state) to the buckling value fb(Lc, ℓp) = π²kBTℓp/(4Lc²) over a compression range much narrower than the size d of a monomer. Employing this universal form for living filaments, we find that the average force exerted by a living filament on a wall at distance L is in practice L-independent and very close to the stalling force FsH = (kBT/d) ln(ρ̂1) predicted by Hill, this expression being strictly valid in the rigid filament limit. The average filament force results from the product of the cumulative size fraction x = x(L, ℓp, ρ̂1), the fraction of filaments in contact with the wall, times the buckling force on a filament of size Lc ≈ L, namely FsH = x fb(L; ℓp). The observed L-independence of FsH implies that x ∝ L⁻² for given (ℓp, ρ̂1) and x ∝ ln ρ̂1 for given (ℓp, L). At fixed (L, ρ̂1), one also has x ∝ ℓp⁻¹, which indicates that the rigid filament limit ℓp → ∞ is a singular limit in which an infinite force has zero weight. Finally, we derive the physically relevant threshold for filament escaping in the case of actin filaments.
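
    As a numeric sanity check of the two expressions above, the sketch below evaluates the buckling force fb = π²kBTℓp/(4Lc²) and Hill's stalling force Fs = (kBT/d) ln(ρ̂1) for assumed actin-like parameters (step size, persistence length, and reduced density below are illustrative, not the paper's values):

        import numpy as np

        kBT = 4.11e-21   # J, thermal energy at ~298 K
        d   = 2.7e-9     # m, added contour length per monomer (actin-like, assumed)
        lp  = 1e-5       # m, persistence length ~10 um (actin-like, assumed)
        rho_hat = 2.0    # reduced free-monomer density rho1/rho1c (assumed supercritical)

        def buckling_force(Lc):
            """Buckling force of a grafted filament, f_b = pi^2 kBT lp / (4 Lc^2)."""
            return np.pi ** 2 * kBT * lp / (4.0 * Lc ** 2)

        def hill_stalling_force(rho_hat):
            """Hill's stalling force, F_s = (kBT / d) ln(rho_hat)."""
            return kBT / d * np.log(rho_hat)

        for L in (0.5e-6, 1e-6, 2e-6):
            print(f"Lc = {L * 1e6:.1f} um: f_b = {buckling_force(L) * 1e12:.2f} pN")
        print(f"F_s(Hill) = {hill_stalling_force(rho_hat) * 1e12:.3f} pN")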

  18. Parallel Simulation of Three-Dimensional Free Surface Fluid Flow Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BAER,THOMAS A.; SACKINGER,PHILIP A.; SUBIA,SAMUEL R.

    1999-10-14

    Simulation of viscous three-dimensional fluid flow typically involves a large number of unknowns. When free surfaces are included, the number of unknowns increases dramatically. Consequently, this class of problem is an obvious application of parallel high performance computing. We describe parallel computation of viscous, incompressible, free surface, Newtonian fluid flow problems that include dynamic contact lines. The Galerkin finite element method was used to discretize the fully-coupled governing conservation equations and a ''pseudo-solid'' mesh mapping approach was used to determine the shape of the free surface. In this approach, the finite element mesh is allowed to deform to satisfy quasi-static solid mechanics equations subject to geometric or kinematic constraints on the boundaries. As a result, nodal displacements must be included in the set of unknowns. Also discussed are the proper constraints appearing along the dynamic contact line in three dimensions. Issues affecting efficient parallel simulations include problem decomposition to distribute computational work equally across an SPMD computer and determination of robust, scalable preconditioners for the distributed matrix systems that must be solved. Solution continuation strategies important for serial simulations have an enhanced relevance in a parallel computing environment due to the difficulty of solving large scale systems. Parallel computations are demonstrated on an example taken from the coating flow industry: flow in the vicinity of a slot coater edge. This is a three dimensional free surface problem possessing a contact line that advances at the web speed in one region but transitions to static behavior in another region. As such, a significant fraction of the computational time is devoted to processing boundary data. Discussion focuses on parallel speed ups for fixed problem size, a class of problems of immediate practical importance.

  19. Fixed Point Problems for Linear Transformations on Pythagorean Triples

    ERIC Educational Resources Information Center

    Zhan, M.-Q.; Tong, J.-C.; Braza, P.

    2006-01-01

    In this article, an attempt is made to find all linear transformations that map a standard Pythagorean triple (a Pythagorean triple [x y z][superscript T] with y being even) into a standard Pythagorean triple, which have [3 4 5][superscript T] as their fixed point. All such transformations form a monoid S* under matrix product. It is found that S*…

  20. A MAP fixed-point, packing-unpacking routine for the IBM 7094 computer

    Treesearch

    Robert S. Helfman

    1966-01-01

    Two MAP (Macro Assembly Program) computer routines for packing and unpacking fixed point data are described. Use of these routines with Fortran IV Programs provides speedy access to quantities of data which far exceed the normal storage capacity of IBM 7000-series computers. Many problems that could not be attempted because of the slow access-speed of tape...
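
    The MAP routines themselves are IBM 7094 assembly and are not reproduced here, but the packing idea carries over directly. A minimal modern sketch of packing several small integers into one machine word (here four 9-bit fields, echoing the 7094's 36-bit word) and unpacking them again:

        def pack(values, bits):
            """Pack small non-negative integers into one word, `bits` bits per field."""
            word = 0
            for v in values:
                assert 0 <= v < (1 << bits), "value does not fit in the field"
                word = (word << bits) | v
            return word

        def unpack(word, bits, count):
            """Recover `count` fields of width `bits` (inverse of pack)."""
            mask = (1 << bits) - 1
            fields = [(word >> (bits * i)) & mask for i in range(count)]
            return list(reversed(fields))

        data = [12, 5, 300, 7]
        w = pack(data, bits=9)          # four 9-bit fields fit in a 36-bit word
        assert unpack(w, 9, 4) == data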

  1. Investigations on effects of the hole size to fix electrodes and interconnection lines in polydimethylsiloxane

    NASA Astrophysics Data System (ADS)

    Behkami, Saber; Frounchi, Javad; Ghaderi Pakdel, Firouz; Stieglitz, Thomas

    2017-11-01

    Translational research in bioelectronics medicine and neural implants often relies on established material assemblies made of silicone rubber (polydimethylsiloxane, PDMS) and precious metals. Longevity of the compound is of utmost importance for implantable devices in therapeutic and rehabilitation applications. Therefore, secure mechanical fixation can be used in addition to chemical bonding mechanisms to interlock PDMS substrate and insulation layers with metal sheets for interconnection lines and electrodes. One of the best ways to fix metal lines and electrodes in PDMS is to design holes in electrode rims that allow direct interconnection between the top and bottom silicone layers. Hence, hole layouts and sizes (up to six holes) that provide sufficient stability against lateral and vertical forces were investigated in line electrodes simulated and fabricated with different layouts, hole sizes, and materials. Best stability was obtained with a single central hole of radius 100, 72 and 62 µm, respectively, in aluminum, platinum and MP35N foil line electrodes of 400 × 500 µm2 size and 20 µm thickness. The study showed that the hole size that best immobilizes a line electrode (of thickness less than 30 µm) with a single central hole is inversely proportional to the Young's modulus of the material used. An array of line electrodes was therefore designed and fabricated to study this effect, and experimental results were compared with simulation data. Subsequently, an approximation curve was generated as a design rule to propose the best radius to fix line electrodes for material thicknesses between 10 and 200 µm using PDMS as substrate material.

  2. The local surface plasmon resonance property and refractive index sensitivity of metal elliptical nano-ring arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Weihua, E-mail: linwh-whu@hotmail.com; Wang, Qian; Dong, Anhua

    2014-11-15

    In this paper, we systematically investigate the optical properties and refractive index sensitivity (RIS) of metal elliptical nano-rings (MENRs) arranged in a rectangular lattice by the finite-difference time-domain method. Eight kinds of MENR are considered, divided into three classes: fixed at the same outer size, at the same inner size, and at the same middle size. All MENR arrays show a bonding-mode local surface plasmon resonance (LSPR) peak in the near-infrared region under longitudinal and transverse polarizations, and lattice-diffraction-enhanced LSPR peaks emerge when the LSPR peak wavelength (LSPRPW) matches the effective lattice constant of the array. The LSPRPW is determined by the charge moving path length, the parallel and cross interactions induced by the stable distributed charges, and the moving charges' inter-attraction. High RIS can be achieved by small-particle-distance arrays composed of MENRs with big inner size and small ring-width. On the other hand, for a MENR array, the comprehensive RIS (including RIS and figure of merit) under transverse polarization is superior to that under longitudinal polarization. Furthermore, on condition that compared arrays are fixed at the same lattice constant, the phenomenon that the RIS of big ring-width MENR arrays may be higher than that of small ring-width MENR arrays appears only when the compared arrays have relatively small lattice constants and are composed of MENRs fixed at the same inner size. In that case, the LSPRPW of the former MENR arrays is also larger than that of the latter. Our systematic results may help experimentalists working with this type of system.

  3. Complex Population Dynamics and the Coalescent Under Neutrality

    PubMed Central

    Volz, Erik M.

    2012-01-01

    Estimates of the coalescent effective population size Ne can be poorly correlated with the true population size. The relationship between Ne and the population size is sensitive to the way in which birth and death rates vary over time. The problem of inference is exacerbated when the mechanisms underlying population dynamics are complex and depend on many parameters. In instances where nonparametric estimators of Ne such as the skyline struggle to reproduce the correct demographic history, model-based estimators that can draw on prior information about population size and growth rates may be more efficient. A coalescent model is developed for a large class of populations such that the demographic history is described by a deterministic nonlinear dynamical system of arbitrary dimension. This class of demographic model differs from those typically used in population genetics. Birth and death rates are not fixed, and no assumptions are made regarding the fraction of the population sampled. Furthermore, the population may be structured in such a way that gene copies reproduce both within and across demes. For this large class of models, it is shown how to derive the rate of coalescence, as well as the likelihood of a gene genealogy with heterochronous sampling and labeled taxa, and how to simulate a coalescent tree conditional on a complex demographic history. This theoretical framework encapsulates many of the models used by ecologists and epidemiologists and should facilitate the integration of population genetics with the study of mathematical population dynamics. PMID:22042576

  4. Wavelet and adaptive methods for time dependent problems and applications in aerosol dynamics

    NASA Astrophysics Data System (ADS)

    Guo, Qiang

    Time dependent partial differential equations (PDEs) are widely used as mathematical models of environmental problems. Aerosols are now clearly identified as an important factor in many environmental aspects of climate and radiative forcing processes, as well as in the health effects of air quality. The mathematical models for the aerosol dynamics with respect to size distribution are nonlinear partial differential and integral equations, which describe processes of condensation, coagulation and deposition. Simulating the general aerosol dynamic equations on time, particle size and space exhibits serious difficulties because the size dimension ranges from a few nanometer to several micrometer while the spatial dimension is usually described with kilometers. Therefore, it is an important and challenging task to develop efficient techniques for solving time dependent dynamic equations. In this thesis, we develop and analyze efficient wavelet and adaptive methods for the time dependent dynamic equations on particle size and further apply them to the spatial aerosol dynamic systems. Wavelet Galerkin method is proposed to solve the aerosol dynamic equations on time and particle size due to the fact that aerosol distribution changes strongly along size direction and the wavelet technique can solve it very efficiently. Daubechies' wavelets are considered in the study due to the fact that they possess useful properties like orthogonality, compact support, exact representation of polynomials to a certain degree. Another problem encountered in the solution of the aerosol dynamic equations results from the hyperbolic form due to the condensation growth term. We propose a new characteristic-based fully adaptive multiresolution numerical scheme for solving the aerosol dynamic equation, which combines the attractive advantages of adaptive multiresolution technique and the characteristics method. On the aspect of theoretical analysis, the global existence and uniqueness of solutions of continuous time wavelet numerical methods for the nonlinear aerosol dynamics are proved by using Schauder's fixed point theorem and the variational technique. Optimal error estimates are derived for both continuous and discrete time wavelet Galerkin schemes. We further derive reliable and efficient a posteriori error estimate which is based on stable multiresolution wavelet bases and an adaptive space-time algorithm for efficient solution of linear parabolic differential equations. The adaptive space refinement strategies based on the locality of corresponding multiresolution processes are proved to converge. At last, we develop efficient numerical methods by combining the wavelet methods proposed in previous parts and the splitting technique to solve the spatial aerosol dynamic equations. Wavelet methods along the particle size direction and the upstream finite difference method along the spatial direction are alternately used in each time interval. Numerical experiments are taken to show the effectiveness of our developed methods.
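
    The thesis's wavelet Galerkin scheme is not reproduced here, but the multiresolution machinery it builds on is easy to demonstrate. A minimal sketch using PyWavelets decomposes a sharply varying, size-distribution-like signal with a compactly supported Daubechies wavelet and verifies exact reconstruction (the signal and parameters are illustrative assumptions):

        import numpy as np
        import pywt  # PyWavelets

        # A sharply varying "size distribution" stand-in: broad log-normal-like
        # mode plus a narrow spike, mimicking strong variation along size.
        x = np.linspace(0.0, 1.0, 1024)
        signal = (np.exp(-0.5 * ((np.log(x + 1e-3) + 3) / 0.4) ** 2)
                  + 0.3 * np.exp(-((x - 0.7) / 0.01) ** 2))

        # Multilevel decomposition with a compactly supported Daubechies wavelet.
        coeffs = pywt.wavedec(signal, "db4", level=5)
        print([len(c) for c in coeffs])  # coarse approximation + 5 detail bands

        # Reconstruction is exact up to floating point (orthogonal basis).
        recon = pywt.waverec(coeffs, "db4")
        print(np.max(np.abs(recon - signal)))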

  5. Introduction to human factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winters, J.M.

    Some background is given on the field of human factors. The nature of problems with current human/computer interfaces is discussed, some costs are identified, ideal attributes of graceful system interfaces are outlined, and some reasons are indicated why it's not easy to fix the problems. (LEW)

  6. The effect of Au amount on size uniformity of self-assembled Au nanoparticles

    NASA Astrophysics Data System (ADS)

    Chen, S.-H.; Wang, D.-C.; Chen, G.-Y.; Chen, K.-Y.

    2008-03-01

    The self-assembled fabrication of nanostructures, a long-sought goal in fabrication engineering, is the ultimate aim of this research. Previous research showed that the size of self-assembled gold nanoparticles can be controlled via the mole ratio between AuCl4- and thiol. In this study, the moles of Au were fixed and only the moles of thiol were adjusted. Five different Au/S mole ratios (1:1/16, 1:1/8, 1:1, 1:8, and 1:16) and their effect on size uniformity were investigated. The size distributions of the gold nanoparticles were analyzed with Mac-View analysis software, and HR-TEM was used to image the self-assembled gold nanoparticles. The results confirmed that the higher the mole ratio between AuCl4- and thiol, the bigger the self-assembled gold nanoparticles. With the moles of Au fixed, the most homogeneous size distribution was obtained at an AuCl4-:thiol mole ratio of 1:1/8. The obtained nanoparticles could be used, for example, in uniform surface nanofabrication, leading to the fabrication of ordered arrays of quantum dots.

  7. Lightweight GPS-tags, one giant leap for wildlife tracking? An assessment approach.

    PubMed

    Recio, Mariano R; Mathieu, Renaud; Denys, Paul; Sirguey, Pascal; Seddon, Philip J

    2011-01-01

    Recent technological improvements have made possible the development of lightweight GPS-tagging devices suitable for tracking medium-to-small sized animals. However, current inferences concerning GPS performance are based on heavier designs suitable only for large mammals. Lightweight GPS units are deployed close to the ground, on species selecting micro-topographical features and with different behavioural patterns in comparison to larger mammal species. We assessed the effects of vegetation, topography, motion, and behaviour on the fix success rate of lightweight GPS-collars across a range of natural environments, and at the scale of perception of feral cats (Felis catus). Units deployed at 20 cm above the ground in sites of varied vegetation and topography showed that tree (native forest) and shrub cover had the largest influence on fix success rate (89% on average), whereas tree cover, sky availability, number of satellites and horizontal dilution of precision (HDOP) were the main variables affecting location error (±39.5 m and ±27.6 m before and after filtering outlier fixes). Tests of HDOP- and satellite-number-based screening methods for removing inaccurate locations achieved only a small reduction of error and discarded many accurate locations. Mobility tests used to simulate cats' motion revealed slightly lower performance compared to the fixed sites. GPS-collars deployed on 43 cats showed no difference in fix success rate by sex or season. Overall, fix success rate and location error values were within the range of previous tests carried out with collars designed for larger species. Lightweight GPS-tags are a suitable method for tracking medium to small size species, hence increasing the range of opportunities for spatial ecology research. However, the effects of vegetation, topography and behaviour on location error and fix success rate need to be evaluated prior to deployment, for the particular study species and their habitats.
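
    A minimal sketch of the kind of HDOP/satellite-count screening evaluated above (the record layout and thresholds are hypothetical; as the study notes, such screening can discard many accurate locations):

        # Hypothetical fix records: (easting_m, northing_m, hdop, n_satellites)
        fixes = [
            (305012.4, 4901233.9, 1.2, 8),
            (305018.1, 4901240.2, 6.5, 4),   # poor geometry: high HDOP, few satellites
            (305015.0, 4901236.5, 2.0, 6),
        ]

        HDOP_MAX = 5.0   # screening thresholds are study-specific assumptions
        NSAT_MIN = 5

        kept = [f for f in fixes if f[2] <= HDOP_MAX and f[3] >= NSAT_MIN]
        print(f"kept {len(kept)} fixes, discarded {len(fixes) - len(kept)}")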

  8. Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.

    PubMed

    Rochon, K; Scoles, G A; Lysyk, T J

    2012-03-01

    A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles) based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10 m2 quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m2 indicated that the most precise sample unit was the 10 m2 quadrat. Samples taken when abundance < 0.04 ticks per 10 m2 were more likely to not depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m2, while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the 10 abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
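
    The paper's fitted coefficients are not reproduced here, but the standard fixed-precision sample-size formulas behind the Taylor and Iwao comparisons are easy to state. A sketch with hypothetical parameter values, where D is the target precision (standard error divided by the mean):

        import math

        def n_taylor(mean, a, b, D):
            """Sample size under Taylor's power law s^2 = a * m^b:
            n = a * m^(b-2) / D^2."""
            return math.ceil(a * mean ** (b - 2.0) / D ** 2)

        def n_iwao(mean, alpha, beta, D):
            """Sample size under Iwao's mean-crowding regression m* = alpha + beta*m:
            n = ((alpha + 1)/m + beta - 1) / D^2."""
            return math.ceil(((alpha + 1.0) / mean + beta - 1.0) / D ** 2)

        # Hypothetical parameter values, for illustration only.
        print(n_taylor(mean=0.05, a=2.0, b=1.3, D=0.25))
        print(n_iwao(mean=0.05, alpha=0.1, beta=1.5, D=0.25))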

  9. Solving free-plasma-boundary problems with the SIESTA MHD code

    NASA Astrophysics Data System (ADS)

    Sanchez, R.; Peraza-Rodriguez, H.; Reynolds-Barredo, J. M.; Tribaldos, V.; Geiger, J.; Hirshman, S. P.; Cianciosa, M.

    2017-10-01

    SIESTA is a recently developed MHD equilibrium code designed to perform fast and accurate calculations of ideal MHD equilibria for 3D magnetic configurations. It is an iterative code that uses the solution obtained by the VMEC code to provide a background coordinate system and an initial guess of the solution. The final solution that SIESTA finds can exhibit magnetic islands and stochastic regions. In its original implementation, SIESTA addressed only fixed-boundary problems. This fixed boundary condition somewhat restricts its possible applications. In this contribution we describe a recent extension of SIESTA that enables it to address free-plasma-boundary situations, opening up the possibility of investigating problems with SIESTA in which the plasma boundary is perturbed either externally or internally. As an illustration, the extended version of SIESTA is applied to a configuration of the W7-X stellarator.

  10. Existence of solutions of a two-dimensional boundary value problem for a system of nonlinear equations arising in growing cell populations.

    PubMed

    Jeribi, Aref; Krichen, Bilel; Mefteh, Bilel

    2013-01-01

    In the paper [A. Ben Amar, A. Jeribi, and B. Krichen, Fixed point theorems for block operator matrix and an application to a structured problem under boundary conditions of Rotenberg's model type, to appear in Math. Slovaca. (2014)], the existence of solutions of the two-dimensional boundary value problem (1) and (2) was discussed in the product Banach space Lp × Lp for p ∈ (1, ∞). Due to the lack of compactness in L1 spaces, the analysis did not cover the case p = 1. The purpose of this work is to extend the results of Ben Amar et al. to the case p = 1 by establishing new variants of fixed-point theorems for a 2×2 operator matrix involving weakly compact operators.

  11. Grain size effect on Lcr elastic wave for surface stress measurement of carbon steel

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Miao, Wenbing; Dong, Shiyun; He, Peng

    2018-04-01

    Based on the acoustoelastic theory of the critically refracted longitudinal wave (Lcr wave), a correction method for the effect of grain size on surface stress measurement is discussed in this paper. Two Lcr-wave transducers at a fixed distance were used to collect Lcr waves, the difference in time of flight between Lcr waves was calculated with a cross-correlation function, and the relationship between the Lcr-wave acoustoelastic coefficient and grain size was obtained. Results show that as grain size increases, the propagation velocity of the Lcr wave decreases, and that one cycle is the optimal step length for calculating the difference in time of flight between Lcr waves. When the stress is below the stress turning point, the relationship between the difference in time of flight and stress is basically consistent with Lcr-wave acoustoelastic theory; above it, a deviation appears and grows gradually with increasing stress. Inhomogeneous elastic-plastic deformation caused by inhomogeneous microstructure, and the fact that the Lcr wave measures an average of the surface stress over a fixed distance, were considered the two main reasons for these results. As grain size increases, the Lcr-wave acoustoelastic coefficient decreases in the form of a power function, and a correction method for the grain size effect on surface stress measurement is accordingly proposed. Finally, the theoretical discussion was verified by fracture morphology observation.
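
    A minimal sketch of the time-of-flight-difference estimation step described above, using cross-correlation on synthetic ultrasonic-like pulses (the sampling rate, pulse shape, and delay are assumptions, not the paper's setup):

        import numpy as np

        fs = 100e6                       # sampling rate, Hz (assumed)
        t = np.arange(0, 20e-6, 1 / fs)
        pulse = np.exp(-((t - 5e-6) / 0.5e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)

        delay_true = 37                  # delay between the two waves, in samples
        sig1 = pulse
        sig2 = np.roll(pulse, delay_true) + 0.01 * np.random.randn(t.size)

        # Cross-correlate and take the lag of the peak as the TOF difference.
        corr = np.correlate(sig2, sig1, mode="full")
        lag = np.argmax(corr) - (t.size - 1)
        print(f"estimated delay: {lag} samples = {lag / fs * 1e9:.1f} ns")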

  12. Validation of two-dimensional and three-dimensional measurements of subpleural alveolar size parameters by optical coherence tomography

    PubMed Central

    Warger, William C.; Hostens, Jeroen; Namati, Eman; Birngruber, Reginald; Bouma, Brett E.; Tearney, Guillermo J.

    2012-01-01

    Optical coherence tomography (OCT) has been increasingly used for imaging pulmonary alveoli. Only a few studies, however, have quantified individual alveolar areas, and the validity of alveolar volumes represented within OCT images has not been shown. To validate quantitative measurements of alveoli from OCT images, we compared the cross-sectional area, perimeter, volume, and surface area of matched subpleural alveoli from microcomputed tomography (micro-CT) and OCT images of fixed air-filled swine samples. The relative change in size between different alveoli was extremely well correlated (r>0.9, P<0.0001), but OCT images underestimated absolute sizes compared to micro-CT by 27% (area), 7% (perimeter), 46% (volume), and 25% (surface area) on average. We hypothesized that the differences resulted from refraction at the tissue–air interfaces and developed a ray-tracing model that approximates the reconstructed alveolar size within OCT images. Using this model and OCT measurements of the refractive index for lung tissue (1.41 for fresh, 1.53 for fixed), we derived equations to obtain absolute size measurements of superellipse and circular alveoli with the use of predictive correction factors. These methods and results should enable the quantification of alveolar sizes from OCT images in vivo. PMID:23235834

  13. Validation of two-dimensional and three-dimensional measurements of subpleural alveolar size parameters by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Unglert, Carolin I.; Warger, William C.; Hostens, Jeroen; Namati, Eman; Birngruber, Reginald; Bouma, Brett E.; Tearney, Guillermo J.

    2012-12-01

    Optical coherence tomography (OCT) has been increasingly used for imaging pulmonary alveoli. Only a few studies, however, have quantified individual alveolar areas, and the validity of alveolar volumes represented within OCT images has not been shown. To validate quantitative measurements of alveoli from OCT images, we compared the cross-sectional area, perimeter, volume, and surface area of matched subpleural alveoli from microcomputed tomography (micro-CT) and OCT images of fixed air-filled swine samples. The relative change in size between different alveoli was extremely well correlated (r>0.9, P<0.0001), but OCT images underestimated absolute sizes compared to micro-CT by 27% (area), 7% (perimeter), 46% (volume), and 25% (surface area) on average. We hypothesized that the differences resulted from refraction at the tissue-air interfaces and developed a ray-tracing model that approximates the reconstructed alveolar size within OCT images. Using this model and OCT measurements of the refractive index for lung tissue (1.41 for fresh, 1.53 for fixed), we derived equations to obtain absolute size measurements of superellipse and circular alveoli with the use of predictive correction factors. These methods and results should enable the quantification of alveolar sizes from OCT images in vivo.

  14. On Profit-Maximizing Pricing for the Highway and Tollbooth Problems

    NASA Astrophysics Data System (ADS)

    Elbassioni, Khaled; Raman, Rajiv; Ray, Saurabh; Sitters, René

    In the tollbooth problem on trees, we are given a tree T = (V,E) with n edges, and a set of m customers, each of whom is interested in purchasing a path on the graph. Each customer has a fixed budget, and the objective is to price the edges of T such that the total revenue made by selling the paths to the customers that can afford them is maximized. An important special case of this problem, known as the highway problem, is when T is restricted to be a line. For the tollbooth problem, we present an O(log n)-approximation, improving on the current best O(log m)-approximation. We also study a special case of the tollbooth problem, when all the paths that customers are interested in purchasing go towards a fixed root of T. In this case, we present an algorithm that returns a (1 - ɛ)-approximation, for any ɛ > 0, and runs in quasi-polynomial time. On the other hand, we rule out the existence of an FPTAS by showing that even for the line case, the problem is strongly NP-hard. Finally, we show that in the discount model, when we allow some items to be priced below zero to improve the overall profit, the problem becomes even APX-hard.
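
    Since the problem is strongly NP-hard even on a line, tiny instances can still be handled by exhaustive search. A sketch for a 3-edge highway instance over a coarse candidate price grid (the instance and grid are hypothetical, and the result is optimal only over that grid, not in general):

        from itertools import product

        # Tiny highway instance: 3 edges; customers = (path as edge set, budget).
        customers = [({0}, 4.0), ({0, 1}, 6.0), ({1, 2}, 5.0), ({2}, 3.0)]
        candidate_prices = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]  # coarse grid, an assumption

        best_rev, best_prices = -1.0, None
        for prices in product(candidate_prices, repeat=3):
            # A customer buys iff the priced path fits the budget; revenue is the
            # sum of path prices over all buying customers.
            rev = sum(sum(prices[e] for e in path)
                      for path, budget in customers
                      if sum(prices[e] for e in path) <= budget)
            if rev > best_rev:
                best_rev, best_prices = rev, prices

        print(best_rev, best_prices)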

  15. Reducing Class Size: What Do We Know?

    ERIC Educational Resources Information Center

    Bascia, Nina

    2010-01-01

    This report provides an overview of findings from the research on primary class size reduction as a strategy to improve student learning. Its purpose is to provide a comprehensive and balanced picture of a very popular educational reform strategy that has often been seen as a "quick fix" for improving students' opportunities to learn in…

  16. Influence of tree spatial pattern and sample plot type and size on inventory

    Treesearch

    John-Pascall Berrill; Kevin L. O' Hara

    2012-01-01

    Sampling with different plot types and sizes was simulated using tree location maps and data collected in three even-aged coast redwood (Sequoia sempervirens) stands selected to represent uniform, random, and clumped spatial patterns of tree locations. Fixed-radius circular plots, belt transects, and variable-radius plots were installed by...

  17. 40 CFR 113.4 - Size classes and associated liability limits for fixed onshore oil storage facilities, 1,000...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Code of Federal Regulations, Title 40 (Protection of Environment), Environmental Protection Agency, Water Programs: where a discharge occurs without the privity and knowledge of the owner or operator, the following limits of liability are established for fixed onshore oil storage facilities by size class.

  18. Average size of random polygons with fixed knot topology.

    PubMed

    Matsuda, Hiroshi; Yao, Akihisa; Tsukahara, Hiroshi; Deguchi, Tetsuo; Furuta, Ko; Inami, Takeo

    2003-07-01

    We have evaluated by numerical simulation the average size R(K) of random polygons of fixed knot topology K = ∅, 3_1, and 3_1#4_1, and we have confirmed the scaling law R^2(K) ~ N^(2ν(K)) for the number N of polygonal nodes in a wide range, N = 100-2200. The best fit gives 2ν(K) ≈ 1.11-1.16 with good fitting curves over the whole range of N. The estimate of 2ν(K) is consistent with the exponent of self-avoiding polygons. In a limited range of N (N ≳ 600), however, we have another fit with 2ν(K) ≈ 1.01-1.07, which is close to the exponent of random polygons.

  19. Combinatoric analysis of heterogeneous stochastic self-assembly.

    PubMed

    D'Orsogna, Maria R; Zhao, Bingyu; Berenji, Bijan; Chou, Tom

    2013-09-28

    We analyze a fully stochastic model of heterogeneous nucleation and self-assembly in a closed system with a fixed total particle number M, and a fixed number of seeds Ns. Each seed can bind a maximum of N particles. A discrete master equation for the probability distribution of the cluster sizes is derived and the corresponding cluster concentrations are found using kinetic Monte-Carlo simulations in terms of the density of seeds, the total mass, and the maximum cluster size. In the limit of slow detachment, we also find new analytic expressions and recursion relations for the cluster densities at intermediate times and at equilibrium. Our analytic and numerical findings are compared with those obtained from classical mass-action equations and the discrepancies between the two approaches analyzed.
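
    The master-equation analysis is not reproduced here, but the kinetic Monte Carlo side of the study is easy to sketch. A minimal Gillespie simulation of Ns seeds drawing from a finite pool of M particles, with assumed attachment/detachment rates:

        import random

        def gillespie_assembly(M=100, Ns=10, N=8, k_on=1.0, k_off=0.05,
                               t_end=50.0, seed=1):
            """Stochastic simulation of Ns seeds, each binding up to N particles
            from a finite pool of M free particles (rates are assumptions)."""
            rng = random.Random(seed)
            sizes = [0] * Ns
            free, t = M, 0.0
            while t < t_end:
                # Event rates: attachment to seed i (mass action in free pool),
                # then detachment from seed i.
                rates = [k_on * free if s < N else 0.0 for s in sizes]
                rates += [k_off if s > 0 else 0.0 for s in sizes]
                total = sum(rates)
                if total == 0.0:
                    break
                t += rng.expovariate(total)     # exponential waiting time
                r, acc = rng.uniform(0.0, total), 0.0
                for j, rate in enumerate(rates):
                    acc += rate
                    if r <= acc:
                        break
                if j < Ns:
                    sizes[j] += 1; free -= 1    # attachment event
                else:
                    sizes[j - Ns] -= 1; free += 1  # detachment event
            return sizes, free

        sizes, free = gillespie_assembly()
        print(sorted(sizes), "free:", free)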

  20. Measurement and testing problems experienced during FAA's emissions testing of general aviation piston engines

    NASA Technical Reports Server (NTRS)

    Salmon, R. F.; Imbrogno, S.

    1976-01-01

    The importance of measuring accurate air and fuel flows as well as the importance of obtaining accurate exhaust pollutant measurements were emphasized. Some of the problems and the corrective actions taken to incorporate fixes and/or modifications were identified.

  1. Determination of the expansion of the potential of the earth's normal gravitational field

    NASA Astrophysics Data System (ADS)

    Kochiev, A. A.

    The potential of the generalized problem of 2N fixed centers is expanded in a polynomial and Legendre function series. Formulas are derived for the expansion coefficients, and the disturbing function of the problem is constructed in an explicit form.
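
    The paper's coefficients are specific to the 2N-fixed-centers potential, but the generic operation, expanding an axially symmetric potential in Legendre polynomials, can be sketched with NumPy (the sample kernel below is the textbook generating-function example, not the paper's potential):

        import numpy as np
        from numpy.polynomial import legendre

        # Expand a sample axially symmetric potential V(mu), mu = cos(theta).
        mu = np.linspace(-1.0, 1.0, 2001)
        V = 1.0 / np.sqrt(1.25 - mu)            # 1/|r - r0| kernel with r=1, r0=0.5

        coeffs = legendre.legfit(mu, V, deg=8)  # least-squares Legendre coefficients
        V_approx = legendre.legval(mu, coeffs)
        print(coeffs)                           # ~ r0^n from the generating function
        print(np.max(np.abs(V - V_approx)))     # truncation error of the degree-8 fit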

  2. Graphite distortion, "C" Reactor. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, N.H.

    1962-02-08

    This report covers the efforts of the Laboratory in an investigation of the graphite distortion in the "C" reactor at Hanford. The particular aspects of the problem to be covered by the Laboratory were possible "fixes" to the control rod sticking problem caused by VSR channel distortion.

  3. The impact of multiple endpoint dependency on Q and I² in meta-analysis.

    PubMed

    Thompson, Christopher Glen; Becker, Betsy Jane

    2014-09-01

    A common assumption in meta-analysis is that effect sizes are independent. When correlated effect sizes are analyzed using traditional univariate techniques, this assumption is violated. This research assesses the impact of dependence arising from treatment-control studies with multiple endpoints on the homogeneity measures Q and I² in scenarios using the unbiased standardized-mean-difference effect size. Univariate and multivariate meta-analysis methods are examined. Conditions included different overall outcome effects, study sample sizes, numbers of studies, between-outcomes correlations, dependency structures, and ways of computing the correlation. The univariate approach used typical fixed-effects analyses whereas the multivariate approach used generalized least-squares (GLS) estimates of a fixed-effects model, weighted by the inverse variance-covariance matrix. Increased dependence among effect sizes led to increased Type I error rates from univariate models. When effect sizes were strongly dependent, error rates were drastically higher than nominal levels regardless of study sample size and number of studies. In contrast, using GLS estimation to account for multiple-endpoint dependency maintained error rates within nominal levels. Conversely, mean I² values were not greatly affected by increased amounts of dependency. Last, we point out that the between-outcomes correlation should be estimated as a pooled within-groups correlation rather than using a full-sample estimator that does not consider treatment/control group membership. Copyright © 2014 John Wiley & Sons, Ltd.
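
    A minimal sketch of the GLS fixed-effects estimate used in the multivariate approach: stacked effect sizes are weighted by the inverse of a block-diagonal covariance matrix that encodes the between-outcomes correlation (all numbers below are hypothetical):

        import numpy as np

        # Three studies, two correlated endpoints each (hypothetical effects).
        y = np.array([0.30, 0.25, 0.45, 0.40, 0.20, 0.15])   # stacked effect sizes
        v = np.array([0.04, 0.05, 0.03, 0.03, 0.06, 0.06])   # sampling variances
        rho = 0.6                                            # between-outcomes correlation

        # Block-diagonal covariance: effects within a study are correlated.
        V = np.diag(v)
        for s in range(3):
            i, j = 2 * s, 2 * s + 1
            V[i, j] = V[j, i] = rho * np.sqrt(v[i] * v[j])

        X = np.ones((6, 1))                                  # common-effect design matrix
        Vinv = np.linalg.inv(V)
        beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
        se = np.sqrt(np.linalg.inv(X.T @ Vinv @ X))
        print(beta[0], se[0, 0])                             # pooled effect and its SE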

  4. Adjoint-based optimization of PDEs in moving domains

    NASA Astrophysics Data System (ADS)

    Protas, Bartosz; Liao, Wenyuan

    2008-02-01

    In this investigation we address the problem of adjoint-based optimization of PDE systems in moving domains. As an example we consider the one-dimensional heat equation with prescribed boundary temperatures and heat fluxes. We discuss two methods of deriving an adjoint system necessary to obtain a gradient of a cost functional. In the first approach we derive the adjoint system after mapping the problem to a fixed domain, whereas in the second approach we derive the adjoint directly in the moving domain by employing methods of the noncylindrical calculus. We show that the operations of transforming the system from a variable to a fixed domain and deriving the adjoint do not commute and that, while the gradient information contained in both systems is the same, the second approach results in an adjoint problem with a simpler structure which is therefore easier to implement numerically. This approach is then used to solve a moving boundary optimization problem for our model system.

  5. Comparison and assessment of aerial and ground estimates of waterbird colonies

    USGS Publications Warehouse

    Green, M.C.; Luent, M.C.; Michot, T.C.; Jeske, C.W.; Leberg, P.L.

    2008-01-01

    Aerial surveys are often used to quantify sizes of waterbird colonies; however, these surveys would benefit from a better understanding of associated biases. We compared estimates of breeding pairs of waterbirds, in colonies across southern Louisiana, USA, made from the ground, fixed-wing aircraft, and a helicopter. We used a marked-subsample method for ground-counting colonies to obtain estimates of error and visibility bias. We made comparisons over 2 sampling periods: 1) surveys conducted on the same colonies using all 3 methods during 3-11 May 2005 and 2) an expanded fixed-wing and ground-survey comparison conducted over 4 periods (May and Jun, 2004-2005). Estimates from fixed-wing aircraft were approximately 65% higher than those from ground counts for overall estimated number of breeding pairs and for both dark and white-plumaged species. The coefficient of determination between estimates based on ground and fixed-wing aircraft was ≤0.40 for most species, and based on the assumption that estimates from the ground were closer to the true count, fixed-wing aerial surveys appeared to overestimate numbers of nesting birds of some species; this bias often increased with the size of the colony. Unlike estimates from fixed-wing aircraft, numbers of nesting pairs made from ground and helicopter surveys were very similar for all species we observed. Ground counts by one observer underestimated the number of breeding pairs by 20% on average. The marked-subsample method provided an estimate of the number of missed nests as well as an estimate of precision. These estimates represent a major advantage of marked-subsample ground counts over aerial methods; however, ground counts are difficult in large or remote colonies. Helicopter surveys and ground counts provide less biased, more precise estimates of breeding pairs than do surveys made from fixed-wing aircraft. We recommend managers employ ground counts using double observers for surveying waterbird colonies when feasible. Fixed-wing aerial surveys may be suitable to determine colony activity and composition of common waterbird species. The most appropriate combination of survey approaches will be based on the need for precise and unbiased estimates, balanced with financial and logistical constraints.

  6. Paraxial design of an optical element with variable focal length and fixed position of principal planes.

    PubMed

    Mikš, Antonín; Novák, Pavel

    2018-05-10

    In this article, we analyze the problem of the paraxial design of an active optical element with variable focal length, which maintains the positions of its principal planes fixed during the change of its optical power. Such optical elements are important in the process of design of complex optical systems (e.g., zoom systems), where the fixed position of principal planes during the change of optical power is essential for the design process. The proposed solution is based on the generalized membrane tunable-focus fluidic lens with several membrane surfaces.
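
    The membrane-lens design itself is not reproduced here, but the paraxial bookkeeping it rests on can be sketched: from a system ray matrix [[A, B], [C, D]] one reads off the effective focal length and the principal-plane positions, and can check how they move as element powers change. The lens values and sign conventions below are illustrative assumptions (same medium on both sides, distances positive to the right):

        import numpy as np

        def thin_lens(f):
            return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

        def gap(d):
            return np.array([[1.0, d], [0.0, 1.0]])

        def cardinal_points(M):
            """EFL and principal-plane positions from a ray matrix [[A,B],[C,D]]."""
            A, B, C, D = M.ravel()
            f = -1.0 / C                 # effective focal length
            h_out = (1.0 - A) / C        # rear principal plane, from output vertex
            h_in = (D - 1.0) / C         # front principal plane, from input vertex
            return f, h_in, h_out

        # Two thin lenses: vary the powers at fixed separation and watch how the
        # focal length and principal planes move.
        for f1, f2 in [(100.0, -50.0), (80.0, -40.0)]:
            M = thin_lens(f2) @ gap(20.0) @ thin_lens(f1)
            print(f1, f2, cardinal_points(M))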

  7. Fixed-time Insemination in Pasture-based Medium-sized Dairy Operations of Northern Germany and an Attempt to Replace GnRH by hCG.

    PubMed

    Marthold, D; Detterer, J; Koenig von Borstel, U; Gauly, M; Holtz, W

    2016-02-01

    A field study was conducted aimed at (i) evaluating the practicability of a fixed-time insemination regime for medium-sized dairy operations of north-western Germany, representative for many regions of Central Europe and (ii) substituting hCG for GnRH as ovulation-inducing agent at the end of a presynch or ovsynch protocol in an attempt to reduce the incidence of premature luteal regression. Cows of two herds synchronized by presynch and two herds synchronized by ovsynch protocol were randomly allotted to three subgroups; in one group ovulation was induced by the GnRH analog buserelin, in another by hCG, whereas a third group remained untreated. The synchronized groups were fixed-time inseminated; the untreated group bred to observed oestrus. Relative to untreated herd mates, pregnancy rate in cows subjected to a presynch protocol with buserelin as ovulation-inducing agent was 74%; for hCG it was 60%. In cows subjected to an ovsynch protocol, the corresponding relative pregnancy rates reached 138% in the case of buserelin and 95% in the case of hCG. Average service interval was shortened by 1 week in the presynch and delayed by 2 weeks in the ovsynch group. It may be concluded that fixed-time insemination of cows synchronized via ovsynch protocol with buserelin as ovulation-inducing agent is practicable and may help improve efficiency and reduce the work load involved with herd management in medium-sized dairy operations. The substitution of hCG for buserelin was found to be not advisable. © 2015 Blackwell Verlag GmbH.

  8. ECS Resignations Raise Questions of Fiscal Health: Leader of State Policy Group Says Problems Can Be Fixed

    ERIC Educational Resources Information Center

    Hoff, David J.

    2006-01-01

    Kathy Christie, senior vice president at the Education Commission of the States (ECS), resigned on May 1, 2006, saying that the Denver-based group faces a financial crisis, and that she doubts the current ECS president can fix it. By the end of the week, the accounting manager had also resigned, expressing similar concerns, and two policy analysts…

  9. Miniaturized double latching solenoid valve

    NASA Technical Reports Server (NTRS)

    Smith, James T. (Inventor)

    2010-01-01

    A valve includes a generally elongate pintle; a spacer having a rounded surface that bears against the pintle; a bulbous tip fixed to the spacer; and a hollow, generally cylindrical collar fixed to the pintle, the collar enclosing the spacer and the tip and including an opening through which a portion of the tip extends, the opening in the collar and interior of the collar being of a size such that the tip floats therein.

  10. Equivalent source modeling of the core magnetic field using magsat data

    NASA Technical Reports Server (NTRS)

    Mayhew, M. A.; Estes, R. H.

    1983-01-01

    Experiments are carried out on fitting the main field using different numbers of equivalent sources arranged in equal area at fixed radii at and inside the core-mantle boundary. By fixing the radius for a given series of runs, the convergence problems that result from the extreme nonlinearity of the problem when dipole positions are allowed to vary are avoided. Results are presented from a comparison between this approach and the standard spherical harmonic approach for modeling the main field in terms of accuracy and computational efficiency. The modeling of the main field with an equivalent dipole representation is found to be comparable to the standard spherical harmonic approach in accuracy. The 32 deg dipole density (42 dipoles) corresponds approximately to an eleventh degree/order spherical harmonic expansion (143 parameters), whereas the 21 deg dipole density (92 dipoles) corresponds to approximately a seventeenth degree/order expansion (323 parameters). It is pointed out that fixing the dipole positions results in rapid convergence of the dipole solutions for single-epoch models.

  11. Studies in integrated line-and packet-switched computer communication systems

    NASA Astrophysics Data System (ADS)

    Maglaris, B. S.

    1980-06-01

    The problem of efficiently allocating the bandwidth of a trunk to both types of traffic is handled for various system and traffic models. A performance analysis is carried out both for variable and fixed frame schemes. It is shown that variable frame schemes, adjusting the frame length according to the traffic variations, offer better trunk utilization at the cost of the additional hardware and software complexity needed because of the lack of synchronization. An optimization study on the fixed frame schemes follows. The problem of dynamically allocating the fixed frame to both types of traffic is formulated as a Markovian Decision process. It is shown that the movable boundary scheme, suggested for commercial implementations of integrated multiplexors, offers optimal or near optimal performance and simplicity of implementation. Finally, the behavior of the movable boundary integrated scheme is studied for tandem link connections. Under the assumptions made for the line-switched traffic, the forward allocation technique is found to offer the best alternative among different path set-up strategies.

  12. A simple technique to increase profits in wood products marketing

    Treesearch

    George B. Harpole

    1971-01-01

    Mathematical models can be used to solve quickly some simple day-to-day marketing problems. This note explains how a sawmill production manager, who has an essentially fixed-capacity mill, can solve several optimization problems by using pencil and paper, a forecast of market prices, and a simple algorithm. One such problem is to maximize profits in an operating period...
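
    The note's pencil-and-paper algorithm is not reproduced here, but the same class of fixed-capacity profit problem is a small linear program. A sketch with hypothetical capacities and forecast prices:

        from scipy.optimize import linprog

        # Hypothetical mill: two products share a fixed weekly budget of saw
        # hours and log supply; per-unit profits come from a price forecast.
        profit = [-45.0, -60.0]          # per-unit profit (negated: linprog minimizes)
        A_ub = [[1.0, 2.0],              # saw-hours per unit of products 1 and 2
                [3.0, 2.0]]              # board-feet of logs per unit
        b_ub = [80.0, 120.0]             # fixed weekly capacities

        res = linprog(c=profit, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None), (0, None)])
        print(res.x, -res.fun)           # optimal product mix and weekly profit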

  13. Energy-Efficient Management of Mechanical Ventilation and Relative Humidity in Hot-Humid Climates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Withers, Jr., Charles R.

    2016-12-01

    In hot and humid climates, it is challenging to energy-efficiently maintain indoor RH at acceptable levels while simultaneously providing required ventilation, particularly in high performance low cooling load homes. The fundamental problem with solely relying on fixed capacity central cooling systems to manage moisture during low sensible load periods is that they are oversized for cooler periods of the year despite being 'properly sized' for a very hot design cooling day. The primary goals of this project were to determine the impact of supplementing a central space conditioning system with 1) a supplemental dehumidifier and 2) a ductless mini-split on seasonal energy use and summer peak power use as well as the impact on thermal distribution and humidity control inside a completely furnished lab home that was continuously ventilated in accordance with ASHRAE 62.2-2013.

  14. Building America Case Study: Energy Efficient Management of Mechanical Ventilation and Relative Humidity in Hot-Humid Climates, Cocoa, Florida

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-01-01

    In hot and humid climates, it is challenging to energy-efficiently maintain indoor RH at acceptable levels while simultaneously providing required ventilation, particularly in high performance low cooling load homes. The fundamental problem with solely relying on fixed capacity central cooling systems to manage moisture during low sensible load periods is that they are oversized for cooler periods of the year despite being 'properly sized' for a very hot design cooling day. The primary goals of this project were to determine the impact of supplementing a central space conditioning system with 1) a supplemental dehumidifier and 2) a ductless mini-split on seasonal energy use and summer peak power use as well as the impact on thermal distribution and humidity control inside a completely furnished lab home that was continuously ventilated in accordance with ASHRAE 62.2-2013.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, S.; Peng, L.; Bronevetsky, G.

    As HPC systems approach Exascale, their circuit features will shrink while their overall size grows, all at a fixed power limit. These trends imply that soft faults in electronic circuits will become an increasingly significant problem for applications running on these systems, occasionally causing them to crash or, worse, silently return incorrect results. This is motivating extensive work on application resilience to such faults, ranging from generic techniques such as replication or checkpoint/restart to algorithm-specific error detection and resilience techniques. Effective use of such techniques requires a detailed understanding of (1) which vulnerable parts of the application are most worth protecting and (2) the performance and resilience impact of fault resilience mechanisms on the application. This paper presents FaultTelescope, a tool that combines these two and generates actionable insights by presenting application vulnerabilities and the impact of fault resilience mechanisms in an intuitive way.

  16. Order of events matter: comparing discrete models for optimal control of species augmentation.

    PubMed

    Bodine, Erin N; Gross, Louis J; Lenhart, Suzanne

    2012-01-01

    We investigate optimal timing of augmentation of an endangered/threatened species population in a target region by moving individuals from a reserve or captive population. This is formulated as a discrete-time optimal control problem in which augmentation occurs once per time period over a fixed number of time periods. The population model assumes Allee effect growth in both target and reserve populations, and the control objective is to maximize the target and reserve population sizes over the time horizon while accounting for costs of augmentation. Two possible orders of events are considered for different life histories of the species relative to augmentation time: individuals are moved either before or after population growth occurs. The control variable is the proportion of the reserve population to be moved to the target population. We develop solutions and illustrate numerical results which indicate circumstances under which optimal augmentation strategies depend upon the order of events.

  17. Algebraic multigrid preconditioners for two-phase flow in porous media with phase transitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bui, Quan M.; Wang, Lu; Osei-Kuffuor, Daniel

    Multiphase flow is a critical process in a wide range of applications, including oil and gas recovery, carbon sequestration, and contaminant remediation. Numerical simulation of multiphase flow requires solving a large, sparse linear system resulting from the discretization of the partial differential equations modeling the flow. In the case of multiphase multicomponent flow with miscible effect, this is a very challenging task. The problem becomes even more difficult if phase transitions are taken into account. A new approach to handle phase transitions is to formulate the system as a nonlinear complementarity problem (NCP). Unlike in the primary variable switching technique, the set of primary variables in this approach is fixed even when there is phase transition. Not only does this improve the robustness of the nonlinear solver, it opens up the possibility to use multigrid methods to solve the resulting linear system. The disadvantage of the complementarity approach, however, is that when a phase disappears, the linear system has the structure of a saddle point problem and becomes indefinite, and current algebraic multigrid (AMG) algorithms cannot be applied directly. In this study, we explore the effectiveness of a new multilevel strategy, based on the multigrid reduction technique, to deal with problems of this type. We demonstrate the effectiveness of the method through numerical results for the case of two-phase, two-component flow with phase appearance/disappearance. Finally, we show that the strategy is efficient and scales optimally with problem size.

  18. Algebraic multigrid preconditioners for two-phase flow in porous media with phase transitions

    DOE PAGES

    Bui, Quan M.; Wang, Lu; Osei-Kuffuor, Daniel

    2018-02-06

    Multiphase flow is a critical process in a wide range of applications, including oil and gas recovery, carbon sequestration, and contaminant remediation. Numerical simulation of multiphase flow requires solving a large, sparse linear system resulting from the discretization of the partial differential equations modeling the flow. In the case of multiphase, multicomponent flow with miscibility effects, this is a very challenging task. The problem becomes even more difficult if phase transitions are taken into account. A new approach to handle phase transitions is to formulate the system as a nonlinear complementarity problem (NCP). Unlike in the primary variable switching technique, the set of primary variables in this approach is fixed even when there is a phase transition. Not only does this improve the robustness of the nonlinear solver, it opens up the possibility of using multigrid methods to solve the resulting linear system. The disadvantage of the complementarity approach, however, is that when a phase disappears, the linear system has the structure of a saddle point problem and becomes indefinite, and current algebraic multigrid (AMG) algorithms cannot be applied directly. In this study, we explore the effectiveness of a new multilevel strategy, based on the multigrid reduction technique, to deal with problems of this type. We demonstrate the effectiveness of the method through numerical results for the case of two-phase, two-component flow with phase appearance/disappearance. Finally, we show that the strategy is efficient and scales optimally with problem size.

  19. AC electroosmosis in microchannels packed with a porous medium

    NASA Astrophysics Data System (ADS)

    Kang, Yuejun; Yang, Chun; Huang, Xiaoyang

    2004-08-01

    This paper presents a theoretical study of ac-driven electroosmotic flow in both open-end and closed-end microchannels packed with uniform charged spherical microparticles. The time-periodic oscillating electroosmotic flow in an open-end capillary in response to the application of an alternating (ac) electric field is obtained using the Green function approach. The analysis is based on the Carman-Kozeny theory. The backpressure associated with the counter-flow in a closed-end capillary is obtained by analytically solving the modified Brinkman momentum equation. It is demonstrated that in a microchannel with its two ends connected to reservoirs and subject to ambient pressure, the oscillating Darcy velocity profile depends on both the pore size and the excitation frequency; these effects are coupled through the ratio of the tubule radius to the Stokes penetration depth. For a fixed pore size, the magnitude of the ac electroosmotic flow decreases with increasing frequency. With increasing pore size, however, the magnitude of the maximum velocity shows two different trends with respect to the excitation frequency: it gets higher in the low frequency domain and lower in the high frequency domain. In a microchannel with closed ends, for a fixed excitation frequency, the use of smaller packing particles generates higher backpressure. For a fixed pore size, the backpressure magnitude shows two different trends with the excitation frequency: when the excitation frequency is lower than the system characteristic frequency, the backpressure decreases with increasing excitation frequency; when it is higher, the backpressure increases with increasing excitation frequency.

  20. Defining space use and movements of Canada lynx with global positioning system telemetry

    USGS Publications Warehouse

    Burdett, C.L.; Moen, R.A.; Niemi, G.J.; Mech, L.D.

    2007-01-01

    Space use and movements of Canada lynx (Lynx canadensis) are difficult to study with very-high-frequency radiocollars. We deployed global positioning system (GPS) collars on 11 lynx in Minnesota to study their seasonal space-use patterns. We estimated home ranges with minimum-convex-polygon and fixed-kernel methods and estimated core areas with area/probability curves. Fixed-kernel home ranges of males (range = 29-522 km²) were significantly larger than those of females (range = 5-95 km²) annually and during the denning season. Some male lynx increased movements during March, the month most influenced by breeding activity. Lynx core areas were predicted by the 60% fixed-kernel isopleth in most seasons. The mean core-area size of males (range = 6-190 km²) was significantly larger than that of females (range = 1-19 km²) annually and during denning. Most female lynx were reproductive animals with reduced movements, whereas males often ranged widely between Minnesota and Ontario. Sensitivity analyses examining the effect of location frequency on home-range size suggest that the home-range sizes of breeding females are less sensitive to sample size than those of males. Longer periods between locations decreased home-range and core-area overlap relative to the home range estimated from daily locations. GPS collars improve our understanding of space use and movements by lynx by increasing the spatial extent and temporal frequency of monitoring and allowing home ranges to be estimated over short periods that are relevant to life-history characteristics. © 2007 American Society of Mammalogists.

  1. Demonstration of Numerical Equivalence of Ensemble and Spectral Averaging in Electromagnetic Scattering by Random Particulate Media

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Zakharova, Nadezhda T.

    2016-01-01

    The numerically exact superposition T-matrix method is used to model far-field electromagnetic scattering by two types of particulate object. Object 1 is a fixed configuration which consists of N identical spherical particles (with N = 200 or 400) quasi-randomly populating a spherical volume V having a median size parameter of 50. Object 2 is a true discrete random medium (DRM) comprising the same number N of particles randomly moving throughout V. The median particle size parameter is fixed at 4. We show that if Object 1 is illuminated by a quasi-monochromatic parallel beam then it generates a typical speckle pattern having no resemblance to the scattering pattern generated by Object 2. However, if Object 1 is illuminated by a parallel polychromatic beam with a 10% bandwidth then it generates a scattering pattern that is largely devoid of speckles and closely reproduces the quasi-monochromatic pattern generated by Object 2. This result serves to illustrate the capacity of the concept of electromagnetic scattering by a DRM to encompass fixed quasi-random particulate samples provided that they are illuminated by polychromatic light.

  2. Do fixed-dose combination pills or unit-of-use packaging improve adherence? A systematic review.

    PubMed Central

    Connor, Jennie; Rafter, Natasha; Rodgers, Anthony

    2004-01-01

    Adequate adherence to medication regimens is central to the successful treatment of communicable and noncommunicable disease. Fixed-dose combination pills and unit-of-use packaging are therapy-related interventions that are designed to simplify medication regimens and so potentially improve adherence. We conducted a systematic review of relevant randomized trials in order to quantify the effects of fixed-dose combination pills and unit-of-use packaging, compared with medications as usually presented, in terms of adherence to treatment and improved outcomes. Only 15 trials met the inclusion criteria; fixed-dose combination pills were investigated in three of these, while unit-of-use packaging was studied in 12 trials. The trials involved treatments for communicable diseases (n = 5), blood pressure lowering medications (n = 3), diabetic patients (n = 1), vitamin supplementation (n = 1) and management of multiple medications by the elderly (n = 5). There were trends towards improved adherence and/or clinical outcomes in all but three of the trials; this reached statistical significance in four of the seven trials reporting a clinically relevant or intermediate end-point, and in seven of the thirteen trials reporting medication adherence. Measures of outcome were, however, heterogeneous, and interpretation was further limited by methodological issues, particularly small sample size, short duration and loss to follow-up. Overall, the evidence suggests that fixed-dose combination pills and unit-of-use packaging are likely to improve adherence in a range of settings, but the limitations of the available evidence mean that uncertainty remains about the size of these benefits. PMID:15654408

  3. Extended Pausing by Humans on Multiple Fixed-Ratio Schedules with Varied Reinforcer Magnitude and Response Requirements

    PubMed Central

    Williams, Dean C; Saunders, Kathryn J; Perone, Michael

    2011-01-01

    We conducted three experiments to reproduce and extend Perone and Courtney's (1992) study of pausing at the beginning of fixed-ratio schedules. In a multiple schedule with unequal amounts of food across two components, they found that pigeons paused longest in the component associated with the smaller amount of food (the lean component), but only when it was preceded by the rich component. In our studies, adults with mild intellectual disabilities responded on a touch-sensitive computer monitor to produce money. In Experiment 1, the multiple-schedule components differed in both response requirement and reinforcer magnitude (i.e., the rich component required fewer responses and produced more money than the lean component). Effects shown with pigeons were reproduced in all 7 participants. In Experiment 2, we removed the stimuli that signaled the two schedule components, and participants' extended pausing was eliminated. In Experiment 3, to assess sensitivity to reinforcer magnitude versus fixed-ratio size, we presented conditions with equal ratio sizes but disparate magnitudes and conditions with equal magnitudes but disparate ratio sizes. Sensitivity to these manipulations was idiosyncratic. The present experiments obtained schedule control in verbally competent human participants and, despite procedural differences, we reproduced findings with animal participants. We showed that pausing is jointly determined by past conditions of reinforcement and stimuli correlated with upcoming conditions. PMID:21541121

  4. Measuring Timber Truck Loads With Image Processing In Paper Mills

    NASA Astrophysics Data System (ADS)

    Silva, M. Santos; Carvalho, Fernando D.; Rodrigues, F. Carvalho; Goncalves, Ana N. R.

    1989-04-01

    The raw material for the paper industry is wood. To keep an exact account of the stock of piled sawn tree trunks, every truck load entering the plant's stockyard must be measured for the amount of wood being brought in. Weighing the trucks has its own problems, mainly due to the high capacity of the tree trunks to absorb water. This problem is further enhanced when calculations must be made to arrive at the mass of sawn tree trunks which must go into the process of producing a certain quantity of paper pulp. The method presented here is based on two fixed cameras which take images of the truck load. One takes a view of the trunks in order to get information on the average length of the tree trunks. The other obtains a side view which is digitised, and by simply discriminating against a grey level, the area covered by the tree trunk cross sections is measured. A simple arithmetic operation gives the volume of wood in the truck. The same computer, a PC, registers the truck's particulars. The measurement is almost independent of whether the wood is wet or dry, and it serves trucks of any size.

  5. STARS: A general-purpose finite element computer program for analysis of engineering structures

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1984-01-01

    STARS (Structural Analysis Routines) is primarily an interactive, graphics-oriented, finite-element computer program for analyzing the static, stability, free vibration, and dynamic responses of damped and undamped structures, including rotating systems. The element library consists of one-dimensional (1-D) line elements, two-dimensional (2-D) triangular and quadrilateral shell elements, and three-dimensional (3-D) tetrahedral and hexahedral solid elements. These elements enable the solution of structural problems that include truss, beam, space frame, plane, plate, shell, and solid structures, or any combination thereof. Zero, finite, and interdependent deflection boundary conditions can be implemented by the program. The associated dynamic response analysis capability provides for initial deformation and velocity inputs, whereas the transient excitation may be either forces or accelerations. An effective in-core or out-of-core solution strategy is automatically employed by the program, depending on the size of the problem. Data input may be at random within a data set, and the program offers certain automatic data-generation features. Input data are formatted as an optimal combination of free and fixed formats. Interactive graphics capabilities enable convenient display of nodal deformations, mode shapes, and element stresses.

  6. Conditional Optimal Design in Three- and Four-Level Experiments

    ERIC Educational Resources Information Center

    Hedges, Larry V.; Borenstein, Michael

    2014-01-01

    The precision of estimates of treatment effects in multilevel experiments depends on the sample sizes chosen at each level. It is often desirable to choose sample sizes at each level to obtain the smallest variance for a fixed total cost, that is, to obtain optimal sample allocation. This article extends previous results on optimal allocation to…

  7. The square lattice Ising model on the rectangle II: finite-size scaling limit

    NASA Astrophysics Data System (ADS)

    Hucht, Alfred

    2017-06-01

    Based on the results published recently (Hucht 2017 J. Phys. A: Math. Theor. 50 065201), the universal finite-size contributions to the free energy of the square lattice Ising model on the L × M rectangle, with open boundary conditions in both directions, are calculated exactly in the finite-size scaling limit L, M → ∞, T → T_c, with fixed temperature scaling variable x ∝ (T/T_c − 1)M and fixed aspect ratio ρ ∝ L/M. We derive exponentially fast converging series for the related Casimir potential and Casimir force scaling functions. At the critical point T = T_c we confirm predictions from conformal field theory (Cardy and Peschel 1988 Nucl. Phys. B 300 377, Kleban and Vassileva 1991 J. Phys. A: Math. Gen. 24 3407). The presence of corners and the related corner free energy has a dramatic impact on the Casimir scaling functions and leads to a logarithmic divergence of the Casimir potential scaling function at criticality.

  8. Magnetic hyperthermia in water based ferrofluids: Effects of initial susceptibility and size polydispersity on heating efficiency

    NASA Astrophysics Data System (ADS)

    Lahiri, B. B.; Ranoo, Surojit; Muthukumaran, T.; Philip, John

    2018-04-01

    The effects of initial susceptibility and size polydispersity on magnetic hyperthermia efficiency in two water based ferrofluids containing phosphate and TMAOH coated superparamagnetic Fe3O4 nanoparticles were studied. Experiments were performed at a fixed frequency of 126 kHz on four different concentrations of both samples and under different external field amplitudes. It was observed that for field amplitudes beyond 45.0 kA m⁻¹, the maximum temperature rise was in the vicinity of 42°C (the hyperthermia limit), which indicated the suitability of the water based ferrofluids for hyperthermia applications. The maximum temperature rise and specific absorption rate were found to vary linearly with the square of the applied field amplitude, in accordance with theoretical predictions. It was further observed that for a fixed sample concentration, the specific absorption rate was higher for the phosphate coated sample, which was attributed to the higher initial static susceptibility and lower size polydispersity of the phosphate coated Fe3O4.

  9. Incorporation of Fixed Installation Costs into Optimization of Groundwater Remediation with a New Efficient Surrogate Nonlinear Mixed Integer Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Shoemaker, Christine; Wan, Ying

    2016-04-01

    Optimization of nonlinear water resources management problems that have a mixture of fixed (e.g., construction cost for a well) and variable (e.g., cost per gallon of water pumped) costs has not been well addressed, because prior algorithms for the resulting nonlinear mixed integer problems have required many groundwater simulations (with different configurations of decision variables), especially when the solution space is multimodal. In particular, heuristic methods like genetic algorithms have often been used in the water resources area, but they require so many groundwater simulations that only small systems have been solved. Hence there is a need for a method that reduces the number of expensive groundwater simulations. A recently published algorithm for nonlinear mixed integer programming using surrogates was shown in this study to greatly reduce the computational effort for obtaining accurate answers to problems involving fixed costs for well construction as well as variable costs for pumping, owing to a substantial reduction in the number of groundwater simulations required. Results are presented for a US EPA hazardous waste site. The nonlinear mixed integer surrogate algorithm is general and can be used on other problems arising in hydrology, with open source codes in Matlab and Python ("pySOT" on Bitbucket).

  10. UltraPse: A Universal and Extensible Software Platform for Representing Biological Sequences.

    PubMed

    Du, Pu-Feng; Zhao, Wei; Miao, Yang-Yang; Wei, Le-Yi; Wang, Likun

    2017-11-14

    With the avalanche of biological sequences in public databases, one of the most challenging problems in computational biology is to predict their biological functions and cellular attributes. Most existing prediction algorithms can only handle fixed-length numerical vectors, so it is important to be able to represent biological sequences of various lengths using fixed-length numerical vectors. Although several algorithms, as well as software implementations, have been developed to address this problem, these existing programs can only provide a fixed number of representation modes, and every time a new sequence representation mode is developed, a new program is needed. In this paper, we propose UltraPse as a universal software platform for this problem. UltraPse not only generates various existing sequence representation modes, but also simplifies future programming work in developing novel modes. Its extensibility is particularly enhanced: it allows users to define their own representation modes, their own physicochemical properties, or even their own types of biological sequences. Moreover, UltraPse is also the fastest software of its kind. The source code package, as well as executables for both Linux and Windows platforms, can be downloaded from the GitHub repository.
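
    To make the underlying idea concrete, the toy sketch below (our illustration, not UltraPse itself or one of its modes) maps a DNA sequence of any length to a fixed-length k-mer composition vector; the alphabet and k are assumptions.

        from itertools import product

        # Toy fixed-length representation: normalized k-mer composition.
        def kmer_vector(seq, k=2, alphabet="ACGT"):
            kmers = ["".join(p) for p in product(alphabet, repeat=k)]
            counts = {km: 0 for km in kmers}
            for i in range(len(seq) - k + 1):
                counts[seq[i:i + k]] += 1
            total = max(len(seq) - k + 1, 1)
            # Output length is 4**k, independent of the input length.
            return [counts[km] / total for km in kmers]

        print(len(kmer_vector("ACGTACGGT")))  # 16, regardless of sequence length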

  11. Getting to Yes.

    ERIC Educational Resources Information Center

    McMahon, Dennis O.

    This report describes a problem-solving approach to grievance settling and negotiations developed in the Brighton, Michigan, school district and inspired by the book, "Getting To Yes," by Roger Fisher and William Ury. In this approach teachers and administrators come to the table not with fixed positions but with problems both sides want…

  12. Radon Q & A. What You Need to Know.

    ERIC Educational Resources Information Center

    Bayham, Chris

    1994-01-01

    Because radon is the second leading cause of lung cancer in this country, the article presents a question and answer sheet on where radon comes from, which buildings are most likely to have radon, how to tell whether there is a problem, and expenses involved in testing and fixing problems. (SM)

  13. A Design Selection Procedure.

    ERIC Educational Resources Information Center

    Kroeker, Leonard P.

    The problem of blocking on a status variable was investigated. The one-way fixed-effects analysis of variance, analysis of covariance, and generalized randomized block designs each treat the blocking problem in a different way. In order to compare these designs, it is necessary to restrict attention to experimental situations in which observations…

  14. Y2K for Librarians: Exactly What You Need To Do.

    ERIC Educational Resources Information Center

    Doering, William

    1999-01-01

    Addresses how libraries can prepare for Y2K problems. Discusses technology that might be affected and equipment that should be examined, difficulty of fixing noncompliant hardware and software, identifying problem areas and developing solutions, and dealing with vendors. Includes a checklist of necessary preparations. (AEF)

  15. A Microcomputer-Based Network Optimization Package.

    DTIC Science & Technology

    1981-09-01

    from either cases a or c as Truncated-Newton directions. It can be shown [Ref. 27] that the TNCG algorithm is globally convergent and capable of...nonzero values of LGB indicate bounds at which arcs are fixed or reversed. Fixed arcs have negative T ( ) while free arcs have positive T ( ) values...Solution of Generalized Network Problems," Working Paper, Department of Finance and Business Economics , School of Business , University of Southern

  16. Identifying Acquisition Patterns of Failure Using Systems Archetypes

    DTIC Science & Technology

    2008-04-02

    ...any but the smallest programs, complete path coverage for defect detection is impractical. Adapted from Pressman, R.S., Software Engineering: A... "Firefighting" concept from "Past the Tipping Point"... "Fixes That Fail" systems archetype... Unintended Consequences...

  17. Nonlifting wing-body combinations with certain geometric restraints having minimum wave drag at low supersonic speeds

    NASA Technical Reports Server (NTRS)

    Lomax, Harvard

    1957-01-01

    Several variational problems involving optimum wing and body combinations having minimum wave drag for different kinds of geometrical restraints are analyzed. Particular attention is paid to the effect on the wave drag of shortening the fuselage and, for slender axially symmetric bodies, the effect of fixing the fuselage diameter at several points or even of fixing whole portions of its shape.

  18. Mass Estimation and Its Applications

    DTIC Science & Technology

    2012-02-23

    parameters); e.g., the rectangular kernel function has fixed width or fixed per unit size. But the rectangular function used in mass has no parameter...MassTER is implemented in JAVA, and we use DBSCAN in WEKA [13] and a version of DENCLUE implemented in R (www.r-project.org) in our empirical evaluation...Proceedings of SIGKDD, 2010, 989-998. [13] I.H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations

  19. From h to p efficiently: optimal implementation strategies for explicit time-dependent problems using the spectral/hp element method

    PubMed Central

    Bolis, A; Cantwell, C D; Kirby, R M; Sherwin, S J

    2014-01-01

    We investigate the relative performance of a second-order Adams–Bashforth scheme and second-order and fourth-order Runge–Kutta schemes when time stepping a 2D linear advection problem discretised using a spectral/hp element technique for a range of different mesh sizes and polynomial orders. Numerical experiments explore the effects of short (two wavelengths) and long (32 wavelengths) time integration for sets of uniform and non-uniform meshes. The choice of time-integration scheme and discretisation together fixes a CFL limit that imposes a restriction on the maximum time step, which can be taken to ensure numerical stability. The number of steps, together with the order of the scheme, affects not only the runtime but also the accuracy of the solution. Through numerical experiments, we systematically highlight the relative effects of spatial resolution and choice of time integration on performance and provide general guidelines on how best to achieve the minimal execution time in order to obtain a prescribed solution accuracy. The significant role played by higher polynomial orders in reducing CPU time while preserving accuracy becomes more evident, especially for uniform meshes, compared with what has been typically considered when studying this type of problem. © 2014 The Authors. International Journal for Numerical Methods in Fluids published by John Wiley & Sons, Ltd. PMID:25892840
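
    As a rough illustration of the CFL restriction discussed above, explicit spectral/hp schemes are often quoted with a stable time step that shrinks like 1/p² at fixed element size h; the constant C, the exact scaling, and all numbers below are indicative assumptions, not the paper's measured limits.

        # Hedged sketch of a CFL-type limit for explicit time stepping of
        # linear advection with a spectral/hp discretisation:
        # dt <= C * h / (a * p**2), h = element size, p = polynomial order,
        # a = advection speed, C = scheme-dependent constant (assumed).
        def max_stable_dt(h, p, a, C=1.0):
            return C * h / (a * p * p)

        for p in (2, 4, 8, 16):
            print(p, max_stable_dt(h=0.1, p=p, a=1.0))  # dt shrinks as p grows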

  20. Moments of catchment storm area

    NASA Technical Reports Server (NTRS)

    Eagleson, P. S.; Wang, Q.

    1985-01-01

    The portion of a catchment covered by a stationary rainstorm is modeled by the common area of two overlapping circles. Given that rain occurs within the catchment and conditioned by fixed storm and catchment sizes, the first two moments of the distribution of the common area are derived from purely geometrical considerations. The variance of the wetted fraction is shown to peak when the catchment size is equal to the size of the predominant storm. The conditioning on storm size is removed by assuming a probability distribution based upon the observed fractal behavior of cloud and rainstorm areas.
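
    The geometry behind these moments can be checked numerically; the sketch below (our illustration, not the paper's analytical derivation) estimates the first two moments of the common area of two overlapping circles by Monte Carlo, for assumed storm and catchment radii, conditioned on the storm overlapping the catchment.

        import random, math

        # Area of the lens common to two circles of radii r1, r2 with
        # center-to-center distance d (standard circle-circle intersection).
        def common_area(d, r1, r2):
            if d >= r1 + r2:
                return 0.0
            if d <= abs(r1 - r2):
                return math.pi * min(r1, r2) ** 2
            a1 = r1**2 * math.acos((d**2 + r1**2 - r2**2) / (2 * d * r1))
            a2 = r2**2 * math.acos((d**2 + r2**2 - r1**2) / (2 * d * r2))
            tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                                  * (d - r1 + r2) * (d + r1 + r2))
            return a1 + a2 - tri

        r_c, r_s, samples = 1.0, 0.6, []   # assumed catchment and storm radii
        while len(samples) < 50_000:
            # Storm center uniform over the disk within which overlap is possible.
            d = (r_c + r_s) * math.sqrt(random.random())
            a = common_area(d, r_c, r_s)
            if a > 0:
                samples.append(a)
        mean = sum(samples) / len(samples)
        var = sum((x - mean) ** 2 for x in samples) / len(samples)
        print(mean, var)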

  1. Analysis of Noise Mechanisms in Cell-Size Control.

    PubMed

    Modi, Saurabh; Vargas-Garcia, Cesar Augusto; Ghusinga, Khem Raj; Singh, Abhyudai

    2017-06-06

    At the single-cell level, noise arises from multiple sources, such as inherent stochasticity of biomolecular processes, random partitioning of resources at division, and fluctuations in cellular growth rates. How these diverse noise mechanisms combine to drive variations in cell size within an isoclonal population is not well understood. Here, we investigate the contributions of different noise sources in well-known paradigms of cell-size control, such as adder (division occurs after adding a fixed size from birth), sizer (division occurs after reaching a size threshold), and timer (division occurs after a fixed time from birth). Analysis reveals that variation in cell size is most sensitive to errors in partitioning of volume among daughter cells, and not surprisingly, this process is well regulated among microbes. Moreover, depending on the dominant noise mechanism, different size-control strategies (or a combination of them) provide efficient buffering of size variations. We further explore mixer models of size control, where a timer phase precedes/follows an adder, as has been proposed in Caulobacter crescentus. Although mixing a timer and an adder can sometimes attenuate size variations, it invariably leads to higher-order moments growing unboundedly over time. This results in a power-law distribution for the cell size, with an exponent that depends inversely on the noise in the timer phase. Consistent with theory, we find evidence of power-law statistics in the tail of C. crescentus cell-size distribution, although there is a discrepancy between the observed power-law exponent and that predicted from the noise parameters. The discrepancy, however, is removed after data reveal that the size added by individual newborns in the adder phase itself exhibits power-law statistics. Taken together, this study provides key insights into the role of noise mechanisms in size homeostasis, and suggests an inextricable link between timer-based models of size control and heavy-tailed cell-size distributions. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
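
    A hedged toy simulation of the three paradigms, with placeholder growth and noise parameters rather than the paper's fitted values, illustrates how the division rule shapes size variation; consistent with the unbounded higher moments noted above, the timer rule lets fluctuations grow much larger than the adder or sizer rules.

        import math, random

        # Toy simulation of adder / sizer / timer division rules with noisy
        # volume partitioning (all parameters are illustrative assumptions).
        def simulate(rule, n_gen=2000, noise=0.05):
            size, sizes = 1.0, []
            for _ in range(n_gen):
                birth = size
                if rule == "adder":    # divide after adding a fixed size
                    division = birth + 1.0
                elif rule == "sizer":  # divide on reaching a size threshold
                    division = 2.0
                else:                  # timer: grow exponentially for fixed time
                    division = birth * math.exp(0.7)
                frac = min(max(random.gauss(0.5, noise), 0.1), 0.9)
                size = division * frac  # noisy partitioning at division
                sizes.append(size)
            m = sum(sizes) / len(sizes)
            cv2 = sum((s - m) ** 2 for s in sizes) / (len(sizes) * m * m)
            return m, cv2

        for rule in ("adder", "sizer", "timer"):
            print(rule, simulate(rule))  # timer shows a far larger CV^2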

  2. Evaluation and Testing of the ADVANTG Code on SNM Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.

    2013-09-24

    Pacific Northwest National Laboratory (PNNL) has been tasked with evaluating the effectiveness of ORNL's new hybrid transport code, ADVANTG, on scenarios of interest to our NA-22 sponsor, specifically detection of diversion of special nuclear material (SNM). PNNL staff determined that acquisition and installation of ADVANTG were relatively straightforward for a code in its phase of development, but probably not yet sufficient for mass distribution to the general user. PNNL staff also determined that, with little effort, ADVANTG generated weight windows that typically worked for the problems and produced results consistent with MCNP. With the slightly greater effort of choosing a finer mesh around detectors or sample reaction tally regions, the figure of merit (FOM) could be further improved in most cases; this does take some limited knowledge of deterministic transport methods. The FOM could also be increased by limiting the energy range for a tally to the energy region of greatest interest. It was then found that an MCNP run with the full energy range for the tally showed improved statistics in the region used for the ADVANTG run. The specific case of interest chosen by the sponsor is the CIPN project from Los Alamos National Laboratory (LANL), an active interrogation, non-destructive assay (NDA) technique to quantify the fissile content in a spent fuel assembly that is also sensitive to cases of material diversion. Unfortunately, weight windows for the CIPN problem cannot currently be properly generated with ADVANTG due to inadequate accommodations for source definition. ADVANTG requires that a fixed neutron source be defined within the problem and cannot account for neutron multiplication. As such, it is rendered useless in active interrogation scenarios. It is also interesting to note that this is a difficult problem to solve and that the automated weight-window generator in MCNP actually slowed down the problem. PNNL therefore determined that there is no effective tool available for speeding up MCNP for problems such as the CIPN scenario. With regard to the benchmark scenarios, ADVANTG performed very well for most of the difficult, long-running, standard radiation detection scenarios. Specifically, run time speedups were observed for spatially large scenarios, or those having significant shielding or scattering geometries. ADVANTG performed on par with existing codes for moderate sized scenarios, or those with little to moderate shielding, or multiple paths to the detectors. ADVANTG ran slower than MCNP for very simple, spatially small cases with little to no shielding that run very quickly anyway. Lastly, ADVANTG could not solve problems that did not consist of fixed source-to-detector geometries; for example, it could not solve scenarios with multiple detectors or secondary particles, such as active interrogation, neutron-induced gammas, or fission neutrons.

  3. Are power calculations useful? A multicentre neuroimaging study

    PubMed Central

    Suckling, John; Henty, Julian; Ecker, Christine; Deoni, Sean C; Lombardo, Michael V; Baron-Cohen, Simon; Jezzard, Peter; Barnes, Anna; Chakrabarti, Bhismadev; Ooi, Cinly; Lai, Meng-Chuan; Williams, Steven C; Murphy, Declan GM; Bullmore, Edward

    2014-01-01

    There are now many reports of imaging experiments with small cohorts of typical participants that precede large-scale, often multicentre studies of psychiatric and neurological disorders. Data from these calibration experiments are sufficient to make estimates of statistical power and predictions of sample size and minimum observable effect sizes. In this technical note, we suggest how previously reported voxel-based power calculations can support decision making in the design, execution and analysis of cross-sectional multicentre imaging studies. The choice of MRI acquisition sequence, distribution of recruitment across acquisition centres, and changes to the registration method applied during data analysis are considered as examples. The consequences of modification are explored in quantitative terms by assessing the impact on sample size for a fixed effect size and detectable effect size for a fixed sample size. The calibration experiment dataset used for illustration was a precursor to the now complete Medical Research Council Autism Imaging Multicentre Study (MRC-AIMS). Validation of the voxel-based power calculations is made by comparing the predicted values from the calibration experiment with those observed in MRC-AIMS. The effect of non-linear mappings during image registration to a standard stereotactic space on the prediction is explored with reference to the amount of local deformation. In summary, power calculations offer a validated, quantitative means of making informed choices on important factors that influence the outcome of studies that consume significant resources. PMID:24644267
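
    For readers unfamiliar with such calculations, the sketch below shows the two complementary uses described above, computed for an ordinary two-sample t-test with the statsmodels package; the effect sizes, power, and sample sizes are illustrative assumptions, not MRC-AIMS values.

        from statsmodels.stats.power import TTestIndPower

        analysis = TTestIndPower()

        # Sample size per group needed for a fixed effect size:
        n = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
        print(f"n per group for d = 0.5: {n:.1f}")

        # Detectable effect size for a fixed sample size:
        d = analysis.solve_power(nobs1=40, power=0.8, alpha=0.05)
        print(f"detectable d with n = 40 per group: {d:.2f}")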

  4. Contrôle du vol longitudinal d'un avion civil avec satisfaction de qualités de manœuvrabilité [Longitudinal flight control of a civil aircraft with satisfaction of handling qualities]

    NASA Astrophysics Data System (ADS)

    Saussie, David Alexandre

    2010-03-01

    Fulfilling handling qualities remains a challenging problem in flight control design. These criteria, of differing natures, are derived from extensive experience with flight tests and data analysis, and they must be considered if one expects good behaviour of the aircraft. The goal of this thesis is to develop synthesis methods able to satisfy these criteria with fixed classical architectures imposed by the manufacturer, or with a new flight control architecture. This is applied to the longitudinal flight model of a Bombardier Inc. business jet, the Challenger 604. The first step of our work consists of compiling the most commonly used handling qualities in order to compare them. Special attention is devoted to the dropback criterion, for which theoretical analysis leads us to establish a practical formulation for synthesis purposes. Moreover, the comparison of the criteria through a reference model highlights dominant criteria that, once satisfied, ensure that the other criteria are satisfied too. Consequently, we are able to consider the fulfillment of these criteria in the fixed control architecture framework. Guardian maps (Saydy et al., 1990) are then considered to handle the problem. Originally intended for robustness studies, they are integrated into various algorithms for controller synthesis. Incidentally, this fixed architecture problem is similar to the static output feedback stabilization problem and to reduced-order controller synthesis. Algorithms performing stabilization and pole assignment in a specific region of the complex plane are then proposed. Afterwards, they are extended to handle the gain-scheduling problem. The controller is then scheduled through the entire flight envelope with respect to scheduling parameters. Thereafter, the fixed architecture is put aside while conserving only the same output signals. The main idea is to use H-infinity synthesis to obtain an initial controller satisfying the handling qualities thanks to reference model matching, robust to mass and center-of-gravity variations. Using robust modal control (Magni, 2002), we are able to substantially reduce the controller order and to structure it so as to come close to a classical architecture. An auto-scheduling method finally allows us to schedule the controller with respect to the scheduling parameters. Two different paths are used to solve the same problem; each one exhibits its own advantages and disadvantages.

  5. Isolation of exosomes by differential centrifugation: Theoretical analysis of a commonly used protocol

    NASA Astrophysics Data System (ADS)

    Livshits, Mikhail A.; Khomyakova, Elena; Evtushenko, Evgeniy G.; Lazarev, Vassili N.; Kulemin, Nikolay A.; Semina, Svetlana E.; Generozov, Edward V.; Govorun, Vadim M.

    2015-11-01

    Exosomes, small (40-100 nm) extracellular membranous vesicles, attract enormous research interest because they are carriers of disease markers and a prospective delivery system for therapeutic agents. Differential centrifugation, the prevalent method of exosome isolation, frequently produces dissimilar and improper results because of the faulty practice of using a common centrifugation protocol with different rotors. Moreover, as recommended by suppliers, adjusting the centrifugation duration according to rotor K-factors does not work for “fixed-angle” rotors. For both types of rotors - “swinging bucket” and “fixed-angle” - we express the theoretically expected proportion of pelleted vesicles of a given size and the “cut-off” size of completely sedimented vesicles as dependent on the centrifugation force and duration and the sedimentation path-lengths. The proper centrifugation conditions can be selected using relatively simple theoretical estimates of the “cut-off” sizes of vesicles. Experimental verification on exosomes isolated from HT29 cell culture supernatant confirmed the main theoretical statements. Measured by the nanoparticle tracking analysis (NTA) technique, the concentration and size distribution of the vesicles after centrifugation agree with those theoretically expected. To simplify this “cut-off”-size-based adjustment of centrifugation protocol for any rotor, we developed a web-calculator.
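
    A minimal version of such a "cut-off size" estimate for a swinging-bucket rotor follows from Stokes' law: a spherical vesicle starting at the meniscus just reaches the tube bottom within the run time. The sketch below uses this textbook relation with placeholder rotor, viscosity, and density values; it is not the authors' web-calculator.

        import math

        # Cut-off diameter (nm) of vesicles fully pelleted in a swinging-bucket
        # rotor: sedimentation from R_min (meniscus) to R_max (tube bottom) in
        # time t, via Stokes' law. All default values are assumptions.
        def cutoff_diameter_nm(omega_rpm, t_min, r_min_cm, r_max_cm,
                               visc=1.0e-3,   # Pa*s, water-like medium
                               d_rho=300.0):  # kg/m^3, particle minus fluid density
            omega = omega_rpm * 2 * math.pi / 60   # rad/s
            t = t_min * 60                         # s
            ln_path = math.log(r_max_cm / r_min_cm)
            d2 = 18 * visc * ln_path / (d_rho * omega**2 * t)
            return math.sqrt(d2) * 1e9             # m -> nm

        # e.g. an ultracentrifugation-style spin: 30,000 rpm, 70 min, 6-10 cm radii
        print(cutoff_diameter_nm(30000, 70, 6.0, 10.0))  # roughly tens of nm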

  6. 15N in tree rings as a bio-indicator of changing nitrogen cycling in tropical forests: an evaluation at three sites using two sampling methods

    PubMed Central

    van der Sleen, Peter; Vlam, Mart; Groenendijk, Peter; Anten, Niels P. R.; Bongers, Frans; Bunyavejchewin, Sarayudh; Hietz, Peter; Pons, Thijs L.; Zuidema, Pieter A.

    2015-01-01

    Anthropogenic nitrogen deposition is currently causing a more than twofold increase of reactive nitrogen input over large areas in the tropics. Elevated 15N abundance (δ15N) in the growth rings of some tropical trees has been hypothesized to reflect an increased leaching of 15N-depleted nitrate from the soil, following anthropogenic nitrogen deposition over the last decades. To find further evidence for altered nitrogen cycling in tropical forests, we measured long-term δ15N values in trees from Bolivia, Cameroon, and Thailand. We used two different sampling methods. In the first, wood samples were taken in a conventional way: from the pith to the bark across the stem of 28 large trees (the “radial” method). In the second, δ15N values were compared across a fixed diameter (the “fixed-diameter” method). We sampled 400 trees that differed widely in size, but measured δ15N in the stem around the same diameter (20 cm dbh) in all trees. As a result, the growth rings formed around this diameter differed in age and allowed a comparison of δ15N values over time with an explicit control for potential size-effects on δ15N values. We found a significant increase of tree-ring δ15N across the stem radius of large trees from Bolivia and Cameroon, but no change in tree-ring δ15N values over time was found in any of the study sites when controlling for tree size. This suggests that radial trends of δ15N values within trees reflect tree ontogeny (size development). However, for the trees from Cameroon and Thailand, a low statistical power in the fixed-diameter method prevents to conclude this with high certainty. For the trees from Bolivia, statistical power in the fixed-diameter method was high, showing that the temporal trend in tree-ring δ15N values in the radial method is primarily caused by tree ontogeny and unlikely by a change in nitrogen cycling. We therefore stress to account for tree size before tree-ring δ15N values can be properly interpreted. PMID:25914707

  7. (15)N in tree rings as a bio-indicator of changing nitrogen cycling in tropical forests: an evaluation at three sites using two sampling methods.

    PubMed

    van der Sleen, Peter; Vlam, Mart; Groenendijk, Peter; Anten, Niels P R; Bongers, Frans; Bunyavejchewin, Sarayudh; Hietz, Peter; Pons, Thijs L; Zuidema, Pieter A

    2015-01-01

    Anthropogenic nitrogen deposition is currently causing a more than twofold increase of reactive nitrogen input over large areas in the tropics. Elevated (15)N abundance (δ(15)N) in the growth rings of some tropical trees has been hypothesized to reflect an increased leaching of (15)N-depleted nitrate from the soil, following anthropogenic nitrogen deposition over the last decades. To find further evidence for altered nitrogen cycling in tropical forests, we measured long-term δ(15)N values in trees from Bolivia, Cameroon, and Thailand. We used two different sampling methods. In the first, wood samples were taken in a conventional way: from the pith to the bark across the stem of 28 large trees (the "radial" method). In the second, δ(15)N values were compared across a fixed diameter (the "fixed-diameter" method). We sampled 400 trees that differed widely in size, but measured δ(15)N in the stem around the same diameter (20 cm dbh) in all trees. As a result, the growth rings formed around this diameter differed in age and allowed a comparison of δ(15)N values over time with an explicit control for potential size-effects on δ(15)N values. We found a significant increase of tree-ring δ(15)N across the stem radius of large trees from Bolivia and Cameroon, but no change in tree-ring δ(15)N values over time was found in any of the study sites when controlling for tree size. This suggests that radial trends of δ(15)N values within trees reflect tree ontogeny (size development). However, for the trees from Cameroon and Thailand, a low statistical power in the fixed-diameter method prevents to conclude this with high certainty. For the trees from Bolivia, statistical power in the fixed-diameter method was high, showing that the temporal trend in tree-ring δ(15)N values in the radial method is primarily caused by tree ontogeny and unlikely by a change in nitrogen cycling. We therefore stress to account for tree size before tree-ring δ(15)N values can be properly interpreted.

  8. Parallel Fixed Point Implementation of a Radial Basis Function Network in an FPGA

    PubMed Central

    de Souza, Alisson C. D.; Fernandes, Marcelo A. C.

    2014-01-01

    This paper proposes a parallel fixed point radial basis function (RBF) artificial neural network (ANN), implemented in a field programmable gate array (FPGA) trained online with a least mean square (LMS) algorithm. The processing time and occupied area were analyzed for various fixed point formats. The problems of precision of the ANN response for nonlinear classification using the XOR gate and interpolation using the sine function were also analyzed in a hardware implementation. The entire project was developed using the System Generator platform (Xilinx), with a Virtex-6 xc6vcx240t-1ff1156 as the target FPGA. PMID:25268918
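
    The precision question can be mimicked in software by rounding values to a fixed number of fractional bits, as in the hedged sketch below (a toy model of quantization, not the System Generator design): the maximum output error of a 1-D Gaussian RBF generally shrinks as the fixed-point format widens.

        import math

        # Round a value to a fixed-point grid with the given fractional bits.
        def to_fixed(x, frac_bits):
            return round(x * (1 << frac_bits)) / (1 << frac_bits)

        def rbf(x, center, width):
            return math.exp(-((x - center) ** 2) / (2 * width ** 2))

        def rbf_fixed(x, center, width, frac_bits):
            xq, cq = to_fixed(x, frac_bits), to_fixed(center, frac_bits)
            d2 = to_fixed((xq - cq) ** 2, frac_bits)
            return to_fixed(math.exp(-d2 / (2 * width ** 2)), frac_bits)

        for bits in (4, 8, 12, 16):
            err = max(abs(rbf(x / 10, 0.3, 0.5) - rbf_fixed(x / 10, 0.3, 0.5, bits))
                      for x in range(-10, 11))
            print(bits, err)  # error drops as fractional bits increase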

  9. The importance of fixed costs in animal health systems.

    PubMed

    Tisdell, C A; Adamson, D

    2017-04-01

    In this paper, the authors detail the structure and optimal management of health systems as influenced by the presence and level of fixed costs. Unlike variable costs, fixed costs cannot be altered, and are thus independent of the level of veterinary activity in the short run. Their importance is illustrated by using both single-period and multi-period models. It is shown that multi-stage veterinary decision-making can often be envisaged as a sequence of fixed-cost problems. In general, it becomes clear that, the higher the fixed costs, the greater the net benefit of veterinary activity must be, if such activity is to be economic. The authors also assess the extent to which it pays to reduce fixed costs and to try to compensate for this by increasing variable costs. Fixed costs have major implications for the industrial structure of the animal health products industry and for the structure of the private veterinary services industry. In the former, they favour market concentration and specialisation in the supply of products. In the latter, they foster increased specialisation. While cooperation by individual farmers may help to reduce their individual fixed costs, the organisational difficulties and costs involved in achieving this cooperation can be formidable. In such cases, the only solution is government provision of veterinary services. Moreover, international cooperation may be called for. Fixed costs also influence the nature of the provision of veterinary education.

  10. Scaling in the vicinity of the four-state Potts fixed point

    NASA Astrophysics Data System (ADS)

    Blöte, H. W. J.; Guo, Wenan; Nightingale, M. P.

    2017-08-01

    We study a self-dual generalization of the Baxter-Wu model, employing results obtained by transfer matrix calculations of the magnetic scaling dimension and the free energy. While the pure critical Baxter-Wu model displays the critical behavior of the four-state Potts fixed point in two dimensions, in the sense that logarithmic corrections are absent, the introduction of different couplings in the up- and down triangles moves the model away from this fixed point, so that logarithmic corrections appear. Real couplings move the model into the first-order range, away from the behavior displayed by the nearest-neighbor, four-state Potts model. We also use complex couplings, which bring the model in the opposite direction characterized by the same type of logarithmic corrections as present in the four-state Potts model. Our finite-size analysis confirms in detail the existing renormalization theory describing the immediate vicinity of the four-state Potts fixed point.

  11. fixedTimeEvents: An R package for the distribution of distances between discrete events in fixed time

    NASA Astrophysics Data System (ADS)

    Liland, Kristian Hovde; Snipen, Lars

    When a series of Bernoulli trials occur within a fixed time frame or limited space, it is often interesting to assess if the successful outcomes have occurred completely at random, or if they tend to group together. One example, in genetics, is detecting grouping of genes within a genome. Approximations of the distribution of successes are possible, but they become inaccurate for small sample sizes. In this article, we describe the exact distribution of time between random, non-overlapping successes in discrete time of fixed length. A complete description of the probability mass function, the cumulative distribution function, mean, variance and recurrence relation is included. We propose an associated test for the over-representation of short distances and illustrate the methodology through relevant examples. The theory is implemented in an R package including probability mass, cumulative distribution, quantile function, random number generator, simulation functions, and functions for testing.
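
    A quick simulation conveys the quantity involved, under the assumption of point successes placed uniformly at random among a fixed number of discrete trials (which may differ in detail from the package's exact definitions): the distribution of distances between consecutive successes.

        import random
        from collections import Counter

        # Place k successes uniformly at random among n trial positions and
        # tabulate distances between consecutive successes (simulation sketch).
        def distance_distribution(n=100, k=5, reps=20_000):
            counts = Counter()
            for _ in range(reps):
                hits = sorted(random.sample(range(n), k))
                for a, b in zip(hits, hits[1:]):
                    counts[b - a] += 1
            total = sum(counts.values())
            return {d: c / total for d, c in sorted(counts.items())}

        dist = distance_distribution()
        print({d: round(p, 4) for d, p in list(dist.items())[:5]})  # short gaps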

  12. Practicality of electronic beam steering for MST/ST radars, part 6.2A

    NASA Technical Reports Server (NTRS)

    Clark, W. L.; Green, J. L.

    1984-01-01

    Electronic beam steering is often described as complex and expensive. The Sunset implementation of electronic steering is described, and it is demonstrated that such systems are cost effective, versatile, and no more complex than fixed-beam alternatives, provided three or more beams are needed. The problem of determining accurate meteorological wind components in the presence of spatial variation is considered. A cost comparison of steerable and fixed systems allowing solution of this problem is given. The concepts and relations involved in phase steering are given, followed by a description of the Sunset ST radar steering system. The implications are discussed, references to the competing SAD method are provided, and a recommendation concerning the design of future Doppler ST/MST systems is made.

  14. [Fractal features of soil particle size in the process of desertification in desert grassland of Ningxia, China].

    PubMed

    Yan, Xin; An, Hui

    2017-10-01

    The variation of soil properties, the fractal dimension of soil particle size, and the relationships between the fractal dimension of soil particle size and soil properties in the process of desertification in the desert grassland of Ningxia were examined. The results showed that the fractal dimension (D) at different desertification stages in the desert grassland varied greatly, with values of D between 1.69 and 2.62. Except for the 10-20 cm soil layer, the value of D in the 0-30 cm soil layers gradually declined with increasing desertification of the desert grassland. In the process of desertification, the grassland had the highest values of D and of the volume percentage of clay and silt, and the lowest values of the volume percentage of very fine sand and fine sand. The mobile dunes, in contrast, had the lowest values of D and of the volume percentage of clay and silt, and the highest values of the volume percentage of very fine sand and fine sand. There was a significant positive correlation between the soil fractal dimension and the volume percentage of soil particles <50 μm, and a significant negative correlation between the soil fractal dimension and the volume percentage of soil particles >50 μm. The grain size of 50 μm was thus the critical value deciding the relationship between the soil particle fractal dimension and the volume percentage. Soil organic matter (SOM) and total nitrogen (TN) decreased gradually with increasing desertification, while soil bulk density increased gradually. The qualitative change from fixed dunes to semi-fixed dunes was accompanied by a rapid decrease in the volume percentage of clay and silt, SOM, and TN, and a rapid increase in the volume percentage of very fine sand and fine sand and in soil bulk density. The fractal dimension was significantly correlated with SOM, TN, and soil bulk density. A fractal dimension of 2.58 was the critical value between fixed and semi-fixed dunes; thus, a fractal dimension of 2.58 could be taken as a desertification indicator for desert grassland.
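
    One common way such a particle-size fractal dimension is computed (a Tyler-Wheatcraft-style estimate with invented example numbers, not the paper's data) treats the cumulative volume fraction below size R as proportional to (R/R_max)^(3-D), so D follows from a log-log regression slope:

        import math

        # Illustrative particle-size data: class upper limits (um) and
        # cumulative volume fraction below each limit (assumed values).
        sizes_um = [2, 10, 50, 100, 250, 500]
        cum_frac = [0.04, 0.12, 0.38, 0.55, 0.82, 1.0]

        # Fit log(fraction) = (3 - D) * log(R / R_max) by least squares,
        # excluding the last point where the fraction is 1 by construction.
        xs = [math.log(r / sizes_um[-1]) for r in sizes_um[:-1]]
        ys = [math.log(f) for f in cum_frac[:-1]]
        n = len(xs)
        slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
                (n * sum(x * x for x in xs) - sum(xs) ** 2)
        print("D =", 3 - slope)  # ~2.4 for these made-up numbers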

  15. Computing eigenfunctions and eigenvalues of boundary-value problems with the orthogonal spectral renormalization method

    NASA Astrophysics Data System (ADS)

    Cartarius, Holger; Musslimani, Ziad H.; Schwarz, Lukas; Wunner, Günter

    2018-03-01

    The spectral renormalization method was introduced in 2005 as an effective way to compute ground states of nonlinear Schrödinger and Gross-Pitaevskii type equations. In this paper, we introduce an orthogonal spectral renormalization (OSR) method to compute ground and excited states (and their respective eigenvalues) of linear and nonlinear eigenvalue problems. The implementation of the algorithm follows four simple steps: (i) reformulate the underlying eigenvalue problem as a fixed-point equation, (ii) introduce a renormalization factor that controls the convergence properties of the iteration, (iii) perform a Gram-Schmidt orthogonalization process in order to prevent the iteration from converging to an unwanted mode, and (iv) compute the solution sought using a fixed-point iteration. The advantages of the OSR scheme over other known methods (such as Newton's and self-consistency) are (i) it allows the flexibility to choose large varieties of initial guesses without diverging, (ii) it is easy to implement especially at higher dimensions, and (iii) it can easily handle problems with complex and random potentials. The OSR method is implemented on benchmark Hermitian linear and nonlinear eigenvalue problems as well as linear and nonlinear non-Hermitian PT -symmetric models.
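
    The four steps can be illustrated on a plain linear eigenvalue problem; the sketch below uses a shifted power-style fixed-point map on a finite-difference 1-D harmonic oscillator as a stand-in for the authors' spectral implementation, with renormalization and Gram-Schmidt following steps (i)-(iv).

        import numpy as np

        n, L = 400, 20.0
        x = np.linspace(-L / 2, L / 2, n)
        dx = x[1] - x[0]
        # Finite-difference Hamiltonian H = -1/2 d^2/dx^2 + x^2/2
        H = (np.diag(np.full(n, 1.0 / dx**2 + 0.5 * x**2))
             + np.diag(np.full(n - 1, -0.5 / dx**2), 1)
             + np.diag(np.full(n - 1, -0.5 / dx**2), -1))
        shift = np.linalg.norm(H, 1)  # bounds the spectrum from above

        def osr_state(found, iters=5000):
            psi = np.random.default_rng(0).standard_normal(n)
            for _ in range(iters):
                psi = shift * psi - H @ psi   # (i) fixed-point map
                for phi in found:             # (iii) project out lower states
                    psi -= (phi @ psi) * phi
                psi /= np.linalg.norm(psi)    # (ii) renormalization
            return psi                        # (iv) converged mode

        states = []
        for k in range(3):
            psi = osr_state(states)
            states.append(psi)
            print(k, psi @ H @ psi)  # eigenvalues approach 0.5, 1.5, 2.5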

  16. Dynamic simulation solves process control problem in Oman

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-11-16

    A dynamic simulation study solved the process control problems for a Saih Rawl, Oman, gas compressor station operated by Petroleum Development of Oman (PDO). PDO encountered persistent compressor failure that caused frequent facility shutdowns, oil production deferment, and gas flaring. It commissioned MSE (Consultants) Ltd., U.K., to find a solution for the problem. Saih Rawl, about 40 km from Qarn Alam, produces oil and associated gas from a large number of low and high-pressure wells. Oil and gas are separated in three separators. The oil is pumped to Qarn Alam for treatment and export. Associated gas is compressed in two parallel trains. Train K-1115 is a 350,000 standard cu m/day, four-stage reciprocating compressor driven by a fixed-speed electric motor. Train K-1120 is a 1 million standard cu m/day, four-stage centrifugal compressor driven by a variable-speed motor. The paper describes tripping and surging problems with the gas compressors and the control simplifications that solved the problem.

  17. A systematic approach to the control of esthetic form.

    PubMed

    Preston, J D

    1976-04-01

    A systematic, orderly approach to the problem of establishing harmonious phonetics, esthetics, and function in fixed restorations has been described. The system requires an initial investment of time in performing an adequate diagnostic waxing, but recoups that time in many clinical and laboratory procedures. The method has proved a valuable asset in fixed prosthodontic care. The technique can be expanded and combined with other techniques with a little imagination and artistic bent.

  18. Intelligence/Electronic Warfare (IEW) direction-finding and fix estimation analysis report. Volume 2: Trailblazer

    NASA Technical Reports Server (NTRS)

    Gardner, Robert; Gillis, James W.; Griesel, Ann; Pardo, Bruce

    1985-01-01

    An analysis of the direction finding (DF) and fix estimation algorithms in TRAILBLAZER is presented. The TRAILBLAZER software analyzed is old and not currently used in the field. However, the algorithms analyzed are used in other current IEW systems. The underlying algorithm assumptions (including unmodeled errors) are examined along with their appropriateness for TRAILBLAZER. Coding and documentation problems are then discussed. A detailed error budget is presented.

  19. Bolt and nut evaluator

    NASA Technical Reports Server (NTRS)

    Kerley, James J. (Inventor); Burkhardt, Raymond (Inventor); White, Steven (Inventor)

    1994-01-01

    A device for testing fasteners such as nuts and bolts is described which consists of a fixed base plate having a number of threaded and unthreaded holes of varying size for receiving the fasteners to be tested, a torque marking paper taped on top of the fixed base plate for marking torque-angle indicia, a torque wrench for applying torque to the fasteners being tested, and an indicator for showing the torque applied to the fastener. These elements provide a low cost, nondestructive device for verifying the strength of bolts and nuts.

  20. Mixed quantum/classical theory of rotationally and vibrationally inelastic scattering in space-fixed and body-fixed reference frames

    NASA Astrophysics Data System (ADS)

    Semenov, Alexander; Babikov, Dmitri

    2013-11-01

    We formulated the mixed quantum/classical theory for rotationally and vibrationally inelastic scattering in the diatomic molecule + atom system. Two versions of the theory are presented: the first in the space-fixed and the second in the body-fixed reference frame. The first version is easy to derive and the resultant equations of motion are transparent, but the state-to-state transition matrix is complex-valued and dense. Such calculations may be computationally demanding for heavier molecules and/or higher temperatures, when the number of accessible channels becomes large. In contrast, the second version requires some tedious derivations and the final equations of motion are rather complicated (not particularly intuitive). However, the state-to-state transitions are driven by real-valued sparse matrices of much smaller size. Thus, this formulation is the method of choice from the computational point of view, while the space-fixed formulation can serve as a test of the body-fixed equations of motion and of the code. Rigorous numerical tests were carried out for a model system to ensure that all equations, matrices, and computer codes in both formulations are correct.

  1. Dose Rationalization of Pembrolizumab and Nivolumab Using Pharmacokinetic Modeling and Simulation and Cost Analysis.

    PubMed

    Ogungbenro, Kayode; Patel, Alkesh; Duncombe, Robert; Nuttall, Richard; Clark, James; Lorigan, Paul

    2018-04-01

    Pembrolizumab and nivolumab are highly selective anti-programmed cell death 1 (PD-1) antibodies approved for the treatment of advanced malignancies. Variable exposure and significant wastage have been associated with body size dosing of monoclonal antibodies (mAbs). The following dosing strategies were evaluated using simulations: body weight, dose banding, fixed dose, and pharmacokinetic (PK)-based methods. Relative to body weight dosing, the costs for band, fixed 150 mg, fixed 200 mg, and PK-derived strategies were -15%, -25%, +7%, and -16% for pembrolizumab, and -8%, -6%, and -10% for band, fixed, and PK-derived strategies for nivolumab, respectively. Relative to mg/kg doses, the median exposures were -1.0%, -4.6%, +27.1%, and +3.0% for band, fixed 150 mg, fixed 200 mg, and PK-derived strategies, respectively, for pembrolizumab, and -3.1%, +1.9%, and +1.4% for band, fixed 240 mg, and PK-derived strategies, respectively, for nivolumab. Significant wastage can be reduced by alternative dosing strategies without compromising exposure and efficacy. © 2017 American Society for Clinical Pharmacology and Therapeutics.
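
    The cost comparison behind these percentages reduces to vial-counting arithmetic over a weight distribution. A minimal sketch of how such a simulation might be set up, with an invented cohort, an illustrative mg/kg dose, vial size, and band edges (none of these values are from the study):

        import numpy as np

        rng = np.random.default_rng(0)
        weights = rng.normal(78, 14, 10_000).clip(45, 140)  # hypothetical cohort, kg

        def vials_needed(dose_mg, vial_mg=100):
            # whole vials must be opened; the remainder is wastage
            return np.ceil(dose_mg / vial_mg)

        mg_per_kg = 2.0                      # illustrative weight-based dose
        wb_dose = mg_per_kg * weights        # weight-based dosing
        band_dose = np.where(weights < 70, 140,
                             np.where(weights < 90, 160, 200))   # banded dosing
        fixed_dose = np.full_like(weights, 200.0)                # fixed 200 mg

        for name, dose in [("weight-based", wb_dose), ("banded", band_dose),
                           ("fixed 200 mg", fixed_dose)]:
            vials = vials_needed(dose).sum()
            rel = 100 * (vials / vials_needed(wb_dose).sum() - 1)
            print(f"{name:>12}: {vials:.0f} vials ({rel:+.1f}% vs weight-based)")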

  2. Multicenter evaluation of a synthetic single-crystal diamond detector for CyberKnife small field size output factors.

    PubMed

    Russo, Serenella; Masi, Laura; Francescon, Paolo; Frassanito, Maria Cristina; Fumagalli, Maria Luisa; Marinelli, Marco; Falco, Maria Daniela; Martinotti, Anna Stefania; Pimpinella, Maria; Reggiori, Giacomo; Verona Rinati, Gianluca; Vigorito, Sabrina; Mancosu, Pietro

    2016-04-01

    The aim of the present work was to evaluate small field size output factors (OFs) using the latest diamond detector commercially available, the PTW-60019 microDiamond, over different CyberKnife systems. OFs were also measured by the silicon detectors routinely used by each center, considered as reference. Five Italian CyberKnife centers performed OF measurements for field sizes ranging from 5 to 60 mm, defined by fixed circular collimators (5 centers) and by the Iris(™) variable aperture collimator (4 centers). Setup conditions were: 80 cm source to detector distance, and 1.5 cm depth in water. To speed up measurements, two diamond detectors were used and their equivalence was evaluated. Monte Carlo (MC) correction factors for silicon detectors were used for comparing the OF measurements. Considering OF values averaged over all centers, diamond data were lower than uncorrected silicon diode data. The agreement between diamond and MC corrected silicon values was within 0.6% for all fixed circular collimators. Relative differences between microDiamond and MC corrected silicon diode data for the Iris(™) collimator were lower than 1.0% for all apertures in all centers. The two microDiamond detectors showed similar characteristics, in agreement with the technical specifications. Excellent agreement between microDiamond and MC corrected silicon diode detector OFs was obtained for both collimation systems (fixed cones and Iris(™)), demonstrating that the microDiamond could be a suitable detector for CyberKnife commissioning and routine checks. These results obtained in five centers suggest that for CyberKnife systems the microDiamond can be used without corrections even at the smallest field size. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  3. Results obtained by the application of two different methods for the calculation of optimal coplanar orbital maneuvers with time limit

    NASA Astrophysics Data System (ADS)

    Rocco, Emr; Prado, Afbap; Souza, Mlos

    In this work, the problem of bi-impulsive orbital transfers between coplanar elliptical orbits with minimum fuel consumption but with a time limit for the transfer is studied. As a first method, the equations presented by Lawden (1993) were used. Those equations furnish the optimal transfer orbit with fixed transfer time between two coplanar elliptical orbits, considering fixed terminal points. The method was adapted to cases with free terminal points, and the equations were solved to develop software for orbital maneuvers. As a second method, the equations presented by Eckel and Vinh (1984) were used. Those equations provide the transfer orbit between non-coplanar elliptical orbits with minimum fuel and fixed transfer time, or minimum transfer time for a prescribed fuel consumption, considering free terminal points. In this work only the problem with fixed transfer time was considered; the case of minimum time for a prescribed fuel consumption was already studied in Rocco et al. (2000). The method was then modified to consider cases of coplanar orbital transfer, and software for orbital maneuvers was developed. Therefore, two software packages that solve the same problem using different methods were developed. The first method, presented by Lawden, uses the primer vector theory. The second method, presented by Eckel and Vinh, uses the ordinary theory of maxima and minima. To test the methods, we chose the same terminal orbits and the same time as input. We verified that the two methods do not yield exactly the same results. In this work, which is an extension of Rocco et al. (2002), these differences in the results are explored with the objective of determining the reason for their occurrence and which modifications should be made to eliminate them.

  4. Beyond Deficit: Graduate Student Research-Writing Pedagogies

    ERIC Educational Resources Information Center

    Badenhorst, Cecile; Moloney, Cecilia; Rosales, Janna; Dyer, Jennifer; Ru, Lina

    2015-01-01

    Graduate writing is receiving increasing attention, particularly in contexts of diverse student bodies and widening access to universities. In many of these contexts, writing is seen as "a problem" in need of fixing. Often, the problem and the solution are perceived as being solely located in notions of deficit in individuals and not in…

  5. Examining the Impact of Adaptively Faded Worked Examples on Student Learning Outcomes

    ERIC Educational Resources Information Center

    Flores, Raymond; Inan, Fethi

    2014-01-01

    The purpose of this study was to explore effective ways to design guided practices within a web-based mathematics problem solving tutorial. Specifically, this study examined student learning outcome differences between two support designs (e.g. adaptively faded and fixed). In the adaptively faded design, students were presented with problems in…

  6. Bending Back on High School Programs for Youth with Learning Disabilities

    ERIC Educational Resources Information Center

    Edgar, Eugene

    2005-01-01

    In this opinion piece, the author views several major problems facing those who care about students labeled as having learning disabilities (LD). He believes that while there are technical problems that educators should be able to fix (definition of LD, best instructional practices for students so identified, powerful secondary programs that…

  7. Earthquakes Threaten Many American Schools

    ERIC Educational Resources Information Center

    Bailey, Nancy E.

    2010-01-01

    Millions of U.S. children attend schools that are not safe from earthquakes, even though they are in earthquake-prone zones. Several cities and states have worked to identify and repair unsafe buildings, but many others have done little or nothing to fix the problem. The reasons for ignoring the problem include political and financial ones, but…

  8. Finite-time and fixed-time leader-following consensus for multi-agent systems with discontinuous inherent dynamics

    NASA Astrophysics Data System (ADS)

    Ning, Boda; Jin, Jiong; Zheng, Jinchuan; Man, Zhihong

    2018-06-01

    This paper is concerned with finite-time and fixed-time consensus of multi-agent systems in a leader-following framework. Different from conventional leader-following tracking approaches, where inherent dynamics satisfying a Lipschitz continuity condition are required, a more generalised case is investigated: discontinuous inherent dynamics. By nonsmooth techniques, a nonlinear protocol is first proposed to achieve finite-time leader-following consensus. Then, based on fixed-time stability strategies, the fixed-time leader-following consensus problem is solved. An upper bound on the settling time is obtained using a new protocol, and such a bound is independent of initial states, thereby providing additional options for designers in practical scenarios where initial conditions are unavailable. Finally, numerical simulations are provided to demonstrate the effectiveness of the theoretical results.
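
    For orientation, a generic fixed-time protocol from this literature combines a sub-linear and a super-linear signed power of the local tracking error, so the settling time is bounded independently of initial states. The sketch below simulates single-integrator followers with a static leader; the graph, gains, and exponents are invented, and this is an illustration of the general protocol family rather than the paper's specific design (which handles discontinuous inherent dynamics):

        import numpy as np

        def sig(x, a):
            # signed power: sign(x) * |x|^a
            return np.sign(x) * np.abs(x) ** a

        # adjacency of 4 followers (undirected path) and leader pinning gains
        A = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], float)
        b = np.array([1.0, 0.0, 0.0, 0.0])    # only agent 0 sees the leader
        x0 = 0.0                              # static leader state
        x = np.array([4.0, -2.0, 7.0, -5.0])  # follower initial states

        alpha, beta, p, q, dt = 2.0, 2.0, 0.5, 1.5, 1e-3
        for _ in range(int(5 / dt)):
            # local consensus errors: sum_j a_ij (x_i - x_j) + b_i (x_i - x0)
            xi = A.sum(1) * x - A @ x + b * (x - x0)
            x = x + dt * (-alpha * sig(xi, p) - beta * sig(xi, q))

        print("final follower states:", np.round(x, 4))  # all near the leader state 0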

  9. Simulations of string vibrations with boundary conditions of third kind using the functional transformation method

    NASA Astrophysics Data System (ADS)

    Trautmann, L.; Petrausch, S.; Bauer, M.

    2005-09-01

    The functional transformation method (FTM) is an established mathematical method for accurate simulation of multidimensional physical systems from various fields of science, including optics, heat and mass transfer, electrical engineering, and acoustics. It is a frequency-domain method based on the decomposition into eigenvectors and eigenfrequencies of the underlying physical problem. In this article, the FTM is applied to real-time simulations of vibrating strings which are ideally fixed at one end while the fixing at the other end is modeled by a frequency-dependent input impedance. Thus, boundary conditions of third kind are applied to the model at the end fixed with the input impedance. It is shown that accurate and stable simulations are achieved with nearly the same computational cost as with strings ideally fixed at both ends.

  10. Robust Control Design via Linear Programming

    NASA Technical Reports Server (NTRS)

    Keel, L. H.; Bhattacharyya, S. P.

    1998-01-01

    This paper deals with the problem of synthesizing or designing a feedback controller of fixed dynamic order. The closed loop specifications considered here are given in terms of a target performance vector representing a desired set of closed loop transfer functions connecting various signals. In general these point targets are unattainable with a fixed order controller. By enlarging the target from a fixed point set to an interval set the solvability conditions with a fixed order controller are relaxed and a solution is more easily enabled. Results from the parametric robust control literature can be used to design the interval target family so that the performance deterioration is acceptable, even when plant uncertainty is present. It is shown that it is possible to devise a computationally simple linear programming approach that attempts to meet the desired closed loop specifications.
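
    The relaxation from point to interval targets is what makes a linear programming formulation possible: if the achievable closed-loop performance vector is (locally) affine in the free controller parameters x, say y = A x + b, then meeting interval targets l <= y <= u with minimal uniform slack is a single LP. A schematic sketch with invented numbers (A, b, l, u are stand-ins, not plant data, and the affine map is an assumption made for illustration):

        import numpy as np
        from scipy.optimize import linprog

        # invented affine map from controller parameters x to performance vector y
        A = np.array([[1.0, 0.5], [0.3, -1.0], [0.8, 0.2]])
        b = np.array([0.1, -0.2, 0.4])
        lo = np.array([0.0, -1.0, 0.5])     # interval targets: lo <= y <= hi
        hi = np.array([1.0, 0.0, 1.5])

        # variables (x1, x2, t): minimize slack t subject to
        #   A x + b <= hi + t   and   A x + b >= lo - t
        c = np.array([0.0, 0.0, 1.0])
        A_ub = np.block([[A, -np.ones((3, 1))],      #  A x - t <= hi - b
                         [-A, -np.ones((3, 1))]])    # -A x - t <= b - lo
        b_ub = np.concatenate([hi - b, b - lo])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None), (None, None), (0, None)])
        x, t = res.x[:2], res.x[2]
        print("controller parameters:", np.round(x, 3), "| slack needed:", round(t, 4))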

  11. Customer-centered problem solving.

    PubMed

    Samelson, Q B

    1999-11-01

    If there is no single best way to attract new customers and retain current customers, there is surely an easy way to lose them: fail to solve the problems that arise in nearly every buyer-supplier relationship, or solve them in an unsatisfactory manner. Yet, all too frequently, companies do just that. Either we deny that a problem exists, we exert all our efforts to pin the blame elsewhere, or we "Band-Aid" the problem instead of fixing it, almost guaranteeing that we will face it again and again.

  12. Continuous-variable quantum cryptography is secure against non-Gaussian attacks.

    PubMed

    Grosshans, Frédéric; Cerf, Nicolas J

    2004-01-30

    A general study of arbitrary finite-size coherent attacks against continuous-variable quantum cryptographic schemes is presented. It is shown that, if the size of the blocks that can be coherently attacked by an eavesdropper is fixed and much smaller than the key size, then the optimal attack for a given signal-to-noise ratio in the transmission line is an individual Gaussian attack. Consequently, non-Gaussian coherent attacks do not need to be considered in the security analysis of such quantum cryptosystems.

  13. Influence of fragment size and postoperative joint congruency on long-term outcome of posterior malleolar fractures.

    PubMed

    Drijfhout van Hooff, Cornelis Christiaan; Verhage, Samuel Marinus; Hoogendoorn, Jochem Maarten

    2015-06-01

    One of the factors contributing to long-term outcome of posterior malleolar fractures is the development of osteoarthritis. Based on biomechanical, cadaveric, and small population studies, fixation of posterior malleolar fracture fragments (PMFFs) is usually performed when fragment size exceeds 25-33%. However, the influence of fragment size on long-term clinical and radiological outcome remains unclear. A retrospective cohort study of 131 patients treated for an isolated ankle fracture with involvement of the posterior malleolus was performed. Mean follow-up was 6.9 (range, 2.5-15.9) years. Patients were divided into groups depending on size of the fragment, small (<5%, n = 20), medium (5-25%, n = 86), or large (>25%, n = 25), and presence of step-off after operative treatment. We compared functional outcome measures (AOFAS, AAOS), pain (VAS), and dorsiflexion restriction relative to the contralateral ankle, as well as the incidence of osteoarthritis on X-ray. There were no nonunions, 56% of patients had no radiographic osteoarthritis, VAS was 10 of 100, and median clinical score was 90 of 100. More osteoarthritis occurred in ankle fractures with medium and large PMFFs compared to small fragments (small 16%, medium 48%, large 54%; P = .006), and also when comparing small with medium-sized fragments (P = .02). Larger fragment size did not lead to a significantly decreased function (median AOFAS 95 vs 88, P = .16). If the PMFF size was >5%, osteoarthritis occurred more frequently when there was a postoperative step-off ≥1 mm in the tibiotalar joint surface (41% vs 61%, P = .02), whether the posterior fragment had been fixed or not. In this group, fixing the PMFF did not influence development of osteoarthritis. However, in 42% of the cases with fixation of the fragment a postoperative step-off remained (vs 45% in the group without fixation). Osteoarthritis is 1 component of long-term outcome of malleolar fractures, and the results of this study demonstrate that there was more radiographic osteoarthritis in patients with medium and large posterior fragments than in those with small fragments. Radiographic osteoarthritis also occurred more frequently when postoperative step-off was 1 mm or more, whether the posterior fragment was fixed or not. However, clinical scores were not different for these groups. Level IV, retrospective case series. © The Author(s) 2015.

  14. High dimensional linear regression models under long memory dependence and measurement error

    NASA Astrophysics Data System (ADS)

    Kaul, Abhishek

    This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates the problems of interest. A brief literature review is also provided in this chapter. The second chapter investigates the properties of Lasso under long range dependent model errors. Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution. We then show the asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n), where p can increase exponentially with n. Finally, we show the n^(1/2-d)-consistency of Lasso, along with the oracle property of adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement error models, where covariates are unobservable and observations are possibly non sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions. We study these estimators in both the fixed dimension and high dimensional sparse setups; in the latter setup, the dimensionality can grow exponentially with the sample size. In the fixed dimensional setting we provide the oracle properties associated with the proposed estimators. In the high dimensional setting, we provide bounds for the statistical error associated with the estimation, which hold with asymptotic probability 1, thereby providing the ℓ1-consistency of the proposed estimator. We also establish the model selection consistency in terms of the correctly estimated zero components of the parameter vector. A simulation study that investigates the finite sample accuracy of the proposed estimator is also included in this chapter.
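
    The long memory moving average errors studied in the second chapter are easy to emulate: a moving average with hyperbolically decaying coefficients c_j proportional to j^(d-1) has long memory for d in (0, 1/2). A small sketch that generates such errors and checks Lasso's support recovery (d, the sizes, and the penalty are illustrative, and sklearn's Lasso stands in for the estimator analysed in the dissertation):

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(7)
        n, p, d = 500, 50, 0.3                  # d in (0, 1/2): long memory

        # long-memory moving average errors: e_t = sum_j c_j eta_{t-j}, c_j ~ j^(d-1)
        lags = 2000
        coefs = np.arange(1, lags + 1) ** (d - 1.0)
        eta = rng.standard_normal(n + lags)
        errors = np.array([coefs @ eta[t:t + lags][::-1] for t in range(n)])
        errors *= 0.5 / errors.std()            # scale to a moderate noise level

        beta = np.zeros(p)
        beta[:3] = [2.0, -1.5, 1.0]             # sparse truth
        X = rng.standard_normal((n, p))
        y = X @ beta + errors

        fit = Lasso(alpha=0.1).fit(X, y)
        print("true support:", [0, 1, 2],
              "| estimated support:", np.flatnonzero(fit.coef_).tolist())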

  15. Meta-analysis in evidence-based healthcare: a paradigm shift away from random effects is overdue.

    PubMed

    Doi, Suhail A R; Furuya-Kanamori, Luis; Thalib, Lukman; Barendregt, Jan J

    2017-12-01

    Each year up to 20 000 systematic reviews and meta-analyses are published whose results influence healthcare decisions, thus making the robustness and reliability of meta-analytic methods one of the world's top clinical and public health priorities. The evidence synthesis makes use of either fixed-effect or random-effects statistical methods. The fixed-effect method has largely been replaced by the random-effects method as heterogeneity of study effects led to poor error estimation. However, despite the widespread use and acceptance of the random-effects method to correct this, it too remains unsatisfactory and continues to suffer from defective error estimation, posing a serious threat to decision-making in evidence-based clinical and public health practice. We discuss here the problem with the random-effects approach and demonstrate that there exist better estimators under the fixed-effect model framework that can achieve optimal error estimation. We argue for an urgent return to the earlier framework with updates that address these problems and conclude that doing so can markedly improve the reliability of meta-analytical findings and thus decision-making in healthcare.
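
    Both estimates in this debate are inverse-variance weighted means; they differ only in the weights. A minimal sketch with invented study effects; the DerSimonian-Laird estimator shown here is the classic random-effects method the abstract criticizes, not the improved fixed-effect-framework estimators the authors advocate:

        import numpy as np

        y = np.array([0.30, 0.15, 0.50, 0.10, 0.45])   # hypothetical study effects
        v = np.array([0.01, 0.02, 0.05, 0.01, 0.03])   # their within-study variances

        # fixed-effect: weights are inverse within-study variances
        w_fe = 1 / v
        mu_fe = np.sum(w_fe * y) / np.sum(w_fe)

        # DerSimonian-Laird random effects: add a between-study variance tau^2
        Q = np.sum(w_fe * (y - mu_fe) ** 2)
        tau2 = max(0.0, (Q - (len(y) - 1)) /
                   (np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)))
        w_re = 1 / (v + tau2)
        mu_re = np.sum(w_re * y) / np.sum(w_re)

        print(f"fixed effect: {mu_fe:.3f} (SE {np.sum(w_fe)**-0.5:.3f})")
        print(f"random effects: {mu_re:.3f} (SE {np.sum(w_re)**-0.5:.3f}), tau^2 = {tau2:.4f}")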

  16. Fixed-Order Mixed Norm Designs for Building Vibration Control

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.; Calise, Anthony J.

    2000-01-01

    This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of H2 design to unmodeled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full order compensators that are robust to both unmodeled dynamics and to parametric uncertainty. Finally, a set of mixed H2/mu compensators are designed which are optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.

  17. Impact of ageing on problem size and proactive interference in arithmetic facts solving.

    PubMed

    Archambeau, Kim; De Visscher, Alice; Noël, Marie-Pascale; Gevers, Wim

    2018-02-01

    Arithmetic facts (AFs) are required when solving problems such as "3 × 4" and refer to calculations for which the correct answer is retrieved from memory. Currently, two important effects that modulate the performance in AFs have been highlighted: the problem size effect and the proactive interference effect. The aim of this study is to investigate possible age-related changes of the problem size effect and the proactive interference effect in AF solving. To this end, the performance of young and older adults was compared in a multiplication production task. Furthermore, an independent measure of proactive interference was assessed to further define the architecture underlying this effect in multiplication solving. The results indicate that both young and older adults were sensitive to the effects of interference and of the problem size. That is, both interference and problem size affected performance negatively: the time needed to solve a multiplication problem increases as the level of interference and the size of the problem increase. Regarding the effect of ageing, the problem size effect remains constant with age, indicating a preserved AF network in older adults. Interestingly, sensitivity to proactive interference in multiplication solving was less pronounced in older than in younger adults suggesting that part of the proactive interference has been overcome with age.

  18. Description of CASCOMP Comprehensive Airship Sizing and Performance Computer Program, Volume 2

    NASA Technical Reports Server (NTRS)

    Davis, J.

    1975-01-01

    The computer program CASCOMP, which may be used in comparative design studies of lighter than air vehicles by rapidly providing airship size and mission performance data, was prepared and documented. The program can be used to define design requirements such as weight breakdown, required propulsive power, and physical dimensions of airships which are designed to meet specified mission requirements. The program is also useful in sensitivity studies involving both design trade-offs and performance trade-offs. The input to the program primarily consists of a series of single point values such as hull overall fineness ratio, number of engines, airship hull and empennage drag coefficients, description of the mission profile, and weights of fixed equipment, fixed useful load and payload. In order to minimize computation time, the program makes ample use of optional computation paths.

  19. From particle condensation to polymer aggregation

    NASA Astrophysics Data System (ADS)

    Janke, Wolfhard; Zierenberg, Johannes

    2018-01-01

    We draw an analogy between droplet formation in dilute particle and polymer systems. Our arguments are based on finite-size scaling results from studies of a two-dimensional lattice gas to three-dimensional bead-spring polymers. To set the results in perspective, we compare with in part rigorous theoretical scaling laws for canonical condensation in a supersaturated gas at fixed temperature, and derive corresponding scaling predictions for an undercooled gas at fixed density. The latter allows one to efficiently employ parallel multicanonical simulations and to reach previously inaccessible scaling regimes. While the asymptotic scaling cannot be observed for the comparably small polymer system sizes, these systems demonstrate an intermediate scaling regime that is also observable for particle condensation. Altogether, our extensive results from computer simulations provide clear evidence for the close analogy between particle condensation and polymer aggregation in dilute systems.

  20. No difference in joint awareness after mobile- and fixed-bearing total knee arthroplasty: 3-year follow-up of a randomized controlled trial.

    PubMed

    Schotanus, M G M; Pilot, P; Vos, R; Kort, N P

    2017-12-01

    To compare the ability of patients, randomized to mobile- or fixed-bearing total knee arthroplasty (TKA), to forget the artificial knee joint in everyday life. This single-center randomized controlled trial evaluated the 3-year follow-up of the cemented mobile- and fixed-bearing TKA from the same brand in a series of 41 patients. Clinical examination took place preoperatively and at 6-week, 6-month, 1-, 2- and 3-year follow-up, using multiple patient-reported outcome measures (PROMs) including the 12-item Forgotten Joint Score (FJS-12) at 3 years. Effect size was calculated for each PROM at 3-year follow-up to quantify the size of the difference between both bearings. At 3-year follow-up, general linear mixed model analysis showed that there were no significant or clinically relevant differences between the two groups for all outcome measures. Calculated effect sizes were small (<0.3) for all the PROMs except for the FJS-12, which was moderate (0.5). The results of this study demonstrate that joint awareness was slightly lower in patients operated with the MB TKA, with comparable improved clinical outcome and PROMs at 3-year follow-up. Measuring joint awareness with the FJS-12 is useful, provides more stringent information at 3-year follow-up compared to other PROMs, and should be the PROM of choice at each follow-up after TKA. Level I, randomized controlled trial.

  1. An Investigation into Solution Verification for CFD-DEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fullmer, William D.; Musser, Jordan

    This report presents a study of the convergence behavior of the computational fluid dynamics-discrete element method (CFD-DEM), specifically National Energy Technology Laboratory's (NETL) open source MFiX code (MFiX-DEM) with a diffusion based particle-to-continuum filtering scheme. In particular, this study focused on determining if the numerical method had a solution in the high-resolution limit where the grid size is smaller than the particle size. To address this uncertainty, fixed particle beds of two primary configurations were studied: i) fictitious beds where the particles are seeded with a random particle generator, and ii) instantaneous snapshots from a transient simulation of an experimentally relevant problem. Both problems considered a uniform inlet boundary and a pressure outflow. The CFD grid was refined from a few particle diameters down to 1/6th of a particle diameter. The pressure drop between two vertical elevations, averaged across the bed cross-section, was considered as the system response quantity of interest. A least-squares regression method was used to extrapolate the grid-dependent results to an approximate "grid-free" solution in the limit of infinite resolution. The results show that the diffusion based scheme does yield a converging solution. However, the convergence is more complicated than encountered in simpler, single-phase flow problems, showing strong oscillations and, at times, oscillations superimposed on top of globally non-monotonic behavior. The challenging convergence behavior highlights the importance of using at least four grid resolutions in solution verification problems so that (over-determined) regression-based extrapolation methods may be applied to approximate the grid-free solution. The grid-free solution is very important in solution verification and VVUQ exercises in general, as the difference between it and the reference solution largely determines the numerical uncertainty. By testing different randomized particle configurations of the same general problem (for the fictitious case) or different instances of freezing a transient simulation, the numerical uncertainties appeared to be on the same order of magnitude as ensemble or time averaging uncertainties. By testing different drag laws, almost all cases studied show that model form uncertainty in this one, very important closure relation was larger than the numerical uncertainty, at least with a reasonable CFD grid, roughly five particle diameters. In this study, the diffusion width (filtering length scale) was mostly set at a constant of six particle diameters. A few exploratory tests were performed to show that similar convergence behavior was observed for diffusion widths greater than approximately two particle diameters. However, this subject was not investigated in great detail because determining an appropriate filter size is really a validation question which must be determined by comparison to experimental or highly accurate numerical data. Future studies are being considered targeting solution verification of transient simulations as well as validation of the filter size with direct numerical simulation data.
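
    The regression-based extrapolation described here fits a power-law error model to results at several grid spacings and reads off the zero-spacing intercept. A minimal sketch with fabricated pressure-drop numbers; the model phi(h) = phi0 + a*h^p is the standard generalized Richardson form, fit over more than three resolutions so the regression is over-determined, as the report recommends:

        import numpy as np
        from scipy.optimize import curve_fit

        def error_model(h, phi0, a, p):
            # generalized Richardson extrapolation: phi(h) -> phi0 as h -> 0
            return phi0 + a * h ** p

        # grid spacing in particle diameters and (fabricated) pressure drops, Pa
        h = np.array([5.0, 2.0, 1.0, 0.5, 1 / 6])
        phi = np.array([412.0, 437.0, 446.0, 449.5, 451.2])

        (phi0, a, p), _ = curve_fit(error_model, h, phi, p0=(phi[-1], -10.0, 1.0))
        print(f"grid-free estimate: {phi0:.1f} Pa, observed order p = {p:.2f}")
        print("numerical uncertainty ~", round(abs(phi[-1] - phi0), 2),
              "Pa on the finest grid")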

  2. Nanoscale imaging of whole cells using a liquid enclosure and a scanning transmission electron microscope.

    PubMed

    Peckys, Diana B; Veith, Gabriel M; Joy, David C; de Jonge, Niels

    2009-12-14

    Nanoscale imaging techniques are needed to investigate cellular function at the level of individual proteins and to study the interaction of nanomaterials with biological systems. We imaged whole fixed cells in liquid state with a scanning transmission electron microscope (STEM) using a micrometer-sized liquid enclosure with electron transparent windows providing a wet specimen environment. Wet-STEM images were obtained of fixed E. coli bacteria labeled with gold nanoparticles attached to surface membrane proteins. Mammalian cells (COS7) were incubated with gold-tagged epidermal growth factor and fixed. STEM imaging of these cells resulted in a resolution of 3 nm for the gold nanoparticles. The wet-STEM method has several advantages over conventional imaging techniques. Most important is the capability to image whole fixed cells in a wet environment with nanometer resolution, which can be used, e.g., to map individual protein distributions in/on whole cells. The sample preparation is compatible with that used for fluorescent microscopy on fixed cells for experiments involving nanoparticles. Thirdly, the system is rather simple and involves only minimal new equipment in an electron microscopy (EM) laboratory.

  3. The neural bases of the multiplication problem-size effect across countries

    PubMed Central

    Prado, Jérôme; Lu, Jiayan; Liu, Li; Dong, Qi; Zhou, Xinlin; Booth, James R.

    2013-01-01

    Multiplication problems involving large numbers (e.g., 9 × 8) are more difficult to solve than problems involving small numbers (e.g., 2 × 3). Behavioral research indicates that this problem-size effect might be due to different factors across countries and educational systems. However, there is no neuroimaging evidence supporting this hypothesis. Here, we compared the neural correlates of the multiplication problem-size effect in adults educated in China and the United States. We found a greater neural problem-size effect in Chinese than American participants in bilateral superior temporal regions associated with phonological processing. However, we found a greater neural problem-size effect in American than Chinese participants in right intra-parietal sulcus (IPS) associated with calculation procedures. Therefore, while the multiplication problem-size effect might be a verbal retrieval effect in Chinese as compared to American participants, it may instead stem from the use of calculation procedures in American as compared to Chinese participants. Our results indicate that differences in educational practices might affect the neural bases of symbolic arithmetic. PMID:23717274

  4. Relating the defect band gap and the density functional band gap

    NASA Astrophysics Data System (ADS)

    Schultz, Peter; Edwards, Arthur

    2014-03-01

    Density functional theory (DFT) is an important tool to probe the physics of materials. The Kohn-Sham (KS) gap in DFT is typically (much) smaller than the observed band gap for materials in nature, the infamous "band gap problem." Accurate prediction of defect energy levels is often claimed to be a casualty: the band gap defines the energy scale for defect levels. By applying rigorous control of boundary conditions in size-converged supercell calculations, however, we compute defect levels in Si and GaAs with accuracies of ~0.1 eV, across the full gap, unhampered by a band gap problem. Using GaAs as a theoretical laboratory, we show that the defect band gap, the span of computed defect levels, is insensitive to variations in the KS gap (with functional and pseudopotential), these KS gaps ranging from 0.1 to 1.1 eV. The defect gap matches the experimental 1.52 eV gap. The computed defect gaps for several other III-V, II-VI, I-VII, and other compounds also agree with the experimental gap, and show no correlation with the KS gap. Where, then, is the band gap problem? This talk presents these results and discusses why the defect gap and the KS gap are distinct, implying that current understanding of what the "band gap problem" means, and how to "fix" it, needs to be rethought. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the U.S. Department of Energy's NNSA under contract DE-AC04-94AL85000.

  5. Adversarial reasoning and resource allocation: the LG approach

    NASA Astrophysics Data System (ADS)

    Stilman, Boris; Yakhnis, Vladimir; Umanskiy, Oleg; Boyd, Ron

    2005-05-01

    Many existing automated tools purporting to model the intelligent enemy utilize a fixed battle plan for the enemy while using flexible decisions of human players for the friendly side. According to the Naval Studies Board, "It is an open secret and a point of distress ... that too much of the substantive content of such M&S has its origin in anecdote, ..., or a narrow construction tied to stereotypical current practices of 'doctrinally correct behavior.'" Clearly, such runs lack objectivity by being heavily skewed in favor of the friendly forces. Presently, the military branches employ a variety of game-based simulators and synthetic environments, with manual (i.e., user-based) decision-making, for training and other purposes. However, without an ability to automatically generate the best strategies, tactics, and courses of action (COA), the games serve mostly to display the current situation rather than form a basis for automated decision-making and effective training. We solve the problem of adversarial reasoning as a gaming problem, employing Linguistic Geometry (LG), a new type of game theory demonstrating a significant increase in the size of gaming problems solvable in real and near-real time. It appears to be a viable approach for solving such practical problems as mission planning and battle management. Essentially, LG may be structured into two layers: game construction and game solving. Game construction includes construction of a game called an LG hypergame based on a hierarchy of Abstract Board Games (ABG). Game solving includes resource allocation for constructing an advantageous initial game state and strategy generation to reach a desirable final game state in the course of the game.

  6. Biomechanical considerations on tooth-implant supported fixed partial dentures

    PubMed Central

    Calvani, Pasquale; Hirayama, Hiroshi

    2012-01-01

    This article discusses the connection of teeth to implants, in order to restore partial edentulism. The main problem arising from this connection is tooth intrusion, which can occur in up to 7.3% of the cases. The justification of this complication is being attempted through the perspective of biomechanics of the involved anatomical structures, that is, the periodontal ligament and the bone, as well as that of the teeth- and implant-supported fixed partial dentures. PMID:23255882

  7. A unified monolithic approach for multi-fluid flows and fluid-structure interaction using the Particle Finite Element Method with fixed mesh

    NASA Astrophysics Data System (ADS)

    Becker, P.; Idelsohn, S. R.; Oñate, E.

    2015-06-01

    This paper describes a strategy to solve multi-fluid and fluid-structure interaction (FSI) problems using Lagrangian particles combined with a fixed finite element (FE) mesh. Our approach is an extension of the fluid-only PFEM-2 (Idelsohn et al., Eng Comput 30(2):2-2, 2013; Idelsohn et al., J Numer Methods Fluids, 2014) which uses explicit integration over the streamlines to improve accuracy. As a result, the convective term does not appear in the set of equations solved on the fixed mesh. Enrichments in the pressure field are used to improve the description of the interface between phases.

  8. Analytical pricing of geometric Asian power options on an underlying driven by a mixed fractional Brownian motion

    NASA Astrophysics Data System (ADS)

    Zhang, Wei-Guo; Li, Zhe; Liu, Yong-Jun

    2018-01-01

    In this paper, we study the pricing problem of the continuously monitored fixed and floating strike geometric Asian power options in a mixed fractional Brownian motion environment. First, we derive both closed-form solutions and mixed fractional partial differential equations for fixed and floating strike geometric Asian power options based on delta-hedging strategy and partial differential equation method. Second, we present the lower and upper bounds of the prices of fixed and floating strike geometric Asian power options under the assumption that both risk-free interest rate and volatility are interval numbers. Finally, numerical studies are performed to illustrate the performance of our proposed pricing model.
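
    Closed-form prices of this kind are usually sanity-checked against simulation. A Monte Carlo sketch for a fixed-strike geometric Asian call in the pure-Brownian special case of the model (the fractional component is dropped, and S0, K, r, sigma, and the time grid are illustrative); the geometric average of lognormal prices is again lognormal, which is why closed forms exist at all:

        import numpy as np

        rng = np.random.default_rng(42)
        S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
        n_steps, n_paths = 100, 50_000
        dt = T / n_steps

        # simulate GBM log-paths; the mean of the logs is the log of the
        # geometric average along each path
        z = rng.standard_normal((n_paths, n_steps))
        log_paths = np.log(S0) + np.cumsum(
            (r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z, axis=1)
        geo_avg = np.exp(log_paths.mean(axis=1))

        payoff = np.maximum(geo_avg - K, 0.0)
        price = np.exp(-r * T) * payoff.mean()
        stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
        print(f"geometric Asian call ~ {price:.3f} +/- {1.96 * stderr:.3f}")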

  9. Seasonal variations in the diversity and abundance of diazotrophic communities across soils.

    PubMed

    Pereira e Silva, Michele C; Semenov, Alexander V; van Elsas, Jan Dirk; Salles, Joana Falcão

    2011-07-01

    The nitrogen (N)-fixing community is a key functional community in soil, as it replenishes the pool of biologically available N that is lost to the atmosphere via anaerobic ammonium oxidation and denitrification. We characterized the structure and dynamic changes in diazotrophic communities, based on the nifH gene, across eight different representative Dutch soils during one complete growing season, to evaluate the amplitude of the natural variation in abundance and diversity, and identify possible relationships with abiotic factors. Overall, our results indicate that soil type is the main factor influencing the N-fixing communities, which were more abundant and diverse in the clay soils (n=4) than in the sandy soils (n=4). On average, the amplitude of variation in community size as well as the range-weighted richness were also found to be higher in the clay soils. These results indicate that N-fixing communities associated with sandy and clay soil show a distinct amplitude of variation under field conditions, and suggest that the diazotrophic communities associated with clay soil might be more sensitive to fluctuations associated with the season and agricultural practices. Moreover, soil characteristics such as ammonium content, pH and texture most strongly correlated with the variations observed in the diversity, size and structure of N-fixing communities, whose relative importance was determined across a temporal and spatial scale. © 2011 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.

  10. Topological analysis of the motion of an ellipsoid on a smooth plane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivochkin, M Yu

    2008-06-30

    The problem of the motion of a dynamically and geometrically symmetric heavy ellipsoid on a smooth horizontal plane is investigated. The problem is integrable and can be considered a generalization of the problem of motion of a heavy rigid body with fixed point in the Lagrangian case. The Smale bifurcation diagrams are constructed. Surgeries of tori are investigated using methods developed by Fomenko and his students. Bibliography: 9 titles.

  11. Robust model selection and the statistical classification of languages

    NASA Astrophysics Data System (ADS)

    García, J. E.; González-López, V. A.; Viola, M. L. L.

    2012-10-01

    In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet, when the data is contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that, for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we will focus on the family of variable length Markov chain models, which includes the fixed order Markov chain model family. We define the asymptotic breakdown point (ABDP) for a model selection procedure, and we show the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows, our procedure selects a model for the process with law Q. We also use our procedure in a setting where we have one sample formed by the concatenation of sub-samples of two or more stochastic processes, with most of the sub-samples having law Q. We conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty in this problem is that the speech samples correspond to several sentences produced by diverse speakers, yielding a mixture of distributions. The usual procedure to deal with this problem has been to choose a subset of the original sample which seems to best represent each language. The selection is made by listening to the samples. In our application we use the full dataset without any preselection of samples. We apply our robust methodology, estimating a model which represents the main law for each language. Our findings agree with the linguistic conjecture related to the rhythm of the languages included in our dataset.
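
    The selection step can be illustrated on the simplest members of the model family, fixed order (memory-1) Markov chains over a binary alphabet: estimate each sample's transition matrix, compute pairwise relative entropy rates, and keep the majority cluster. A toy sketch under those simplifying assumptions (the actual procedure covers variable length Markov chains and comes with the breakdown-point guarantee):

        import numpy as np

        def transition_matrix(seq, k=2):
            counts = np.ones((k, k))          # Laplace smoothing avoids log(0)
            for a, b in zip(seq[:-1], seq[1:]):
                counts[a, b] += 1
            return counts / counts.sum(axis=1, keepdims=True)

        def kl_rate(P, Q, pi):
            # relative entropy rate between chains with transitions P, Q,
            # weighted by the stationary (here: empirical) state frequencies pi
            return float(np.sum(pi[:, None] * P * np.log(P / Q)))

        rng = np.random.default_rng(1)
        def simulate(P, n=2000):
            x, out = 0, []
            for _ in range(n):
                x = rng.choice(2, p=P[x])
                out.append(x)
            return out

        P_good = np.array([[0.9, 0.1], [0.2, 0.8]])   # majority law Q
        P_bad = np.array([[0.5, 0.5], [0.5, 0.5]])    # contamination
        samples = [simulate(P_good) for _ in range(5)] + \
                  [simulate(P_bad) for _ in range(2)]

        Ps = [transition_matrix(s) for s in samples]
        pis = [np.bincount(s, minlength=2) / len(s) for s in samples]
        D = np.array([[kl_rate(Ps[i], Ps[j], pis[i]) for j in range(7)]
                      for i in range(7)])
        # keep the samples whose median divergence to the others is smallest
        med = np.median(D + D.T, axis=1)
        print("selected (majority) samples:", sorted(np.argsort(med)[:5].tolist()))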

  12. RT-PCR analysis of RNA extracted from Bouin-fixed and paraffin-embedded lymphoid tissues.

    PubMed

    Gloghini, Annunziata; Canal, Barbara; Klein, Ulf; Dal Maso, Luigino; Perin, Tiziana; Dalla-Favera, Riccardo; Carbone, Antonino

    2004-11-01

    In the present study, we have investigated whether RNA can be efficiently isolated from Bouin-fixed or formalin-fixed, paraffin-embedded lymphoid tissue specimens. To this aim, we applied a new and simple method that combines proteinase K digestion and column purification. By this method, we demonstrated that the amplification of long fragments could be accomplished after a pre-heating step before cDNA synthesis, associated with the use of enzymes that work at high temperature. By means of PCR using different primers for the two examined genes (glyceraldehyde-3-phosphate dehydrogenase [GAPDH] and CD40), we amplified segments of cDNA obtained by reverse transcription of the RNA extracted from Bouin-fixed or formalin-fixed paraffin-embedded tissues. Amplified fragments of the expected sizes were obtained for both genes tested, indicating that this method is suitable for the isolation of high-quality RNA. To explore the possibility of giving accurate real-time quantitative RT-PCR results, cDNA obtained from matched frozen, Bouin-fixed and formalin-fixed neoplastic samples (two diffuse large cell lymphomas, one plasmacytoma) was tested for the following target genes: CD40, Aquaporin-3, BLIMP1, IRF4, Syndecan-1. Delta threshold cycle (DeltaC(T)) values for Bouin-fixed and formalin-fixed paraffin-embedded tissues showed an extremely high correlation with those for frozen samples (r > 0.90) for all of the tested genes. These results show that the proposed method of RNA extraction is suitable for giving accurate real-time quantitative RT-PCR results.

  13. Demonstration of Multi- and Single-Reader Sample Size Program for Diagnostic Studies software.

    PubMed

    Hillis, Stephen L; Schartz, Kevin M

    2015-02-01

    The recently released software Multi- and Single-Reader Sample Size Program for Diagnostic Studies, written by Kevin Schartz and Stephen Hillis, performs sample size computations for diagnostic reader-performance studies. The program computes the sample size needed to detect a specified difference in a reader performance measure between two modalities, when using the analysis methods initially proposed by Dorfman, Berbaum, and Metz (DBM) and Obuchowski and Rockette (OR), and later unified and improved by Hillis and colleagues. A commonly used reader performance measure is the area under the receiver-operating-characteristic curve. The program can be used with typical common reader-performance measures which can be estimated parametrically or nonparametrically. The program has an easy-to-use step-by-step intuitive interface that walks the user through the entry of the needed information. Features of the software include the following: (1) choice of several study designs; (2) choice of inputs obtained from either OR or DBM analyses; (3) choice of three different inference situations: both readers and cases random, readers fixed and cases random, and readers random and cases fixed; (4) choice of two types of hypotheses: equivalence or noninferiority; (5) choice of two output formats: power for specified case and reader sample sizes, or a listing of case-reader combinations that provide a specified power; (6) choice of single or multi-reader analyses; and (7) functionality in Windows, Mac OS, and Linux.

  14. Eigenvalue problems for Beltrami fields arising in a three-dimensional toroidal magnetohydrodynamic equilibrium problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hudson, S. R.; Hole, M. J.; Dewar, R. L.

    2007-05-15

    A generalized energy principle for finite-pressure, toroidal magnetohydrodynamic (MHD) equilibria in general three-dimensional configurations is proposed. The full set of ideal-MHD constraints is applied only on a discrete set of toroidal magnetic surfaces (invariant tori), which act as barriers against leakage of magnetic flux, helicity, and pressure through chaotic field-line transport. It is argued that a necessary condition for such invariant tori to exist is that they have fixed, irrational rotational transforms. In the toroidal domains bounded by these surfaces, full Taylor relaxation is assumed, thus leading to Beltrami fields ∇ × B = λB, where λ is constant within each domain. Two distinct eigenvalue problems for λ arise in this formulation, depending on whether fluxes and helicity are fixed, or boundary rotational transforms. These are studied in cylindrical geometry and in a three-dimensional toroidal region of annular cross section. In the latter case, an application of a residue criterion is used to determine the threshold for connected chaos.

  15. On solving wave equations on fixed bounded intervals involving Robin boundary conditions with time-dependent coefficients

    NASA Astrophysics Data System (ADS)

    van Horssen, Wim T.; Wang, Yandong; Cao, Guohua

    2018-06-01

    In this paper, it is shown how characteristic coordinates, or equivalently how the well-known formula of d'Alembert, can be used to solve initial-boundary value problems for wave equations on fixed, bounded intervals involving Robin type of boundary conditions with time-dependent coefficients. A Robin boundary condition is a condition that specifies a linear combination of the dependent variable and its first order space-derivative on a boundary of the interval. Analytical methods, such as the method of separation of variables (SOV) or the Laplace transform method, are not applicable to those types of problems. The obtained analytical results by applying the proposed method, are in complete agreement with those obtained by using the numerical, finite difference method. For problems with time-independent coefficients in the Robin boundary condition(s), the results of the proposed method also completely agree with those as for instance obtained by the method of separation of variables, or by the finite difference method.

  16. Improved Results for Route Planning in Stochastic Transportation Networks

    NASA Technical Reports Server (NTRS)

    Boyan, Justin; Mitzenmacher, Michael

    2000-01-01

    In the bus network problem, the goal is to generate a plan for getting from point X to point Y within a city using buses in the smallest expected time. Because bus arrival times are not determined by a fixed schedule but instead may be random, the problem requires more than standard shortest path techniques. In recent work, Datar and Ranade provide algorithms for the case where bus arrivals are assumed to be independent and exponentially distributed. We offer solutions to two important generalizations of the problem, answering open questions posed by Datar and Ranade. First, we provide a polynomial time algorithm for a much wider class of arrival distributions, namely those with increasing failure rate. This class includes not only exponential distributions but also uniform, normal, and gamma distributions. Second, in the case where bus arrival times are independent geometric discrete random variables, we provide an algorithm for transportation networks of buses and trains, where trains run according to a fixed schedule.
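
    Under the independent exponential assumption, the key local computation is choosing which set S of bus lines to board at a stop: if line i has arrival rate lambda_i and expected onward time t_i, boarding whichever line in S arrives first gives expected time (1 + sum_{i in S} lambda_i t_i) / sum_{i in S} lambda_i, and the optimal S is found by adding lines in increasing order of t_i while they improve this value. A sketch of that rule with invented rates and times (a simplified ingredient of such algorithms, not Datar and Ranade's full method):

        def best_boarding_set(lines):
            """lines: list of (rate lambda_i per minute, onward time t_i in minutes).
            Returns (optimal expected time, lines to board), assuming independent
            exponential inter-arrival times on each line."""
            lines = sorted(lines, key=lambda lt: lt[1])   # fastest onward first
            lam_sum = tot = 0.0
            chosen = []
            best = float("inf")
            for lam, t in lines:
                # board this line only if its onward time beats the current value
                if t >= best:
                    break
                lam_sum += lam
                tot += lam * t
                best = (1 + tot) / lam_sum
                chosen.append((lam, t))
            return best, chosen

        ev, subset = best_boarding_set([(0.2, 30), (0.1, 25), (0.05, 60)])
        print(f"expected time {ev:.1f} min boarding {len(subset)} of 3 lines")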

  17. An hp symplectic pseudospectral method for nonlinear optimal control

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Xinwei; Li, Mingwu; Chen, Biaosong

    2017-01-01

    An adaptive symplectic pseudospectral method based on the dual variational principle is proposed and successfully applied to solving nonlinear optimal control problems in this paper. The proposed method satisfies the first order necessary conditions of continuous optimal control problems, and the symplectic property of the original continuous Hamiltonian system is preserved. The original optimal control problem is transformed into a set of nonlinear equations which can be solved easily by Newton-Raphson iterations, and the Jacobian matrix is found to be sparse and symmetric. The proposed method, on one hand, exhibits exponential convergence rates when the number of collocation points is increased with a fixed number of sub-intervals; on the other hand, it exhibits linear convergence rates when the number of sub-intervals is increased with a fixed number of collocation points. Furthermore, combined with the hp method based on the residual error of the dynamic constraints, the proposed method can achieve given precisions in a few iterations. Five examples highlight the high precision and high computational efficiency of the proposed method.

  18. Pace's Maxims for Homegrown Library Projects. Coming Full Circle

    ERIC Educational Resources Information Center

    Pace, Andrew K.

    2005-01-01

    This article discusses six maxims by which to run library automation. The following maxims are discussed: (1) Solve only known problems; (2) Avoid changing data to fix display problems; (3) Aut viam inveniam aut faciam (I shall either find a way or make one); (4) If you cannot make it yourself, buy something; (5) Kill the alligator closest to the boat; and (6) Just because yours is…

  19. Fixing America's College Attainment Problems: It's about More than Affordability. Critical Considerations for Any New Federal-State Partnership

    ERIC Educational Resources Information Center

    Santos, Jose Luis; Haycock, Kati

    2016-01-01

    In response to mounting concerns about the cost of college, lawmakers have proposed major new partnerships between the federal government and states to tackle college affordability. The Education Trust maintains that any new federal-state proposal aimed at making college more affordable must also simultaneously address completion problems by…

  20. Symmetry of the Adiabatic Condition in the Piston Problem

    ERIC Educational Resources Information Center

    Anacleto, Joaquim; Ferreira, J. M.

    2011-01-01

    This study addresses a controversial issue in the adiabatic piston problem, namely that of the piston being adiabatic when it is fixed but no longer so when it can move freely. It is shown that this apparent contradiction arises from the usual definition of adiabatic condition. The issue is addressed here by requiring the adiabatic condition to be…

  1. That Was the Crisis: What Is to Be Done to Fix Irish Education Now?

    ERIC Educational Resources Information Center

    O'Mahony, Fintan

    2015-01-01

    In 2008 Ireland found itself in the forefront of the Eurozone crisis. The impact on education has been profound. In this article it is suggested that Ireland's education problems long pre-date the economic crisis and current "reforms" are about long-term neoliberal restructuring, not short-term solutions to immediate economic problems.…

  2. 50 CFR 660.230 - Fixed gear fishery-management measures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... limit, size limit, scientific sorting designation, quota, harvest guideline, ACL or ACT or OY, if the... designation, quota, harvest guideline, ACL or ACT or OY applied.” The States of Washington, Oregon, and...

  3. 50 CFR 660.230 - Fixed gear fishery-management measures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... limit, size limit, scientific sorting designation, quota, harvest guideline, ACL or ACT or OY, if the... designation, quota, harvest guideline, ACL or ACT or OY applied.” The States of Washington, Oregon, and...

  4. Minimax confidence intervals in geomagnetism

    NASA Technical Reports Server (NTRS)

    Stark, Philip B.

    1992-01-01

    The present paper uses the theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degree 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent of the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.

  5. Synthesis of a controller for stabilizing the motion of a rigid body about a fixed point

    NASA Astrophysics Data System (ADS)

    Zabolotnov, Yu. M.; Lobanov, A. A.

    2017-05-01

    A method for the approximate design of an optimal controller for stabilizing the motion of a rigid body about a fixed point is considered. It is assumed that the rigid body motion is close to the motion in the classical Lagrange case. The method is based on the joint use of the Bellman dynamic programming principle and the averaging method. The latter is used to solve the Hamilton-Jacobi-Bellman equation approximately, which permits synthesizing the controller. The proposed method for controller design can be used in many problems close to the problem of motion of the Lagrange top (the motion of a rigid body in the atmosphere, the motion of a rigid body fastened to a cable in deployment of the orbital cable system, etc.).

  6. Development and operations of the astrophysics data system

    NASA Technical Reports Server (NTRS)

    Murray, Stephen S.; Oliversen, Ronald (Technical Monitor)

    2005-01-01

    Abstract service:
    - Continued regular updates of abstracts in the databases, both at SAO and at all mirror sites.
    - Modified loading scripts to accommodate changes in data format (PhyS).
    - Discussed data deliveries with providers to clear up problems with format or other errors (EGU).
    - Continued inclusion of large numbers of historical literature volumes and physics conference volumes xeroxed from the library.
    - Performed systematic fixes on some data sets in the database to account for changes in article numbering (AGU journals).
    - Implemented linking of ADS bibliographic records with multimedia files.
    - Debugged and fixed obscure connection problems with the ADS Korean mirror site which were preventing successful updates of the data holdings.
    - Wrote a procedure to parse citation data and characterize an ADS record based on its citation ratios within each database.

  7. Separability of electrostatic and hydrodynamic forces in particle electrophoresis

    NASA Astrophysics Data System (ADS)

    Todd, Brian A.; Cohen, Joel A.

    2011-09-01

    By use of optical tweezers we explicitly measure the electrostatic and hydrodynamic forces that determine the electrophoretic mobility of a charged colloidal particle. We test the ansatz of O'Brien and White [J. Chem. Soc. Faraday Trans. II 74, 1607 (1978)] that the electrostatically and hydrodynamically coupled electrophoresis problem is separable into two simpler problems: (1) a particle held fixed in an applied electric field with no flow field and (2) a particle held fixed in a flow field with no applied electric field. For a system in the Helmholtz-Smoluchowski and Debye-Hückel regimes, we find that the electrostatic and hydrodynamic forces measured independently accurately predict the electrophoretic mobility within our measurement precision of 7%; the O'Brien and White ansatz holds under the conditions of our experiment.
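
    In equation form, the separability ansatz being tested can be sketched as follows (standard notation, not taken from the paper): at steady state the net force on the particle vanishes, so the electrostatic force measured with the particle held fixed in the field E and the hydrodynamic force measured with the particle held fixed in a flow of velocity v must cancel at the electrophoretic velocity,

        F_el(E) + F_hyd(v) = 0   ⇒   v = μE,   with μ = εζ/η

    in the Helmholtz-Smoluchowski limit, where ε is the solvent permittivity, ζ the zeta potential, and η the viscosity.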

  8. Representing perturbed dynamics in biological network models

    NASA Astrophysics Data System (ADS)

    Stoll, Gautier; Rougemont, Jacques; Naef, Felix

    2007-07-01

    We study the dynamics of gene activities in relatively small biological networks (up to a few tens of nodes), e.g., the activities of cell-cycle proteins during mitotic cell-cycle progression. Using the framework of deterministic discrete dynamical models, we characterize the dynamical modifications in response to structural perturbations in the network connectivities. In particular, we focus on how perturbations affect the set of fixed points and the sizes of the basins of attraction. Our approach uses two analytical measures: the basin entropy H and the perturbation size Δ, a quantity that reflects the distance between the set of fixed points of the perturbed network and that of the unperturbed network. Applying our approach to the yeast cell-cycle network introduced by Li [Proc. Natl. Acad. Sci. U.S.A. 101, 4781 (2004)] provides a low-dimensional and informative fingerprint of network behavior under large classes of perturbations. We identify interactions that are crucial for proper network function, and also pinpoint functionally redundant network connections. Selected perturbations exemplify the breadth of dynamical responses in this cell-cycle model.
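
    Both measures are straightforward to compute for small synchronous Boolean networks by exhaustive enumeration. Below is a minimal Python sketch using a hypothetical 3-node update rule (illustrative only, not the yeast cell-cycle network); basin_entropy computes the basin entropy H as the Shannon entropy of the basin-size distribution:

        from itertools import product
        from math import log2

        def step(state):
            # Hypothetical 3-node synchronous update rule.
            a, b, c = state
            return (int(b and c), int(a or c), int(not a))

        def attractor(update, s):
            # Iterate until a state repeats; return a canonical
            # representative (the minimum state) of the attractor cycle.
            seen, path = {}, []
            while s not in seen:
                seen[s] = len(path)
                path.append(s)
                s = update(s)
            return min(path[seen[s]:])

        def basin_entropy(update, n):
            # Shannon entropy of the basin-size distribution over all 2^n states.
            basins = {}
            for s0 in product((0, 1), repeat=n):
                rep = attractor(update, s0)
                basins[rep] = basins.get(rep, 0) + 1
            total = 2 ** n
            return -sum(w / total * log2(w / total) for w in basins.values())

        print(basin_entropy(step, 3))

    The perturbation size Δ would then be obtained by comparing the fixed-point sets found this way for the perturbed and unperturbed update rules.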

  9. Collector Size or Range Independence of SNR in Fixed-Focus Remote Raman Spectrometry.

    PubMed

    Hirschfeld, T

    1974-07-01

    When sensitivity allows, remote Raman spectrometers can be operated at a fixed focus with purely electronic (easily multiplexable) range gating. To keep the background small, the system etendue must be minimized. For a maximum range larger than the hyperfocal one, this is done by focusing the system at roughly twice the minimum range at which etendue matching is still required. Under these conditions the etendue varies as the fourth power of the collector diameter, causing the background shot noise to vary as its square. As the signal also varies with the same power, and background noise is usually limiting in this type of instrument, the SNR becomes independent of the collector size. Below this minimum etendue-matched range, the transmission at the limiting aperture grows with the square of the range, canceling the inverse-square loss of signal with range. The SNR is thus range independent below the minimum etendue-matched range and collector-size independent above it, with the location of the transition being determined by the system etendue and collector diameter. The range of validity of these outrageous statements is discussed.
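
    The scaling argument can be made explicit (this is our reading of the abstract, with D the collector diameter): the collected signal scales with the collector area, S ∝ D², while the background scales with the etendue, B ∝ D⁴, so the background shot noise scales as N ∝ √B ∝ D². In the background-noise-limited regime,

        SNR = S / N ∝ D² / D² = D⁰,

    independent of collector size, as claimed.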

  10. Twyman effect mechanics in grinding and microgrinding.

    PubMed

    Lambropoulos, J C; Xu, S; Fang, T; Golini, D

    1996-10-01

    In the Twyman effect (1905), when one side of a thin plate with both sides polished is ground, the plate bends: the ground side becomes convex and is in a state of compressive residual stress, described in terms of the force per unit length (newtons per meter) induced by grinding, the grinding-induced stress (newtons per square meter), and the depth of the compressive layer (micrometers). We describe and correlate experiments on optical glasses from the literature in conditions of loose abrasive grinding (lapping at fixed nominal pressure, with abrasives 4-400 μm in size) and deterministic microgrinding experiments (at a fixed infeed rate) conducted at the Center for Optics Manufacturing with bound diamond abrasive tools (with a diamond size of 3-40 μm, embedded in metallic bond) and loose abrasive microgrinding (abrasives of less than 3 μm in size). In brittle grinding conditions, the grinding force and the depth of the compressive layer correlate well with glass mechanical properties describing the fracture process, such as indentation crack size. The maximum surface residual compressive stress decreases, and the depth of the compressive layer increases, with increasing abrasive size. In lapping conditions the depth of the abrasive grain penetration into the glass surface scales with the surface roughness, and both are determined primarily by glass hardness and secondarily by Young's modulus for various abrasive sizes and coolants. In the limit of small abrasive size (ductile-mode grinding), the maximum surface compressive stress achieved is near the yield stress of the glass, in agreement with finite-element simulations of indentation in elastic-plastic solids.
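
    A useful quantitative anchor here is the standard Stoney-type thin-plate relation (a textbook result, not a formula quoted from the paper): for a plate of thickness h, Young's modulus E, and Poisson ratio ν, a thin surface layer carrying a grinding-induced force per unit length F (N/m) bends the plate to a curvature of approximately

        κ ≈ 6 (1 − ν) F / (E h²),

    which is how measured bending is converted into the force-per-unit-length and residual-stress quantities discussed above.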

  11. State estimation for networked control systems using fixed data rates

    NASA Astrophysics Data System (ADS)

    Liu, Qing-Quan; Jin, Fang

    2017-07-01

    This paper investigates state estimation for linear time-invariant systems where sensors and controllers are geographically separated and connected via a bandwidth-limited, errorless communication channel with a fixed data rate. All plant states are quantised, coded and converted together into a codeword in our quantisation and coding scheme. We present necessary and sufficient conditions on the fixed data rate for observability of such systems, and further develop the data-rate theorem. Our results show that there exists a quantisation and coding scheme that ensures observability of the system if the fixed data rate is larger than the lower bound given, which is less conservative than the one in the literature. Furthermore, we also examine the role that disturbances play in the state estimation problem in the case of data-rate limitations. Illustrative examples are given to demonstrate the effectiveness of the proposed method.
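
    For orientation, the classical data-rate theorem, the baseline result that work like this refines, says that state estimation over an errorless rate-limited channel is possible only if the rate exceeds the sum of the logarithms of the unstable eigenvalue magnitudes of the plant matrix. A minimal Python sketch (the plant matrix below is hypothetical, chosen only for illustration):

        import numpy as np

        # Hypothetical LTI plant x[k+1] = A x[k]; one unstable mode (|2.0| > 1).
        A = np.array([[2.0, 1.0],
                      [0.0, 0.5]])

        # Classical lower bound: R > sum over eigenvalues of max(0, log2|lambda|).
        eigs = np.linalg.eigvals(A)
        R_min = sum(max(0.0, float(np.log2(abs(lam)))) for lam in eigs)
        print(f"minimum data rate: {R_min:.3f} bits per sample")  # -> 1.000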

  12. How to Assess the Existence of Competing Strategies in Cognitive Tasks: A Primer on the Fixed-Point Property

    PubMed Central

    van Maanen, Leendert; de Jong, Ritske; van Rijn, Hedderik

    2014-01-01

    When multiple strategies can be used to solve a type of problem, the observed response time distributions are often mixtures of multiple underlying base distributions, each representing one of these strategies. For the case of two possible strategies, the observed response time distributions obey the fixed-point property. That is, there exists one reaction time that has the same probability of being observed irrespective of the actual mixture proportion of each strategy. In this paper we discuss how to compute this fixed point, and how to statistically assess the probability that the observed response times are indeed generated by two competing strategies. Accompanying this paper is a free R package that can be used to compute and test the presence or absence of the fixed-point property in response time data, allowing for easy-to-use tests of strategic behavior. PMID:25170893
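
    The property is easy to demonstrate numerically. The Python sketch below uses hypothetical Gaussian response time densities, chosen only for illustration (the accompanying R package is the authors' actual tool): it locates the point where the two base densities cross and shows that every mixture takes the same value there:

        import numpy as np

        def npdf(x, mu, sigma):
            # Normal density, a stand-in for a response time distribution.
            return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

        f1 = lambda t: npdf(t, 0.5, 0.1)   # "fast strategy" density
        f2 = lambda t: npdf(t, 0.9, 0.2)   # "slow strategy" density

        # Search between the two modes for the crossing, where f1 == f2.
        t = np.linspace(0.5, 0.9, 40001)
        t_star = t[np.argmin(np.abs(f1(t) - f2(t)))]

        # Every mixture density passes through the same point at t_star.
        for p in (0.2, 0.5, 0.8):
            print(p, round(p * f1(t_star) + (1 - p) * f2(t_star), 3))

    All three mixture proportions print the same density value (up to grid resolution): the fixed point.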

  13. Fixed Point Learning Based Intelligent Traffic Control System

    NASA Astrophysics Data System (ADS)

    Zongyao, Wang; Cong, Sui; Cheng, Shao

    2017-10-01

    Fixed point learning has become an important tool for analysing large-scale distributed systems such as urban traffic networks. This paper presents a fixed-point-learning-based intelligent traffic network control system. The system applies the convergence property of the fixed point theorem to optimize traffic flow density. The intelligent traffic control system achieves maximum road resource usage by averaging traffic flow density across the traffic network. The system is built on a decentralized structure and intelligent cooperation; no central control is needed to manage it. The proposed system is simple, effective and feasible for practical use. The performance of the system is tested via theoretical proof and simulations. The results demonstrate that the system can effectively solve the traffic congestion problem and increase the vehicles' average speed. They also show that the system is flexible and reliable.
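
    The core idea, density averaging by decentralized fixed-point iteration, can be sketched in a few lines of Python (a toy ring of road segments, not the paper's actual controller): each segment repeatedly replaces its density with the local neighbourhood average, and the iteration converges to the uniform fixed point with no central coordinator:

        import numpy as np

        # Hypothetical densities on a ring of six road segments.
        density = np.array([0.9, 0.1, 0.4, 0.7, 0.2, 0.5])
        n = len(density)

        for _ in range(200):
            # Each segment uses only its own and its two neighbours' values.
            density = np.array([(density[i - 1] + density[i] + density[(i + 1) % n]) / 3
                                for i in range(n)])

        print(np.round(density, 3))   # -> all entries near the mean, ~0.467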

  14. Positive solutions of fractional integral equations by the technique of measure of noncompactness.

    PubMed

    Nashine, Hemant Kumar; Arab, Reza; Agarwal, Ravi P; De la Sen, Manuel

    2017-01-01

    In the present study, we work on the problem of the existence of positive solutions of fractional integral equations by means of measures of noncompactness in association with Darbo's fixed point theorem. To achieve the goal, we first establish new fixed point theorems using a new contractive condition of the measure of noncompactness in Banach spaces. By doing this we generalize Darbo's fixed point theorem along with some recent results of Aghajani et al. (J. Comput. Appl. Math. 260:67-77, 2014), Aghajani et al. (Bull. Belg. Math. Soc. Simon Stevin 20(2):345-358, 2013), Arab (Mediterr. J. Math. 13(2):759-773, 2016), Banaś et al. (Dyn. Syst. Appl. 18:251-264, 2009), and Samadi et al. (Abstr. Appl. Anal. 2014:852324, 2014). We also derive corresponding coupled fixed point results. Finally, we give an illustrative example to verify the effectiveness and applicability of our results.
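
    For reference, the classical theorem being generalized states (standard formulation): if C is a nonempty, bounded, closed, convex subset of a Banach space, μ is a measure of noncompactness, and T : C → C is a continuous map satisfying

        μ(T(X)) ≤ k·μ(X)  for every subset X ⊆ C, with some constant 0 ≤ k < 1,

    then T has at least one fixed point in C. Results of the kind described here replace the linear contraction condition k·μ(X) with more general contractive conditions on μ.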

  15. Non-Intrusive Techniques of Inspections During the Pre-Launch Phase of Space Vehicle

    NASA Technical Reports Server (NTRS)

    Thirumalainambi, Rejkumar; Bardina, Jorge E.

    2005-01-01

    This paper addresses a method of non-intrusive local inspection of surface and sub-surface conditions, interfaces, laminations and seals in both space vehicle and ground operations with an integrated suite of imaging sensors during pre-launch operations. It employs an advanced Raman spectrometer with additional spectrometers and lidar mounted on a flying robot to constantly monitor the space hardware as well as the inner surface of the vehicle and ground operations hardware. This paper addresses a team of micro flying robots with the necessary sensors and photometers to monitor the entire space vehicle internally and externally. The micro flying robots can reach altitudes with the least amount of energy, where astronauts have difficulty in reaching and monitoring the materials and subsurface faults. Each micro flying robot has an embedded fault detection system which acts as an advisory system, and in many cases the micro flying robots act as a `Supervisor' to fix the problems. As missions expand to a sustainable presence on the Moon, and extend for durations longer than one year in a lunar outpost, the effectiveness of the instrumentation and hardware has to be revolutionized if NASA is to meet high levels of mission safety, reliability, and overall success. The micro flying robot uses contra-rotating propellers powered by an ultra-thin ultrasonic motor with currently the world's highest power-to-weight ratio, and is balanced in mid-air by means of the world's first stabilizing mechanism using a linear actuator. The essence of micromechatronics has been brought together in high-density mounting technology to minimize size and weight. The robot can take suitable payloads of photometers, embedded chips for image analysis and micro pumps for sealing cracks or fixing other material problems. This paper also highlights the advantages that this type of non-intrusive technique offers over costly and monolithic traditional techniques.

  16. Nonintrusive techniques of inspections during the pre-launch phase of space vehicle

    NASA Astrophysics Data System (ADS)

    Thirumalainambi, Rajkumar; Bardina, Jorge E.; Miyazawa, Osamu

    2005-05-01

    As missions expand to a sustainable presence on the Moon, and extend for durations longer than one year in a lunar outpost, the effectiveness of the instrumentation and hardware has to be revolutionized if NASA is to meet high levels of mission safety, reliability, and overall success. This paper addresses a method of non-intrusive local inspection of surface and sub-surface conditions, interfaces, laminations and seals in both space vehicle and ground operations with an integrated suite of imaging sensors during pre-launch operations. It employs an advanced Raman spectrometer with additional spectrometers and lidar mounted on a flying robot to constantly monitor the space hardware as well as the inner surface of the vehicle and ground operations hardware. A team of micro flying robots with the necessary sensors and photometers is required to internally and externally monitor the entire space vehicle. The micro flying robots should reach an altitude with the least amount of energy, where astronauts have difficulty in reaching and monitoring the materials and subsurface faults. The micro flying robots have an embedded fault detection system which acts as an advisory system, and in many cases the micro flying robots act as a `Supervisor' to fix the problems. The micro flying robot uses contra-rotating propellers powered by an ultra-thin ultrasonic motor with currently the world's highest power-to-weight ratio, and is balanced in mid-air by means of the world's first stabilizing mechanism using a linear actuator. The essence of micromechatronics has been brought together in high-density mounting technology to minimize size and weight. Each robot can take suitable payloads of photometers, embedded chips for image analysis and micro pumps for sealing cracks or fixing other material problems. This paper also highlights the advantages that this type of non-intrusive technique offers over costly and monolithic traditional techniques.

  17. Use of synchrotron tomography to image naturalistic anatomy in insects

    NASA Astrophysics Data System (ADS)

    Socha, John J.; De Carlo, Francesco

    2008-08-01

    Understanding the morphology of anatomical structures is a cornerstone of biology. For small animals, classical methods such as histology have provided a wealth of data, but such techniques can be problematic due to destruction of the sample. More importantly, fixation and physical slicing can cause deformation of anatomy, a critical limitation when precise three-dimensional data are required. Modern techniques such as confocal microscopy, MRI, and tabletop x-ray microCT provide effective non-invasive methods, but each of these tools has limitations, including sample size constraints, resolution limits, and difficulty visualizing soft tissue. Our research group at the Advanced Photon Source (Argonne National Laboratory) studies physiological processes in insects, focusing on the dynamics of breathing and feeding. To determine the size, shape, and relative location of internal anatomy in insects, we use synchrotron microtomography at beamline 2-BM to image structures including tracheal tubes, muscles, and gut. Because obtaining naturalistic, undeformed anatomical information is a key component of our studies, we have developed methods to image fresh, non-fixed whole animals and tissues. Although motion artifacts remain a problem, we have successfully imaged multiple species including beetles, ants, fruit flies, and butterflies. Here we discuss advances in biological imaging and highlight key findings in insect morphology.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diaz, J. I.; Henry, J.; Ramos, A. M.

    We prove the approximate controllability of several nonlinear parabolic boundary-value problems by means of two different methods: the first one can be called a Cancellation method and the second one uses the Kakutani fixed-point theorem.
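
    For reference, the fixed-point tool named here states, in its classical finite-dimensional form (Kakutani; extensions to locally convex spaces are due to Fan and Glicksberg): if K is a nonempty, compact, convex subset of R^n and F : K → 2^K is a set-valued map with closed graph whose values F(x) are nonempty and convex, then there exists x* ∈ K with x* ∈ F(x*).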

  19. Effects of bite size and duration of oral processing on retro-nasal aroma release - features contributing to meal termination.

    PubMed

    Ruijschop, Rianne M A J; Zijlstra, Nicolien; Boelrijk, Alexandra E M; Dijkstra, Annereinou; Burgering, Maurits J M; Graaf, Cees de; Westerterp-Plantenga, Margriet S

    2011-01-01

    The brain response to a retro-nasally sensed food odour signals the perception of food, and it has been suggested to be related to satiation. It is hypothesised that consuming food either in multiple small bites or with a longer duration of oral processing may evoke substantial oral processing per gram consumed and an increased transit time in the oral cavity. This is expected to result in a higher cumulative retro-nasal aroma stimulation, which in turn may lead to increased feelings of satiation and decreased food intake. Using real-time atmospheric pressure chemical ionisation-MS, in vivo retro-nasal aroma release was assessed for twenty-one young, healthy, normal-weight subjects consuming dark chocolate-flavoured custard. Subjects were exposed to free or fixed bite sizes (5 and 15 g) and durations of oral processing before swallowing (3 and 9 s) in a cross-over design. For a fixed amount of dark chocolate-flavoured custard, consumption in multiple small bites resulted in a significantly higher cumulative extent of retro-nasal aroma release per gram consumed compared with consumption in fewer large bites. In addition, a longer duration of oral processing tended to result in a higher cumulative extent of retro-nasal aroma release per gram consumed compared with a short duration of oral processing. An interaction effect of bite size and duration of oral processing was not observed. In conclusion, decreasing bite size or increasing the duration of oral processing led to a higher cumulative retro-nasal aroma stimulation per gram consumed. Hence, adapting bite size or the duration of oral processing may accelerate meal termination by increasing the extent of retro-nasal aroma release and, subsequently, satiation.

  20. Simulating galaxies in the reionization era with FIRE-2: morphologies and sizes

    NASA Astrophysics Data System (ADS)

    Ma, Xiangcheng; Hopkins, Philip F.; Boylan-Kolchin, Michael; Faucher-Giguère, Claude-André; Quataert, Eliot; Feldmann, Robert; Garrison-Kimmel, Shea; Hayward, Christopher C.; Kereš, Dušan; Wetzel, Andrew

    2018-06-01

    We study the morphologies and sizes of galaxies at z ≥ 5 using high-resolution cosmological zoom-in simulations from the Feedback In Realistic Environments project. The galaxies show a variety of morphologies, from compact to clumpy to irregular. The simulated galaxies have more extended morphologies and larger sizes when measured using rest-frame optical B-band light than rest-frame UV light; sizes measured from stellar mass surface density are even larger. The UV morphologies are usually dominated by several small, bright young stellar clumps that are not always associated with significant stellar mass. The B-band light traces stellar mass better than the UV, but it can also be biased by the bright clumps. At all redshifts, galaxy size correlates with stellar mass/luminosity with large scatter. The half-light radii range from 0.01 to 0.2 arcsec (0.05-1 kpc physical) at fixed magnitude. At z ≥ 5, the size of galaxies at fixed stellar mass/luminosity evolves as (1 + z)^-m, with m ~ 1-2. For galaxies less massive than M* ~ 10^8 M⊙, the ratio of the half-mass radius to the halo virial radius is ~10 per cent and does not evolve significantly at z = 5-10; this ratio is typically 1-5 per cent for more massive galaxies. A galaxy's `observed' size decreases dramatically at shallower surface brightness limits. This effect may account for the extremely small sizes of z ≥ 5 galaxies measured in the Hubble Frontier Fields. We provide predictions for the cumulative light distribution as a function of surface brightness for typical galaxies at z = 6.
