Sample records for minimal generating set

  1. Spatial chaos of Wang tiles with two symbols

    NASA Astrophysics Data System (ADS)

    Chen, Jin-Yu; Chen, Yu-Jie; Hu, Wen-Guei; Lin, Song-Sun

    2016-02-01

    This investigation completely classifies the spatial chaos problem in plane edge coloring (Wang tiles) with two symbols. For a set of Wang tiles B, spatial chaos occurs when the spatial entropy h(B) is positive. B is called a minimal cycle generator if P(B) ≠ ∅ and P(B′) = ∅ whenever B′ ⊊ B, where P(B) is the set of all periodic patterns on ℤ² generated by B. Given a set of Wang tiles B, write B = C₁ ∪ C₂ ∪ ⋯ ∪ Cₖ ∪ N, where the Cⱼ, 1 ≤ j ≤ k, are minimal cycle generators and B contains no minimal cycle generator except those contained in C₁ ∪ C₂ ∪ ⋯ ∪ Cₖ. Then the positivity of the spatial entropy h(B) is completely determined by C₁ ∪ C₂ ∪ ⋯ ∪ Cₖ. Furthermore, there are 39 equivalence classes of marginal positive-entropy (MPE) sets of Wang tiles and 18 equivalence classes of saturated zero-entropy (SZE) sets of Wang tiles. For a set of Wang tiles B, h(B) is positive if and only if B contains an MPE set, and h(B) is zero if and only if B is a subset of an SZE set.

  2. Rule extraction from minimal neural networks for credit card screening.

    PubMed

    Setiono, Rudy; Baesens, Bart; Mues, Christophe

    2011-08-01

    While feedforward neural networks have been widely accepted as effective tools for solving classification problems, the issue of finding the best network architecture remains unresolved, particularly so in real-world problem settings. We address this issue in the context of credit card screening, where it is important to find not only a neural network with good predictive performance but also one that facilitates a clear explanation of how it produces its predictions. We show that minimal neural networks with as few as one hidden unit provide good predictive accuracy, while having the added advantage of making it easier to generate concise and comprehensible classification rules for the user. To further reduce model size, a novel approach is suggested in which network connections from the input units to this hidden unit are removed by a very straightforward pruning procedure. In terms of predictive accuracy, both the minimized neural networks and the rule sets generated from them are shown to compare favorably with other neural network based classifiers. The rules generated from the minimized neural networks are concise and thus easier to validate in a real-life setting.
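
    The pruning step described above can be sketched as follows: fit a classifier with a single hidden unit, then repeatedly zero out the input connection whose removal costs the least validation accuracy, stopping once accuracy would drop past a tolerance. This is a hedged illustration on synthetic data (the tolerance and the data are made up), not the authors' exact procedure.

```python
# Sketch: prune input connections of a one-hidden-unit classifier.
# Hedged illustration with synthetic data; not the paper's exact pruning rule.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=10, n_informative=4, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(1,), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
baseline = net.score(X_va, y_va)

active = set(range(X.shape[1]))
tolerance = 0.01          # hypothetical: allowed drop in validation accuracy
while len(active) > 1:
    best_acc, best_feat = None, None
    for f in active:
        saved = net.coefs_[0][f, 0]
        net.coefs_[0][f, 0] = 0.0            # tentatively remove this input connection
        acc = net.score(X_va, y_va)
        net.coefs_[0][f, 0] = saved
        if best_acc is None or acc > best_acc:
            best_acc, best_feat = acc, f
    if best_acc < baseline - tolerance:
        break                                 # removing any further input hurts too much
    net.coefs_[0][best_feat, 0] = 0.0         # prune the least useful input permanently
    active.remove(best_feat)

print(f"kept inputs: {sorted(active)}, validation accuracy: {net.score(X_va, y_va):.3f}")
```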

  3. Search for Minimal and Semi-Minimal Rule Sets in Incremental Learning of Context-Free and Definite Clause Grammars

    NASA Astrophysics Data System (ADS)

    Imada, Keita; Nakamura, Katsuhiko

    This paper describes recent improvements to the Synapse system for incremental learning of general context-free grammars (CFGs) and definite clause grammars (DCGs) from positive and negative sample strings. An important feature of our approach is incremental learning, which is realized by a rule generation mechanism called “bridging”, based on bottom-up parsing of positive samples and a search over rule sets. The sizes of rule sets and the computation time depend on the search strategies. In addition to the global search for synthesizing minimal rule sets and the serial search, another method for synthesizing semi-optimal rule sets, we incorporate beam search into the system for synthesizing semi-minimal rule sets. The paper presents several experimental results on learning CFGs and DCGs, and analyzes the sizes of the rule sets and the computation time.

  4. Prediction and generation of binary Markov processes: Can a finite-state fox catch a Markov mouse?

    NASA Astrophysics Data System (ADS)

    Ruebeck, Joshua B.; James, Ryan G.; Mahoney, John R.; Crutchfield, James P.

    2018-01-01

    Understanding the generative mechanism of a natural system is a vital component of the scientific method. Here, we investigate one of the fundamental steps toward this goal by presenting the minimal generator of an arbitrary binary Markov process. This is a class of processes whose predictive model is well known. Surprisingly, the generative model requires three distinct topologies for different regions of parameter space. We show that a previously proposed generator for a particular set of binary Markov processes is, in fact, not minimal. Our results shed the first quantitative light on the relative (minimal) costs of prediction and generation. We find, for instance, that the difference between prediction and generation is maximized when the process is approximately independent and identically distributed.

  5. Denjoy minimal sets and Birkhoff periodic orbits for non-exact monotone twist maps

    NASA Astrophysics Data System (ADS)

    Qin, Wen-Xin; Wang, Ya-Nan

    2018-06-01

    A non-exact monotone twist map φ̄_F is the composition of an exact monotone twist map φ̄ with generating function H and a vertical translation V_F with V_F((x, y)) = (x, y − F). We show in this paper that for each ω ∈ ℝ, there exists a critical value F_d(ω) ≥ 0, depending on H and ω, such that for 0 ≤ F ≤ F_d(ω) the non-exact twist map φ̄_F has an invariant Denjoy minimal set with irrational rotation number ω lying on a Lipschitz graph, or Birkhoff (p, q)-periodic orbits for rational ω = p/q. As in Aubry-Mather theory, we also construct heteroclinic orbits connecting Birkhoff periodic orbits, and show that quasi-periodic orbits in these Denjoy minimal sets can be approximated by periodic orbits. In particular, we demonstrate that at the critical value F = F_d(ω), the Denjoy minimal set is not uniformly hyperbolic and can be approximated by smooth curves.

  6. Optimized Temporal Monitors for SystemC

    NASA Technical Reports Server (NTRS)

    Tabakov, Deian; Rozier, Kristin Y.; Vardi, Moshe Y.

    2012-01-01

    SystemC is a modeling language built as an extension of C++. Its growing popularity and the increasing complexity of designs have motivated research efforts aimed at the verification of SystemC models using assertion-based verification (ABV), where the designer asserts properties that capture the design intent in a formal language such as PSL or SVA. The model then can be verified against the properties using runtime or formal verification techniques. In this paper we focus on automated generation of runtime monitors from temporal properties. Our focus is on minimizing runtime overhead, rather than monitor size or monitor-generation time. We identify four issues in monitor generation: state minimization, alphabet representation, alphabet minimization, and monitor encoding. We conduct extensive experimentation and identify a combination of settings that offers the best performance in terms of runtime overhead.

  7. Cut set-based risk and reliability analysis for arbitrarily interconnected networks

    DOEpatents

    Wyss, Gregory D.

    2000-01-01

    Method for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram. Efficiency of the search algorithm stems in part from its basis on only link failures. The method also includes a novel quantification scheme that likewise reduces computational effort associated with assessing network reliability based on traditional risk importance measures. Vast reductions in computational effort are realized since combinatorial expansion and subsequent Boolean reduction steps are eliminated through analysis of network segmentations using a technique of assuming node failures to occur on only one side of a break in the network, and repeating the technique for all minimal cut sets generated with the search algorithm. The method functions equally well for planar and non-planar networks.
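
    The core idea of generating minimal cut sets directly from the connectivity diagram, considering only link failures, can be illustrated with a brute-force sketch on a toy network. This is an illustrative stand-in; the patented search algorithm is more efficient and is not reproduced here.

```python
# Sketch: brute-force enumeration of minimal cut sets (link failures only)
# for a small undirected network. Illustrative stand-in, not the patented algorithm.
from itertools import combinations

def connected(nodes, edges):
    """Return True if the graph (nodes, edges) is connected (all-terminal view)."""
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(adj[n] - seen)
    return seen == set(nodes)

def minimal_cut_sets(nodes, edges, max_size=3):
    cuts = []
    for k in range(1, max_size + 1):          # smaller cuts are found first
        for cand in combinations(edges, k):
            remaining = [e for e in edges if e not in cand]
            if connected(nodes, remaining):
                continue
            if any(set(c) <= set(cand) for c in cuts):
                continue                      # a subset already disconnects: not minimal
            cuts.append(cand)
    return cuts

nodes = {1, 2, 3, 4}
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]   # small ring with a chord
for cut in minimal_cut_sets(nodes, edges):
    print(cut)
```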

  8. Quantum Dynamics with Short-Time Trajectories and Minimal Adaptive Basis Sets.

    PubMed

    Saller, Maximilian A C; Habershon, Scott

    2017-07-11

    Methods for solving the time-dependent Schrödinger equation via basis set expansion of the wave function can generally be categorized as having either static (time-independent) or dynamic (time-dependent) basis functions. We have recently introduced an alternative simulation approach which represents a middle road between these two extremes, employing dynamic (classical-like) trajectories to create a static basis set of Gaussian wavepackets in regions of phase-space relevant to future propagation of the wave function [J. Chem. Theory Comput., 11, 8 (2015)]. Here, we propose and test a modification of our methodology which aims to reduce the size of basis sets generated in our original scheme. In particular, we employ short-time classical trajectories to continuously generate new basis functions for short-time quantum propagation of the wave function; to avoid the continued growth of the basis set describing the time-dependent wave function, we employ Matching Pursuit to periodically minimize the number of basis functions required to accurately describe the wave function. Overall, this approach generates a basis set which is adapted to evolution of the wave function while also being as small as possible. In applications to challenging benchmark problems, namely a 4-dimensional model of photoexcited pyrazine and three different double-well tunnelling problems, we find that our new scheme enables accurate wave function propagation with basis sets which are around an order-of-magnitude smaller than our original trajectory-guided basis set methodology, highlighting the benefits of adaptive strategies for wave function propagation.
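
    The periodic basis-set reduction step described above relies on Matching Pursuit. A hedged sketch of plain Matching Pursuit over a dictionary of real Gaussians on a grid is shown below; the actual method works with complex Gaussian wavepackets and quantum propagation, and all parameters here are illustrative.

```python
# Sketch: Matching Pursuit keeps only as many basis functions as are needed to
# represent a target vector, in the spirit of the basis-set pruning step above.
import numpy as np

x = np.linspace(-5, 5, 400)
centers = np.linspace(-4, 4, 60)
dictionary = np.array([np.exp(-(x - c) ** 2) for c in centers])
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)   # unit-norm atoms

target = np.exp(-(x - 1.2) ** 2) + 0.5 * np.exp(-(x + 2.0) ** 2)  # toy "wave function"

def matching_pursuit(target, dictionary, tol=1e-3, max_terms=20):
    residual, chosen = target.copy(), []
    for _ in range(max_terms):
        overlaps = dictionary @ residual
        k = int(np.argmax(np.abs(overlaps)))        # best-matching atom
        chosen.append((k, overlaps[k]))
        residual = residual - overlaps[k] * dictionary[k]   # remove projected component
        if np.linalg.norm(residual) < tol * np.linalg.norm(target):
            break
    return chosen, residual

chosen, residual = matching_pursuit(target, dictionary)
print(len(chosen), "basis functions, residual norm", float(np.linalg.norm(residual)))
```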

  9. Measurement of temperature induced in bone during drilling in minimally invasive foot surgery.

    PubMed

    Omar, Noor Azzizah; McKinley, John C

    2018-02-19

    There has been growing interest in minimally invasive foot surgery due to the benefits it delivers in post-operative outcomes in comparison to conventional open methods of surgery. One of the major factors determining the protocol in minimally invasive surgery is the need to prevent iatrogenic thermal osteonecrosis. The aim of the study is to examine various drilling parameters in a minimally invasive surgery setting that would reduce the risk of iatrogenic thermal osteonecrosis. Sixteen fresh-frozen tarsal bones and two metatarsal bones were retrieved from three individuals and drilled using various settings. The parameters considered were drilling speed, drill diameter, and inter-individual cortical variability. Temperature measurements of heat generated at the drilling site were collected using two methods: a thermocouple probe and infrared thermography. The data obtained were quantitatively analysed. There was a significant difference in the temperatures generated with different drilling speeds (p<0.05). However, there was no significant difference in temperatures recorded between the bones of different individuals or in bones drilled using different drill diameters. The thermocouple proved to be a significantly more sensitive tool for measuring temperature than infrared thermography. Drilling at an optimal speed significantly reduced the risk of iatrogenic thermal osteonecrosis by maintaining temperature below the threshold level. Although different drilling diameters did not produce significant differences in temperature generation, there is a need for further study on the mechanical impact of using different drill diameters. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. Generating effective project scheduling heuristics by abstraction and reconstitution

    NASA Technical Reports Server (NTRS)

    Janakiraman, Bhaskar; Prieditis, Armand

    1992-01-01

    A project scheduling problem consists of a finite set of jobs, each with fixed integer duration, requiring one or more resources such as personnel or equipment, and each subject to a set of precedence relations, which specify allowable job orderings, and a set of mutual exclusion relations, which specify jobs that cannot overlap. No job can be interrupted once started. The objective is to minimize project duration. This objective arises in nearly every large construction project, from software to hardware to buildings. Because such project scheduling problems are NP-hard, they are typically solved by branch-and-bound algorithms. In these algorithms, lower-bound duration estimates (admissible heuristics) are used to improve efficiency. One way to obtain an admissible heuristic is to remove (abstract) all resources and mutual exclusion constraints and then obtain the minimal project duration for the abstracted problem; this minimal duration is the admissible heuristic. Although such abstracted problems can be solved efficiently, they yield inaccurate admissible heuristics precisely because those constraints that are central to solving the original problem are abstracted. This paper describes a method to reconstitute the abstracted constraints back into the solution to the abstracted problem while maintaining efficiency, thereby generating better admissible heuristics. Our results suggest that reconstitution can make good admissible heuristics even better.
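
    The abstraction described above, dropping all resource and mutual-exclusion constraints, reduces the lower bound to the length of the critical path through the precedence DAG. A minimal sketch with hypothetical job data:

```python
# Sketch: admissible heuristic for project scheduling obtained by abstracting away
# resource and mutual-exclusion constraints; what remains is the critical-path length.
from functools import lru_cache

durations = {"a": 3, "b": 2, "c": 4, "d": 1}                 # hypothetical jobs
precedes = {"a": ["c"], "b": ["c", "d"], "c": [], "d": []}   # a before c; b before c and d

@lru_cache(maxsize=None)
def earliest_finish(job):
    preds = [j for j, succs in precedes.items() if job in succs]
    start = max((earliest_finish(p) for p in preds), default=0)
    return start + durations[job]

lower_bound = max(earliest_finish(j) for j in durations)
print("admissible heuristic (critical path):", lower_bound)   # -> 7
```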

  11. Construction of a minimal genome as a chassis for synthetic biology.

    PubMed

    Sung, Bong Hyun; Choe, Donghui; Kim, Sun Chang; Cho, Byung-Kwan

    2016-11-30

    Microbial diversity and complexity pose challenges in understanding the voluminous genetic information produced from whole-genome sequences, bioinformatics and high-throughput '-omics' research. These challenges can be overcome by a core blueprint of a genome drawn with a minimal gene set, which is essential for life. Systems biology and large-scale gene inactivation studies have estimated the number of essential genes to be ∼300-500 in many microbial genomes. On the basis of the essential gene set information, minimal-genome strains have been generated using sophisticated genome engineering techniques, such as genome reduction and chemical genome synthesis. Current size-reduced genomes are not perfect minimal genomes, but chemically synthesized genomes have just been constructed. Some minimal genomes provide various desirable functions for bioindustry, such as improved genome stability, increased transformation efficacy and improved production of biomaterials. The minimal genome as a chassis genome for synthetic biology can be used to construct custom-designed genomes for various practical and industrial applications. © 2016 The Author(s). published by Portland Press Limited on behalf of the Biochemical Society.

  12. Surface relief structures for multiple beam LO generation

    NASA Technical Reports Server (NTRS)

    Veldkamp, W. B.

    1980-01-01

    Linear and binary holograms for use in heterodyne detection with 10.6 micron imaging arrays are described. The devices match the amplitude and phase of the local oscillator to the received signal and thus maximize the system signal to noise ratio and resolution and minimize heat generation on the focal plane. In both the linear and binary approaches, the holographic surface-relief pattern is coded to generate a set of local oscillator beams when the relief pattern is illuminated by a single planewave. Each beam of this set has the same amplitude shape distribution as, and is collinear with, each single-element wavefront illuminating the array.

  13. The Heisenberg-Weyl algebra on the circle and a related quantum mechanical model for hindered rotation.

    PubMed

    Kouri, Donald J; Markovich, Thomas; Maxwell, Nicholas; Bodmann, Bernhard G

    2009-07-02

    We discuss a periodic variant of the Heisenberg-Weyl algebra, associated with the group of translations and modulations on the circle. Our study of uncertainty minimizers leads to a periodic version of canonical coherent states. Unlike the canonical, Cartesian case, there are states for which the uncertainty product associated with the generators of the algebra vanishes. Next, we explore the supersymmetric (SUSY) quantum mechanical setting for the uncertainty-minimizing states and interpret them as leading to a family of "hindered rotors". Finally, we present a standard quantum mechanical treatment of one of these hindered rotor systems, including numerically generated eigenstates and energies.

  14. rasbhari: Optimizing Spaced Seeds for Database Searching, Read Mapping and Alignment-Free Sequence Comparison.

    PubMed

    Hahn, Lars; Leimeister, Chris-André; Ounit, Rachid; Lonardi, Stefano; Morgenstern, Burkhard

    2016-10-01

    Many algorithms for sequence analysis rely on word matching or word statistics. Often, these approaches can be improved if binary patterns representing match and don't-care positions are used as a filter, such that only those positions of words are considered that correspond to the match positions of the patterns. The performance of these approaches, however, depends on the underlying patterns. Herein, we show that the overlap complexity of a pattern set that was introduced by Ilie and Ilie is closely related to the variance of the number of matches between two evolutionarily related sequences with respect to this pattern set. We propose a modified hill-climbing algorithm to optimize pattern sets for database searching, read mapping and alignment-free sequence comparison of nucleic-acid sequences; our implementation of this algorithm is called rasbhari. Depending on the application at hand, rasbhari can either minimize the overlap complexity of pattern sets, maximize their sensitivity in database searching or minimize the variance of the number of pattern-based matches in alignment-free sequence comparison. We show that, for database searching, rasbhari generates pattern sets with slightly higher sensitivity than existing approaches. In our Spaced Words approach to alignment-free sequence comparison, pattern sets calculated with rasbhari led to more accurate estimates of phylogenetic distances than the randomly generated pattern sets that we previously used. Finally, we used rasbhari to generate patterns for short read classification with CLARK-S. Here too, the sensitivity of the results could be improved, compared to the default patterns of the program. We integrated rasbhari into Spaced Words; the source code of rasbhari is freely available at http://rasbhari.gobics.de/.
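
    The hill-climbing idea can be sketched with a simplified objective: treat each seed as a string of match ('1') and don't-care ('0') positions, score a set of seeds by a proxy for the Ilie and Ilie overlap complexity (summing 2 to the power of the number of aligned match positions over all shifts), and accept only mutations that lower the score. This is an illustration under those assumptions, not rasbhari's exact objective or search:

```python
# Sketch: hill climbing on a set of binary spaced-seed patterns to reduce their
# overlap complexity. The measure below is a simplified proxy in the spirit of
# Ilie & Ilie; seeds and step counts are made up.
import random

def overlap_complexity(p, q):
    total = 0
    for shift in range(-(len(q) - 1), len(p)):
        aligned = sum(1 for i in range(len(p))
                      if 0 <= i - shift < len(q) and p[i] == "1" and q[i - shift] == "1")
        total += 2 ** aligned
    return total

def set_complexity(patterns):
    return sum(overlap_complexity(p, q)
               for i, p in enumerate(patterns) for q in patterns[i:])

def mutate(pattern):
    """Swap one match position with one don't-care position (weight is preserved)."""
    ones = [i for i, c in enumerate(pattern) if c == "1"]
    zeros = [i for i, c in enumerate(pattern) if c == "0"]
    if not ones or not zeros:
        return pattern
    i, j = random.choice(ones), random.choice(zeros)
    chars = list(pattern)
    chars[i], chars[j] = chars[j], chars[i]
    return "".join(chars)

def hill_climb(patterns, steps=2000, seed=0):
    random.seed(seed)
    best, best_oc = list(patterns), set_complexity(patterns)
    for _ in range(steps):
        cand = list(best)
        k = random.randrange(len(cand))
        cand[k] = mutate(cand[k])
        oc = set_complexity(cand)
        if oc < best_oc:                      # accept only improving moves
            best, best_oc = cand, oc
    return best, best_oc

seeds = ["1101011", "1110101", "1011101"]     # hypothetical starting patterns
print(hill_climb(seeds))
```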

  15. An "Openable," High-Strength Gradient Set for Orthopedic MRI

    NASA Astrophysics Data System (ADS)

    Crozier, Stuart; Roffmann, Wolfgang U.; Luescher, Kurt; Snape-Jenkinson, Christopher; Forbes, Lawrence K.; Doddrell, David M.

    1999-07-01

    A novel three-axis gradient set and RF resonator for orthopedic MRI has been designed and constructed. The set is openable and may be wrapped around injured joints. The design methodology used was the minimization of magnetic field spherical harmonics by simulated annealing. Splitting of the longitudinal coil presents the major design challenge to a fully openable gradient set and in order to efficiently design such coils, we have developed a new fast algorithm for determining the magnetic field spherical harmonics generated by an arc of multiturn wire. The algorithm allows a realistic impression of the effect of split longitudinal designs. A prototype set was constructed based on the new designs and tested in a 2-T clinical research system. The set generated 12 mT/m/A with a linear region of 12 cm and a switching time of 100 μs, conforming closely with theoretical predictions. Preliminary images from the set are presented.

  16. Spatial Optimization of Future Urban Development with Regards to Climate Risk and Sustainability Objectives.

    PubMed

    Caparros-Midwood, Daniel; Barr, Stuart; Dawson, Richard

    2017-11-01

    Future development in cities needs to manage increasing populations, climate-related risks, and sustainable development objectives such as reducing greenhouse gas emissions. Planners therefore face a challenge of multidimensional, spatial optimization in order to balance potential tradeoffs and maximize synergies between risks and other objectives. To address this, a spatial optimization framework has been developed. This uses a spatially implemented genetic algorithm to generate a set of Pareto-optimal results that provide planners with the best set of trade-off spatial plans for six risk and sustainability objectives: (i) minimize heat risks, (ii) minimize flooding risks, (iii) minimize transport travel costs to minimize associated emissions, (iv) maximize brownfield development, (v) minimize urban sprawl, and (vi) prevent development of greenspace. The framework is applied to Greater London (U.K.) and shown to generate spatial development strategies that are optimal for specific objectives and differ significantly from the existing development strategies. In addition, the analysis reveals tradeoffs between different risks as well as between risk and sustainability objectives. While increases in heat or flood risk can be avoided, there are no strategies that do not increase at least one of these. Tradeoffs between risk and other sustainability objectives can be more severe, for example, minimizing heat risk is only possible if future development is allowed to sprawl significantly. The results highlight the importance of spatial structure in modulating risks and other sustainability objectives. However, not all planning objectives are suited to quantified optimization and so the results should form part of an evidence base to improve the delivery of risk and sustainability management in future urban development. © 2017 The Authors Risk Analysis published by Wiley Periodicals, Inc. on behalf of Society for Risk Analysis.
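
    The Pareto-optimal set of spatial plans used above can be illustrated by a simple non-dominated filter: keep a candidate plan only if no other plan is at least as good on every objective and strictly better on at least one. A toy sketch with made-up scores on three of the six objectives (the real framework generates candidates with a spatially implemented genetic algorithm):

```python
# Sketch: non-dominated (Pareto) filtering of candidate plans scored on several
# minimization objectives. Toy data only.
def dominates(a, b):
    """a dominates b if it is no worse on every objective and better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(plans):
    return [p for p in plans
            if not any(dominates(q["scores"], p["scores"]) for q in plans if q is not p)]

plans = [  # hypothetical (heat risk, flood risk, travel cost) scores per plan
    {"name": "compact core", "scores": (0.8, 0.3, 0.2)},
    {"name": "riverside growth", "scores": (0.4, 0.9, 0.3)},
    {"name": "dispersed growth", "scores": (0.2, 0.4, 0.9)},
    {"name": "dominated plan", "scores": (0.9, 0.9, 0.9)},
]
for p in pareto_front(plans):
    print(p["name"], p["scores"])
```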

  17. Real time selective harmonic minimization for multilevel inverters using genetic algorithm and artificial neural network angle generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filho, Faete J; Tolbert, Leon M; Ozpineci, Burak

    2012-01-01

    The work developed here proposes a methodology for calculating switching angles for varying DC sources in a multilevel cascaded H-bridges converter. In this approach the required fundamental is achieved, the lower harmonics are minimized, and the system can be implemented in real time with low memory requirements. Genetic algorithm (GA) is the stochastic search method to find the solution for the set of equations where the input voltages are the known variables and the switching angles are the unknown variables. With the dataset generated by GA, an artificial neural network (ANN) is trained to store the solutions without excessive memory storage requirements. This trained ANN then senses the voltage of each cell and produces the switching angles in order to regulate the fundamental at 120 V and eliminate or minimize the low order harmonics while operating in real time.

  18. Optimized diffusion gradient orientation schemes for corrupted clinical DTI data sets.

    PubMed

    Dubois, J; Poupon, C; Lethimonnier, F; Le Bihan, D

    2006-08-01

    A method is proposed for generating schemes of diffusion gradient orientations which allow the diffusion tensor to be reconstructed from partial data sets in clinical DT-MRI, should the acquisition be corrupted or terminated before completion because of patient motion. A general energy-minimization electrostatic model was developed in which the interactions between orientations are weighted according to their temporal order during acquisition. In this report, two corruption scenarios were specifically considered for generating relatively uniform schemes of 18 and 60 orientations, with useful subsets of 6 and 15 orientations. The sets and subsets were compared to conventional sets through their energy, condition number and rotational invariance. Schemes of 18 orientations were tested on a volunteer. The optimized sets were similar to uniform sets in terms of energy, condition number and rotational invariance, whether the complete set or only a subset was considered. Diffusion maps obtained in vivo were close to those for uniform sets whatever the acquisition time was. This was not the case with conventional schemes, whose subset uniformity was insufficient. With the proposed approach, sets of orientations responding to several corruption scenarios can be generated, which is potentially useful for imaging uncooperative patients or infants.
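
    A rough sketch of the weighted electrostatic idea: place antipodally symmetric unit vectors on the sphere, weight the pairwise repulsion so that orientations acquired early interact most strongly (so an interrupted scan still leaves a usable, nearly uniform subset), and accept random perturbations that lower the energy. The weighting rule and parameters below are guesses, not the authors' exact model:

```python
# Sketch: electrostatic-style optimization of diffusion gradient directions with
# pairwise repulsion weighted by acquisition order. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def energy(dirs, weights):
    e = 0.0
    for i in range(len(dirs)):
        for j in range(i + 1, len(dirs)):
            # antipodally symmetric charges: repel both d_j and -d_j
            e += weights[i, j] * (1.0 / np.linalg.norm(dirs[i] - dirs[j])
                                  + 1.0 / np.linalg.norm(dirs[i] + dirs[j]))
    return e

def optimize(n=18, early=6, steps=4000, step_size=0.05):
    dirs = rng.normal(size=(n, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    w = np.ones((n, n))
    w[:early, :early] = 3.0       # hypothetical: early orientations interact more strongly
    best = energy(dirs, w)
    for _ in range(steps):
        k = rng.integers(n)
        cand = dirs.copy()
        cand[k] = cand[k] + step_size * rng.normal(size=3)
        cand[k] /= np.linalg.norm(cand[k])
        e = energy(cand, w)
        if e < best:              # accept improving perturbations only
            dirs, best = cand, e
    return dirs, best

dirs, e = optimize()
print("final weighted energy:", round(e, 2))
```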

  19. Elliptic surface grid generation on minimal and parametrized surfaces

    NASA Technical Reports Server (NTRS)

    Spekreijse, S. P.; Nijhuis, G. H.; Boerstoel, J. W.

    1995-01-01

    An elliptic grid generation method is presented which generates excellent boundary conforming grids in domains in 2D physical space. The method is based on the composition of an algebraic and elliptic transformation. The composite mapping obeys the familiar Poisson grid generation system with control functions specified by the algebraic transformation. New expressions are given for the control functions. Grid orthogonality at the boundary is achieved by modification of the algebraic transformation. It is shown that grid generation on a minimal surface in 3D physical space is in fact equivalent to grid generation in a domain in 2D physical space. A second elliptic grid generation method is presented which generates excellent boundary conforming grids on smooth surfaces. It is assumed that the surfaces are parametrized and that the grid only depends on the shape of the surface and is independent of the parametrization. Concerning surface modeling, it is shown that bicubic Hermite interpolation is an excellent method to generate a smooth surface which is passing through a given discrete set of control points. In contrast to bicubic spline interpolation, there is extra freedom to model the tangent and twist vectors such that spurious oscillations are prevented.

  20. Assessing and minimizing contamination in time of flight based validation data

    NASA Astrophysics Data System (ADS)

    Lennox, Kristin P.; Rosenfield, Paul; Blair, Brenton; Kaplan, Alan; Ruz, Jaime; Glenn, Andrew; Wurtz, Ronald

    2017-10-01

    Time of flight experiments are the gold standard method for generating labeled training and testing data for the neutron/gamma pulse shape discrimination problem. As the popularity of supervised classification methods increases in this field, there will also be increasing reliance on time of flight data for algorithm development and evaluation. However, time of flight experiments are subject to various sources of contamination that lead to neutron and gamma pulses being mislabeled. Such labeling errors have a detrimental effect on classification algorithm training and testing, and should therefore be minimized. This paper presents a method for identifying minimally contaminated data sets from time of flight experiments and estimating the residual contamination rate. This method leverages statistical models describing neutron and gamma travel time distributions and is easily implemented using existing statistical software. The method produces a set of optimal intervals that balance the trade-off between interval size and nuisance particle contamination, and its use is demonstrated on a time of flight data set for Cf-252. The particular properties of the optimal intervals for the demonstration data are explored in detail.

  1. Minimizing forced outage risk in generator bidding

    NASA Astrophysics Data System (ADS)

    Das, Dibyendu

    Competition in power markets has exposed the participating companies to physical and financial uncertainties. Generator companies bid to supply power in a day-ahead market. Once their bids are accepted by the ISO they are bound to supply power. A random outage after acceptance of bids forces a generator to buy power from the expensive real-time hourly spot market and sell to the ISO at the set day-ahead market clearing price, incurring losses. A risk management technique is developed to assess this financial risk associated with forced outages of generators and then minimize it. This work presents a risk assessment module which measures the financial risk of generators bidding in an open market for different bidding scenarios. The day-ahead power market auction is modeled using a Unit Commitment algorithm and a combination of Normal and Cauchy distributions generate the real time hourly spot market. Risk profiles are derived and VaRs are calculated at 98 percent confidence level as a measure of financial risk. Risk Profiles and VaRs help the generators to analyze the forced outage risk and different factors affecting it. The VaRs and the estimated total earning for different bidding scenarios are used to develop a risk minimization module. This module will develop a bidding strategy of the generator company such that its estimated total earning is maximized keeping the VaR below a tolerable limit. This general framework of a risk management technique for the generating companies bidding in competitive day-ahead market can also help them in decisions related to building new generators.
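
    The risk assessment module described above can be caricatured with a small Monte Carlo sketch: simulate spot prices with a Normal core plus occasional Cauchy-distributed spikes, compute the earnings distribution when a forced outage forces replacement purchases at the spot price, and read off the 98% VaR. All numbers below are illustrative, not from the thesis:

```python
# Sketch: Monte Carlo estimate of 98% Value-at-Risk for a generator that must
# buy replacement power from the spot market after a forced outage.
import numpy as np

rng = np.random.default_rng(1)
n_sim = 100_000
da_price = 40.0          # $/MWh, day-ahead clearing price the generator is paid
committed_mwh = 100.0    # energy the generator is committed to deliver
outage_prob = 0.05       # probability of a forced outage in the delivery hour

# spot price: Normal core with occasional Cauchy-distributed spikes
spot = rng.normal(45.0, 8.0, n_sim)
spikes = rng.random(n_sim) < 0.1
spot[spikes] += np.abs(rng.standard_cauchy(spikes.sum())) * 5.0

outage = rng.random(n_sim) < outage_prob
revenue = da_price * committed_mwh * np.ones(n_sim)
cost = np.where(outage, spot * committed_mwh, 20.0 * committed_mwh)  # 20 $/MWh own fuel cost
earnings = revenue - cost

var_98 = np.percentile(earnings, 2.0)   # earnings level exceeded in 98% of scenarios
print(f"expected earnings: {earnings.mean():.0f} $, 98% VaR threshold: {var_98:.0f} $")
```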

  2. Reverse engineering time discrete finite dynamical systems: a feasible undertaking?

    PubMed

    Delgado-Eckert, Edgar

    2009-01-01

    With the advent of high-throughput profiling methods, interest in reverse engineering the structure and dynamics of biochemical networks is high. Recently an algorithm for reverse engineering of biochemical networks was developed by Laubenbacher and Stigler. It is a top-down approach using time discrete dynamical systems. One of its key steps includes the choice of a term order, a technicality imposed by the use of Gröbner-bases calculations. The aim of this paper is to identify minimal requirements on data sets to be used with this algorithm and to characterize optimal data sets. We found minimal requirements on a data set based on how many terms the functions to be reverse engineered display. Furthermore, we identified optimal data sets, which we characterized using a geometric property called "general position". Moreover, we developed a constructive method to generate optimal data sets, provided a codimensional condition is fulfilled. In addition, we present a generalization of their algorithm that does not depend on the choice of a term order. For this method we derived a formula for the probability of finding the correct model, provided the data set used is optimal. We analyzed the asymptotic behavior of the probability formula for a growing number of variables n (i.e. interacting chemicals). Unfortunately, this formula converges to zero very rapidly as n grows. Therefore, even if an optimal data set is used and the restrictions in using term orders are overcome, the reverse engineering problem remains unfeasible, unless prodigious amounts of data are available. Such large data sets are experimentally impossible to generate with today's technologies.

  3. Advanced scatter search approach and its application in a sequencing problem of mixed-model assembly lines in a case company

    NASA Astrophysics Data System (ADS)

    Liu, Qiong; Wang, Wen-xi; Zhu, Ke-ren; Zhang, Chao-yong; Rao, Yun-qing

    2014-11-01

    Mixed-model assembly line sequencing is significant in reducing the production time and overall cost of production. To improve production efficiency, a mathematical model aiming simultaneously to minimize overtime, idle time and total set-up costs is developed. To obtain high-quality and stable solutions, an advanced scatter search approach is proposed. In the proposed algorithm, a new diversification generation method based on a genetic algorithm is presented to generate a set of potentially diverse and high-quality initial solutions. Many methods, including reference set update, subset generation, solution combination and improvement methods, are designed to maintain the diversification of populations and to obtain high-quality ideal solutions. The proposed model and algorithm are applied and validated in a case company. The results indicate that the proposed advanced scatter search approach is significant for mixed-model assembly line sequencing in this company.

  4. Internal combustion engine report: Spark ignited ICE GenSet optimization and novel concept development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, J.; Blarigan, P. Van

    1998-08-01

    In this manuscript the authors report on two projects, each of which has the goal of producing cost effective hydrogen utilization technologies. These projects are: (1) the development of an electrical generation system using a conventional four-stroke spark-ignited internal combustion engine generator combination (SI-GenSet) optimized for maximum efficiency and minimum emissions, and (2) the development of a novel internal combustion engine concept. The SI-GenSet will be optimized to run on either hydrogen or hydrogen-blends. The novel concept seeks to develop an engine that optimizes the Otto cycle in a free piston configuration while minimizing all emissions. To this end the authors are developing a rapid combustion homogeneous charge compression ignition (HCCI) engine using a linear alternator for both power take-off and engine control. Targeted applications include stationary electrical power generation, stationary shaft power generation, hybrid vehicles, and nearly any other application now being accomplished with internal combustion engines.

  5. OPTIM: Computer program to generate a vertical profile which minimizes aircraft fuel burn or direct operating cost. User's guide

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The program generates a profile of altitude, airspeed, and flight path angle as a function of range between a given set of origin and destination points for particular models of transport aircraft provided by NASA. Inputs to the program include the vertical wind profile, the aircraft takeoff weight, the costs of time and fuel, certain constraint parameters and control flags. The profile can be near optimum in the sense of minimizing: (1) fuel, (2) time, or (3) a combination of fuel and time (direct operating cost (DOC)). The user can also, as an option, specify the length of time the flight is to span. The theory behind the technical details of this program is also presented.

  6. Fuzzy automata and pattern matching

    NASA Technical Reports Server (NTRS)

    Setzer, C. B.; Warsi, N. A.

    1986-01-01

    A wide-ranging search for articles and books concerned with fuzzy automata and syntactic pattern recognition is presented. A number of survey articles on image processing and feature detection were included. Hough's algorithm is presented to illustrate the way in which knowledge about an image can be used to interpret the details of the image. It was found that in hand-generated pictures, the algorithm worked well on following the straight lines, but had great difficulty turning corners. An algorithm was developed which produces a minimal finite automaton recognizing a given finite set of strings. One difficulty of the construction is that, in some cases, this minimal automaton is not unique for a given set of strings and a given maximum length. This algorithm compares favorably with other inference algorithms. More importantly, the algorithm produces an automaton with a rigorously described relationship to the original set of strings that does not depend on the algorithm itself.
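
    For contrast with the construction discussed above, the textbook minimal acyclic automaton that accepts exactly a finite set of strings can be built by merging trie states whose right languages (remaining suffix sets) coincide. The sketch below does exactly that on a toy word list; the article's algorithm differs in only constraining behaviour up to a maximum string length, which is why its result need not be unique:

```python
# Sketch: minimal acyclic automaton accepting exactly a finite set of strings,
# built by merging prefix-tree states with identical right languages.
def right_languages(strings):
    """Map each reachable prefix to the set of suffixes that complete it into a word."""
    langs = {}
    for w in strings:
        for i in range(len(w) + 1):
            langs.setdefault(w[:i], set()).add(w[i:])
    return langs

def minimal_automaton(strings):
    langs = right_languages(strings)
    # states = distinct right languages; prefixes with the same suffix set merge
    state_of = {lang: idx for idx, lang in enumerate({frozenset(v) for v in langs.values()})}
    transitions = {}
    for prefix, lang in langs.items():
        src = state_of[frozenset(lang)]
        for symbol in {s[0] for s in lang if s}:
            transitions[(src, symbol)] = state_of[frozenset(langs[prefix + symbol])]
    start = state_of[frozenset(langs[""])]
    accepting = {state_of[frozenset(v)] for v in langs.values() if "" in v}
    return start, accepting, transitions

start, accepting, delta = minimal_automaton({"cat", "car", "cart", "dog"})
print("states:", len({start} | accepting | set(delta.values()) | {s for s, _ in delta}))
print("transitions:", sorted(delta.items()))
```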

  7. Vacuum stability in the early universe and the backreaction of classical gravity.

    PubMed

    Markkanen, Tommi

    2018-03-06

    In the case of a metastable electroweak vacuum, the quantum corrected effective potential plays a crucial role in the potential instability of the standard model. In the early universe, in particular during inflation and reheating, this instability can be triggered leading to catastrophic vacuum decay. We discuss how the large space-time curvature of the early universe can be incorporated in the calculation and in many cases significantly modify the flat space prediction. The two key new elements are the unavoidable generation of the non-minimal coupling between the Higgs field and the scalar curvature of gravity and a curvature induced contribution to the running of the constants. For the minimal set-up of the standard model and a decoupled inflation sector we show how a metastable vacuum can lead to very tight bounds for the non-minimal coupling. We also discuss a novel and very much related dark matter generation mechanism. This article is part of the Theo Murphy meeting issue 'Higgs cosmology'. © 2018 The Author(s).

  8. Vacuum stability in the early universe and the backreaction of classical gravity

    NASA Astrophysics Data System (ADS)

    Markkanen, Tommi

    2018-01-01

    In the case of a metastable electroweak vacuum, the quantum corrected effective potential plays a crucial role in the potential instability of the standard model. In the early universe, in particular during inflation and reheating, this instability can be triggered leading to catastrophic vacuum decay. We discuss how the large space-time curvature of the early universe can be incorporated in the calculation and in many cases significantly modify the flat space prediction. The two key new elements are the unavoidable generation of the non-minimal coupling between the Higgs field and the scalar curvature of gravity and a curvature induced contribution to the running of the constants. For the minimal set-up of the standard model and a decoupled inflation sector we show how a metastable vacuum can lead to very tight bounds for the non-minimal coupling. We also discuss a novel and very much related dark matter generation mechanism. This article is part of the Theo Murphy meeting issue 'Higgs cosmology'.

  9. Minimizing Expected Maximum Risk from Cyber-Attacks with Probabilistic Attack Success

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhuiyan, Tanveer H.; Nandi, Apurba; Medal, Hugh

    The goal of our work is to enhance network security by generating partial cut-sets on an attack graph: subsets of edges that remove paths from initially vulnerable nodes (initial security conditions) to goal nodes (critical assets), given costs for cutting each edge and a limited overall budget.

  10. Resolving Task Rule Incongruence during Task Switching by Competitor Rule Suppression

    ERIC Educational Resources Information Center

    Meiran, Nachshon; Hsieh, Shulan; Dimov, Eduard

    2010-01-01

    Task switching requires maintaining readiness to execute any task of a given set of tasks. However, when tasks switch, the readiness to execute the now-irrelevant task generates interference, as seen in the task rule incongruence effect. Overcoming such interference requires fine-tuned inhibition that impairs task readiness only minimally. In an…

  11. Summary of the searches for squarks and gluinos using √s = 8 TeV pp collisions with the ATLAS experiment at the LHC

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2015-10-08

    A summary is presented of ATLAS searches for gluinos and first- and second-generation squarks in final states containing jets and missing transverse momentum, with or without leptons or b-jets, in the √s = 8 TeV data set collected at the Large Hadron Collider in 2012. This paper reports the results of new interpretations and statistical combinations of previously published analyses, as well as a new analysis. Since no significant excess of events over the Standard Model expectation is observed, the data are used to set limits in a variety of models. In all the considered simplified models that assume R-parity conservation, the limit on the gluino mass exceeds 1150 GeV at 95% confidence level, for an LSP mass smaller than 100 GeV. Moreover, exclusion limits are set for left-handed squarks in a phenomenological MSSM model, a minimal Supergravity/Constrained MSSM model, R-parity-violation scenarios, a minimal gauge-mediated supersymmetry breaking model, a natural gauge mediation model, a non-universal Higgs mass model with gaugino mediation and a minimal model of universal extra dimensions.

  12. An efficient graph theory based method to identify every minimal reaction set in a metabolic network

    PubMed Central

    2014-01-01

    Background Development of cells with minimal metabolic functionality is gaining importance due to their efficiency in producing chemicals and fuels. Existing computational methods to identify minimal reaction sets in metabolic networks are computationally expensive. Further, they identify only one of the several possible minimal reaction sets. Results In this paper, we propose an efficient graph theory based recursive optimization approach to identify all minimal reaction sets. Graph theoretical insights offer systematic methods to not only reduce the number of variables in math programming and increase its computational efficiency, but also provide efficient ways to find multiple optimal solutions. The efficacy of the proposed approach is demonstrated using case studies from Escherichia coli and Saccharomyces cerevisiae. In case study 1, the proposed method identified three minimal reaction sets, each containing 38 reactions, in the Escherichia coli central metabolic network with 77 reactions. Analysis of these three minimal reaction sets revealed that one of them is more suitable for developing a minimal metabolism cell compared to the other two due to a practically achievable internal flux distribution. In case study 2, the proposed method identified 256 minimal reaction sets from the Saccharomyces cerevisiae genome scale metabolic network with 620 reactions. The proposed method required only 4.5 hours to identify all 256 minimal reaction sets and showed a significant reduction (approximately 80%) in the solution time when compared to the existing methods for finding minimal reaction sets. Conclusions Identification of all minimal reaction sets in metabolic networks is essential since different minimal reaction sets have different properties that affect bioprocess development. The proposed method correctly identified all minimal reaction sets in both case studies. The proposed method is computationally efficient compared to other methods for finding minimal reaction sets and is useful to employ with genome-scale metabolic networks. PMID:24594118
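
    The notion of a minimal reaction set, a subset of reactions that still supports production of the target and contains no smaller such subset, can be illustrated by brute force on a toy network. The paper's method is a graph theory guided recursive optimization that scales far beyond this sketch; the network below is made up:

```python
# Sketch: brute-force enumeration of all minimal reaction sets in a toy metabolic
# network, checking whether a subset of reactions can still produce "biomass"
# from the growth medium. Illustrative only.
from itertools import combinations

reactions = {                      # hypothetical toy network: inputs -> outputs
    "r1": ({"glc"}, {"g6p"}),
    "r2": ({"g6p"}, {"pyr"}),
    "r3": ({"glc"}, {"pyr"}),      # alternative route to pyruvate
    "r4": ({"pyr"}, {"biomass"}),
}
medium, target = {"glc"}, "biomass"

def producible(active):
    pool = set(medium)
    changed = True
    while changed:                 # forward-propagate metabolites through active reactions
        changed = False
        for r in active:
            inputs, outputs = reactions[r]
            if inputs <= pool and not outputs <= pool:
                pool |= outputs
                changed = True
    return target in pool

minimal_sets = []
names = sorted(reactions)
for k in range(1, len(names) + 1):
    for subset in combinations(names, k):
        if not producible(subset):
            continue
        if any(set(m) <= set(subset) for m in minimal_sets):
            continue               # contains a smaller working set: not minimal
        minimal_sets.append(subset)
print(minimal_sets)                # -> [('r3', 'r4'), ('r1', 'r2', 'r4')]
```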

  13. Maxwell Strata and Cut Locus in the Sub-Riemannian Problem on the Engel Group

    NASA Astrophysics Data System (ADS)

    Ardentov, Andrei A.; Sachkov, Yuri L.

    2017-12-01

    We consider the nilpotent left-invariant sub-Riemannian structure on the Engel group. This structure gives a fundamental local approximation of a generic rank 2 sub-Riemannian structure on a 4-manifold near a generic point (in particular, of the kinematic models of a car with a trailer). On the other hand, this is the simplest sub-Riemannian structure of step three. We describe the global structure of the cut locus (the set of points where geodesics lose their global optimality), the Maxwell set (the set of points that admit more than one minimizer), and the intersection of the cut locus with the caustic (the set of conjugate points along all geodesics). The group of symmetries of the cut locus is described: it is generated by a one-parameter group of dilations R+ and a discrete group of reflections Z2 × Z2 × Z2. The cut locus admits a stratification with 6 three-dimensional strata, 12 two-dimensional strata, and 2 one-dimensional strata. Three-dimensional strata of the cut locus are Maxwell strata of multiplicity 2 (for each point there are 2 minimizers). Two-dimensional strata of the cut locus consist of conjugate points. Finally, one-dimensional strata are Maxwell strata of infinite multiplicity, they consist of conjugate points as well. Projections of sub-Riemannian geodesics to the 2-dimensional plane of the distribution are Euler elasticae. For each point of the cut locus, we describe the Euler elasticae corresponding to minimizers coming to this point. Finally, we describe the structure of the optimal synthesis, i. e., the set of minimizers for each terminal point in the Engel group.

  14. Intrusion detection using rough set classification.

    PubMed

    Zhang, Lian-hua; Zhang, Guan-hua; Zhang, Jie; Bai, Ying-cai

    2004-09-01

    Recently, machine learning-based intrusion detection approaches have been the subject of extensive research because they can detect both misuse and anomaly. In this paper, rough set classification (RSC), a modern learning algorithm, is used to rank the features extracted for detecting intrusions and to generate intrusion detection models. Feature ranking is a very critical step when building the model. RSC performs feature ranking before generating rules, and converts the feature ranking to a minimal hitting set problem, which is addressed using a genetic algorithm (GA). In classical approaches this is done using a Support Vector Machine (SVM) by executing many iterations, each of which removes one useless feature. Compared with those methods, our method can avoid many iterations. In addition, a hybrid genetic algorithm is proposed to increase the convergence speed and decrease the training time of RSC. The models generated by RSC take the form of "IF-THEN" rules, which have the advantage of being explainable. Tests and comparison of RSC with SVM on DARPA benchmark data showed that for Probe and DoS attacks both RSC and SVM yielded highly accurate results (greater than 99% accuracy on the testing set).
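
    The reduction to a minimal hitting set solved by a GA can be sketched generically: encode feature subsets as bit masks, penalize masks that fail to hit every discernibility set, and evolve toward the smallest feasible mask. Toy data and a plain GA, not the paper's hybrid variant:

```python
# Sketch: a small genetic algorithm for the minimal hitting set problem, which is
# how RSC-style feature ranking can be phrased (each discernibility set must be
# hit by at least one selected feature). Toy data only.
import random

random.seed(0)
n_features = 8
# hypothetical discernibility sets: each must contain at least one chosen feature
sets_to_hit = [{0, 2}, {1, 2, 5}, {3, 4}, {2, 3}, {5, 6, 7}]

def fitness(mask):
    hit_all = all(any(mask[f] for f in s) for s in sets_to_hit)
    penalty = 0 if hit_all else 100
    return sum(mask) + penalty            # minimize: feature count plus infeasibility penalty

def crossover(a, b):
    cut = random.randrange(1, n_features)
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.1):
    return [1 - g if random.random() < rate else g for g in mask]

population = [[random.randint(0, 1) for _ in range(n_features)] for _ in range(40)]
for _ in range(200):
    population.sort(key=fitness)
    parents = population[:20]             # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

best = min(population, key=fitness)
print("selected features:", [i for i, g in enumerate(best) if g], "fitness:", fitness(best))
```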

  15. Minimal non-abelian supersymmetric Twin Higgs

    DOE PAGES

    Badziak, Marcin; Harigaya, Keisuke

    2017-10-17

    We propose a minimal supersymmetric Twin Higgs model that can accommodate tuning of the electroweak scale for heavy stops better than 10% with high mediation scales of supersymmetry breaking. A crucial ingredient of this model is a new SU(2)_X gauge symmetry which provides a D-term potential that generates a large SU(4) invariant coupling for the Higgs sector, and only a small set of particles charged under SU(2)_X, which allows the model to be perturbative around the Planck scale. The new gauge interaction drives the top Yukawa coupling to smaller values at higher energy scales, which also reduces the tuning.

  16. Reduction of shock induced noise in imperfectly expanded supersonic jets using convex optimization

    NASA Astrophysics Data System (ADS)

    Adhikari, Sam

    2007-11-01

    Imperfectly expanded jets generate screech noise. The imbalance between the backpressure and the exit pressure of the imperfectly expanded jets produces shock cells and expansion or compression waves from the nozzle. The instability waves and the shock cells interact to generate the screech sound. The mathematical model consists of cylindrical coordinate based full Navier-Stokes equations and large-eddy-simulation turbulence modeling. Analytical and computational analysis of the three-dimensional helical effects provides a model that relates several parameters with shock cell patterns, screech frequency and distribution of shock generation locations. Convex optimization techniques minimize the shock cell patterns and the instability waves. The objective functions are (convex) quadratic and the constraint functions are affine. In the quadratic optimization programs, minimization of the quadratic functions over a set of polyhedra provides the optimal result. Various industry standard methods such as regression analysis, distance between polyhedra, bounding variance, Markowitz optimization, and second order cone programming are used for quadratic optimization.

  17. A level set-based topology optimization method for simultaneous design of elastic structure and coupled acoustic cavity using a two-phase material model

    NASA Astrophysics Data System (ADS)

    Noguchi, Yuki; Yamamoto, Takashi; Yamada, Takayuki; Izui, Kazuhiro; Nishiwaki, Shinji

    2017-09-01

    This paper proposes a level set-based topology optimization method for the simultaneous design of acoustic and structural material distributions. In this study, we develop a two-phase material model that is a mixture of an elastic material and an acoustic medium, to represent an elastic structure and an acoustic cavity by controlling a volume fraction parameter. In the proposed model, boundary conditions at the two-phase material boundaries are satisfied naturally, avoiding the need to express these boundaries explicitly. We formulate a topology optimization problem to minimize the sound pressure level using this two-phase material model and a level set-based method that obtains topologies free from grayscales. The topological derivative of the objective functional is approximately derived using a variational approach and the adjoint variable method and is utilized to update the level set function via a time evolutionary reaction-diffusion equation. Several numerical examples present optimal acoustic and structural topologies that minimize the sound pressure generated from a vibrating elastic structure.

  18. Design optimization of a fuzzy distributed generation (DG) system with multiple renewable energy sources

    NASA Astrophysics Data System (ADS)

    Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.

    2012-09-01

    The global rise in energy demands brings major obstacles to many energy organizations in providing adequate energy supply. Hence, many techniques to generate cost effective, reliable and environmentally friendly alternative energy sources are being explored. One such method is the integration of photovoltaic cells, wind turbine generators and fuel-based generators, together with storage batteries. Such power systems are known as distributed generation (DG) power systems. However, the application of DG power systems raises certain issues such as cost effectiveness, environmental impact and reliability. The modelling as well as the optimization of this DG power system was successfully performed in previous work using Particle Swarm Optimization (PSO). The central idea of that work was to minimize cost, minimize emissions and maximize reliability (a multi-objective (MO) setting) with respect to the power balance and design requirements. In this work, we introduce a fuzzy model that takes into account the uncertain nature of certain variables in the DG system which are dependent on the weather conditions (such as the insolation and wind speed profiles). The MO optimization in a fuzzy environment was performed by applying the Hopfield Recurrent Neural Network (HNN). Analysis of the optimized results was then carried out.

  19. OxMaR: open source free software for online minimization and randomization for clinical trials.

    PubMed

    O'Callaghan, Christopher A

    2014-01-01

    Minimization is a valuable method for allocating participants between the control and experimental arms of clinical studies. The use of minimization reduces differences that might arise by chance between the study arms in the distribution of patient characteristics such as gender, ethnicity and age. However, unlike randomization, minimization requires real time assessment of each new participant with respect to the preceding distribution of relevant participant characteristics within the different arms of the study. For multi-site studies, this necessitates centralized computational analysis that is shared between all study locations. Unfortunately, there is no suitable freely available open source or free software that can be used for this purpose. OxMaR was developed to enable researchers in any location to use minimization for patient allocation and to access the minimization algorithm using any device that can connect to the internet such as a desktop computer, tablet or mobile phone. The software is complete in itself and requires no special packages or libraries to be installed. It is simple to set up and run over the internet using online facilities which are very low cost or even free to the user. Importantly, it provides real time information on allocation to the study lead or administrator and generates real time distributed backups with each allocation. OxMaR can readily be modified and customised and can also be used for standard randomization. It has been extensively tested and has been used successfully in a low budget multi-centre study. Hitherto, the logistical difficulties involved in minimization have precluded its use in many small studies, and this software should allow more widespread use of minimization, which should lead to studies with better matched control and experimental arms. OxMaR should be particularly valuable in low resource settings.
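
    The minimization rule underlying tools of this kind can be sketched as a plain function: assign each new participant to whichever arm currently carries fewer participants sharing that person's factor levels, using a biased coin so allocation stays unpredictable. The factor names, biased-coin probability, and scoring rule below are illustrative, not OxMaR's exact implementation:

```python
# Sketch: a simple covariate-minimization allocation rule with a biased coin.
# Illustrative only; not OxMaR's code or exact algorithm.
import random

arms = ["control", "experimental"]
counts = {arm: {"sex": {"F": 0, "M": 0}, "age": {"<60": 0, ">=60": 0}} for arm in arms}

def imbalance_if_assigned(arm, participant):
    """Count, over this participant's factor levels, how many are already in the arm."""
    return sum(counts[arm][factor][level] for factor, level in participant.items())

def allocate(participant, p_follow=0.8):
    scores = {arm: imbalance_if_assigned(arm, participant) for arm in arms}
    preferred = min(arms, key=lambda a: scores[a])
    other = [a for a in arms if a != preferred][0]
    arm = preferred if (scores[preferred] < scores[other] and random.random() < p_follow) \
          else random.choice(arms)
    for factor, level in participant.items():
        counts[arm][factor][level] += 1
    return arm

random.seed(0)
for participant in [{"sex": "F", "age": "<60"}, {"sex": "F", "age": ">=60"},
                    {"sex": "M", "age": "<60"}, {"sex": "F", "age": "<60"}]:
    print(allocate(participant))
```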

  20. Behavioral and computational aspects of language and its acquisition

    NASA Astrophysics Data System (ADS)

    Edelman, Shimon; Waterfall, Heidi

    2007-12-01

    One of the greatest challenges facing the cognitive sciences is to explain what it means to know a language, and how the knowledge of language is acquired. The dominant approach to this challenge within linguistics has been to seek an efficient characterization of the wealth of documented structural properties of language in terms of a compact generative grammar: ideally, the minimal necessary set of innate, universal, exception-less, highly abstract rules that jointly generate all and only the observed phenomena and are common to all human languages. We review developmental, behavioral, and computational evidence that seems to favor an alternative view of language, according to which linguistic structures are generated by a large, open set of constructions of varying degrees of abstraction and complexity, which embody both form and meaning and are acquired through socially situated experience in a given language community, by probabilistic learning algorithms that resemble those at work in other cognitive modalities.

  1. ConfocalGN: A minimalistic confocal image generator

    NASA Astrophysics Data System (ADS)

    Dmitrieff, Serge; Nédélec, François

    Validating image analysis pipelines and training machine-learning segmentation algorithms require images with known features. Synthetic images can be used for this purpose, with the advantage that large reference sets can be produced easily. It is however essential to obtain images that are as realistic as possible in terms of noise and resolution, which is challenging in the field of microscopy. We describe ConfocalGN, a user-friendly software that can generate synthetic microscopy stacks from a ground truth (i.e. the observed object) specified as a 3D bitmap or a list of fluorophore coordinates. This software can analyze a real microscope image stack to set the noise parameters and directly generate new images of the object with noise characteristics similar to that of the sample image. With a minimal input from the user and a modular architecture, ConfocalGN is easily integrated with existing image analysis solutions.
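
    The pipeline ConfocalGN implements can be caricatured in a few lines: blur a ground-truth volume with a point-spread function, resample onto the detector's voxel grid, and add signal-dependent noise whose statistics are matched to a real sample image. The sketch below uses a Gaussian PSF and made-up noise parameters; it is not ConfocalGN's actual interface:

```python
# Sketch of a ConfocalGN-style synthetic stack: optical blur, voxel sampling,
# then shot noise and detector read noise. Parameters are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# ground truth: a bright line of fluorophores inside an empty volume
truth = np.zeros((60, 120, 120))
truth[30, 60, 20:100] = 1.0

def synthetic_stack(truth, psf_sigma=(4, 2, 2), voxel=(4, 2, 2),
                    gain=500.0, background=20.0, read_noise=3.0):
    blurred = gaussian_filter(truth, sigma=psf_sigma)            # optical blur (PSF)
    sampled = blurred[::voxel[0], ::voxel[1], ::voxel[2]]        # detector voxel grid
    photons = rng.poisson(gain * sampled + background)           # shot noise + background
    return photons + rng.normal(0.0, read_noise, photons.shape)  # detector read noise

stack = synthetic_stack(truth)
print(stack.shape, float(stack.max()))
```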

  2. Quantifying and minimizing entropy generation in AMTEC cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendricks, T.J.; Huang, C.

    1997-12-31

    Entropy generation in an AMTEC cell represents inherent power loss to the AMTEC cell. Minimizing cell entropy generation directly maximizes cell power generation and efficiency. An internal project is on-going at AMPS to identify, quantify and minimize entropy generation mechanisms within an AMTEC cell, with the goal of determining cost-effective design approaches for maximizing AMTEC cell power generation. Various entropy generation mechanisms have been identified and quantified. The project has investigated several cell design techniques in a solar-driven AMTEC system to minimize cell entropy generation and produce maximum power cell designs. In many cases, various sources of entropy generation are interrelated such that minimizing entropy generation requires cell and system design optimization. Some of the tradeoffs between various entropy generation mechanisms are quantified and explained and their implications on cell design are discussed. The relationship between AMTEC cell power and efficiency and entropy generation is presented and discussed.

  3. A new digitized reverse correction method for hypoid gears based on a one-dimensional probe

    NASA Astrophysics Data System (ADS)

    Li, Tianxing; Li, Jubo; Deng, Xiaozhong; Yang, Jianjun; Li, Genggeng; Ma, Wensuo

    2017-12-01

    In order to improve the tooth surface geometric accuracy and transmission quality of hypoid gears, a new digitized reverse correction method is proposed based on the measurement data from a one-dimensional probe. The minimization of tooth surface geometrical deviations is realized from the perspective of mathematical analysis and reverse engineering. Combining the analysis of complex tooth surface generation principles and the measurement mechanism of one-dimensional probes, the mathematical relationship between the theoretically designed tooth surface, the actual machined tooth surface and the deviation tooth surface is established, the mapping relation between machine-tool settings and tooth surface deviations is derived, and the essential connection between the accurate calculation of tooth surface deviations and the reverse correction method of machine-tool settings is revealed. Furthermore, a reverse correction model of machine-tool settings is built, a reverse correction strategy is planned, and the minimization of tooth surface deviations is achieved by means of numerical iterative reverse solution. On this basis, a digitized reverse correction system for hypoid gears is developed by the organic combination of numerical control generation, accurate measurement, computer numerical processing, and digitized correction. Finally, the correctness and practicability of the digitized reverse correction method are proved through a reverse correction experiment. The experimental results show that the tooth surface geometric deviations meet the engineering requirements after two trial cuts and one correction.
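
    A hedged way to picture the reverse correction loop is as a linearized least-squares update: a sensitivity matrix maps small machine-tool setting changes to changes in the measured deviations, and the settings are corrected iteratively until the deviations shrink into tolerance. The sensitivity matrix and measurement model below are made up for illustration; they are not the paper's derived mapping:

```python
# Sketch: reverse correction as an iterative linearized least-squares update.
# J (sensitivity matrix) and the measurement model are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_settings = 45, 6                    # measured grid points, adjustable settings
J = rng.normal(size=(n_points, n_settings))     # hypothetical sensitivity matrix
true_offset = rng.normal(scale=0.05, size=n_settings)

def measure(settings):
    """Deviations of the machined surface from design for given setting errors."""
    return J @ (settings - true_offset) + rng.normal(scale=1e-4, size=n_points)

settings = np.zeros(n_settings)
for iteration in range(3):                      # iterative reverse correction passes
    deviations = measure(settings)
    delta, *_ = np.linalg.lstsq(J, -deviations, rcond=None)
    settings = settings + delta
    print(f"pass {iteration}: RMS deviation = {np.sqrt(np.mean(deviations**2)):.2e}")
```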

  4. Application of a hybrid generation/utility assessment heuristic to a class of scheduling problems

    NASA Technical Reports Server (NTRS)

    Heyward, Ann O.

    1989-01-01

    A two-stage heuristic solution approach for a class of multiobjective, n-job, 1-machine scheduling problems is described. Minimization of job-to-job interference for n jobs is sought. The first stage generates alternative schedule sequences by interchanging pairs of schedule elements. The set of alternative sequences can represent nodes of a decision tree; each node is reached via decision to interchange job elements. The second stage selects the parent node for the next generation of alternative sequences through automated paired comparison of objective performance for all current nodes. An application of the heuristic approach to communications satellite systems planning is presented.

  5. Interface Control Document for the EMPACT Module that Estimates Electric Power Transmission System Response to EMP-Caused Damage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Werley, Kenneth Alan; Mccown, Andrew William

    The EPREP code is designed to evaluate the effects of an Electro-Magnetic Pulse (EMP) on the electric power transmission system. The EPREP code embodies an umbrella framework that allows a user to set up analysis conditions and to examine analysis results. The code links to three major physics/engineering modules. The first module describes the EM wave in space and time. The second module evaluates the damage caused by the wave on specific electric power (EP) transmission system components. The third module evaluates the consequence of the damaged network on its (reduced) ability to provide electric power to meet demand. This third module is the focus of the present paper. The EMPACT code serves as the third module. The EMPACT name denotes EMP effects on Alternating Current Transmission systems. The EMPACT algorithms compute electric power transmission network flow solutions under severely damaged network conditions. Initial solutions are often characterized by unacceptable network conditions including line overloads and bad voltages. The EMPACT code contains algorithms to optimally adjust network parameters to eliminate network problems while minimizing outages. System adjustments include automatically adjusting control equipment (generator V control, variable transformers, and variable shunts), as well as non-automatic control of generator power settings and minimal load shedding. The goal is to evaluate the minimal loss of customer load under equilibrium (steady-state) conditions during peak demand.

  6. Minimally flavored colored scalar in and the mass matrices constraints

    NASA Astrophysics Data System (ADS)

    Doršner, Ilja; Fajfer, Svjetlana; Košnik, Nejc; Nišandžić, Ivan

    2013-11-01

    The presence of a colored scalar that is a weak doublet with fractional electric charges of | Q| = 2 /3 and | Q| = 5 /3 with mass below 1 TeV can provide an explanation of the observed branching ratios in decays. The required combination of scalar and tensor operators in the effective Hamiltonian for is generated through the t-channel exchange. We focus on a scenario with a minimal set of Yukawa couplings that can address a semitauonic puzzle and show that its resolution puts a nontrivial bound on the product of the scalar couplings to and . We also derive additional constraints posed by , muon magnetic moment, lepton flavor violating decays μ → eγ, τ → μγ, τ → eγ, and τ electric dipole moment. The minimal set of Yukawa couplings is not only compatible with the mass generation in an SU(5) unification framework, a natural environment for colored scalars, but specifies all matter mixing parameters except for one angle in the up-type quark sector. We accordingly spell out predictions for the proton decay signatures through gauge boson exchange and show that p → π0 e + is suppressed with respect to and even p → K 0 e + in some parts of available parameter space. Impact of the colored scalar embedding in 45-dimensional representation of SU(5) on low-energy phenomenology is also presented. Finally, we make predictions for rare top and charm decays where presence of this scalar can be tested independently.

  7. Application of multi-objective optimization to pooled experiments of next generation sequencing for detection of rare mutations.

    PubMed

    Zilinskas, Julius; Lančinskas, Algirdas; Guarracino, Mario Rosario

    2014-01-01

    In this paper we propose some mathematical models to plan a Next Generation Sequencing experiment to detect rare mutations in pools of patients. A mathematical optimization problem is formulated for optimal pooling, with respect to minimization of the experiment cost. Then, two different strategies to replicate patients in pools are proposed, which have the advantage to decrease the overall costs. Finally, a multi-objective optimization formulation is proposed, where the trade-off between the probability to detect a mutation and overall costs is taken into account. The proposed solutions are devised in pursuance of the following advantages: (i) the solution guarantees mutations are detectable in the experimental setting, and (ii) the cost of the NGS experiment and its biological validation using Sanger sequencing is minimized. Simulations show replicating pools can decrease overall experimental cost, thus making pooling an interesting option.
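
    As a toy illustration of the cost-versus-detection trade-off discussed above (not the paper's actual formulation), the sketch below models a rare heterozygous variant carried by one patient in a pool of k diploid patients: the expected variant allele fraction is 1/(2k), and detection requires a minimum number of supporting reads at a given depth. Pool size, sequencing depth, and cost figures are hypothetical.

```python
# Toy model (not the paper's formulation) of the pooling trade-off: a rare
# heterozygous variant carried by one patient in a pool of k diploid patients
# has allele fraction 1/(2k); detection requires >= m supporting reads at depth D.
from math import comb

def detection_prob(k: int, depth: int, min_reads: int = 3) -> float:
    p = 1.0 / (2 * k)                       # expected variant allele fraction
    miss = sum(comb(depth, i) * p**i * (1 - p)**(depth - i) for i in range(min_reads))
    return 1.0 - miss

n_patients, depth, cost_per_pool = 96, 500, 1.0   # hypothetical experiment
for k in (4, 8, 16):
    n_pools = n_patients // k
    print(f"pool size {k:2d}: {n_pools} pools, "
          f"P(detect) ~ {detection_prob(k, depth):.3f}, cost ~ {n_pools * cost_per_pool:.0f}")
```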

  8. The Abbreviation of Personality, or how to Measure 200 Personality Scales with 200 Items

    PubMed Central

    Yarkoni, Tal

    2010-01-01

    Personality researchers have recently advocated the use of very short personality inventories in order to minimize administration time. However, few such inventories are currently available. Here I introduce an automated method that can be used to abbreviate virtually any personality inventory with minimal effort. After validating the method against existing measures in Studies 1 and 2, a new 181-item inventory is generated in Study 3 that accurately recaptures scores on 8 different broadband inventories comprising 203 distinct scales. Collectively, the results validate a powerful new way to improve the efficiency of personality measurement in research settings. PMID:20419061

  9. Limitation of Biofuel Production in Europe from the Forest Market

    NASA Astrophysics Data System (ADS)

    Leduc, Sylvain; Wetterlund, Elisabeth; Dotzauer, Erik; Kindermann, Georg

    2013-04-01

    The European Union has set a 10% target for the share of biofuel in the transportation sector to be met by 2020. To reach this target, second generation biofuel is expected to replace 3 to 5% of the transport fossil fuel consumption. But the competition for the feedstock is an issue and makes the planning of second generation biofuel plants a challenge. Moreover, no commercial second generation biofuel production plant is under operation, but if reaching commercial status, this type of production plant is expected to become very large. In order to minimize the transportation costs and to tackle the competition for the feedstock with the existing woody based industries, the geographical location of biofuel production plants becomes an issue. This study investigates the potential of economically feasible second generation biofuel in Europe by 2020 with regard to the competition for the feedstock with the existing woody biomass based industries (CHP, pulp and paper mills, sawmills...). To assess the biofuel potential in Europe, a techno-economic, geographically explicit model, BeWhere, is used. It determines the optimal locations of bio-energy production plants by minimizing the costs and CO2 emissions of the entire supply chain. The existing woody based industries have to first meet their wood demand, and if the amount of wood that remains is sufficient, new bio-energy production plants can be set up. Preliminary results show that CHP plants are preferably chosen over biofuel production plants. Strong biofuel policy support is needed in order to consequently increase the biofuel production in Europe. The carbon tax influences the emission reduction to a higher degree than the biofuel support. And the potential of second generation biofuel would at most reach 3% of the European transport fuel if the wood demand does not increase from 2010.

  10. Four simple rules that are sufficient to generate the mammalian blastocyst

    PubMed Central

    Nissen, Silas Boye; Perera, Marta; Gonzalez, Javier Martin; Morgani, Sophie M.; Jensen, Mogens H.; Sneppen, Kim; Brickman, Joshua M.

    2017-01-01

    Early mammalian development is both highly regulative and self-organizing. It involves the interplay of cell position, predetermined gene regulatory networks, and environmental interactions to generate the physical arrangement of the blastocyst with precise timing. However, this process occurs in the absence of maternal information and in the presence of transcriptional stochasticity. How does the preimplantation embryo ensure robust, reproducible development in this context? It utilizes a versatile toolbox that includes complex intracellular networks coupled to cell-cell communication, segregation by differential adhesion, and apoptosis. Here, we ask whether a minimal set of developmental rules based on this toolbox is sufficient for successful blastocyst development, and to what extent these rules can explain mutant and experimental phenotypes. We implemented experimentally reported mechanisms for polarity, cell-cell signaling, adhesion, and apoptosis as a set of developmental rules in an agent-based in silico model of physically interacting cells. We find that this model quantitatively reproduces specific mutant phenotypes and provides an explanation for the emergence of heterogeneity without requiring any initial transcriptional variation. It also suggests that a fixed time point for the cells' competence of fibroblast growth factor (FGF)/extracellular signal-regulated kinase (ERK) sets an embryonic clock that enables certain scaling phenomena, a concept that we evaluate quantitatively by manipulating embryos in vitro. Based on these observations, we conclude that the minimal set of rules enables the embryo to experiment with stochastic gene expression and could provide the robustness necessary for the evolutionary diversification of the preimplantation gene regulatory network. PMID:28700688

  11. Exploring the parameter space of the coarse-grained UNRES force field by random search: selecting a transferable medium-resolution force field.

    PubMed

    He, Yi; Xiao, Yi; Liwo, Adam; Scheraga, Harold A

    2009-10-01

    We explored the energy-parameter space of our coarse-grained UNRES force field for large-scale ab initio simulations of protein folding, to obtain good initial approximations for hierarchical optimization of the force field with new virtual-bond-angle bending and side-chain-rotamer potentials which we recently introduced to replace the statistical potentials. 100 sets of energy-term weights were generated randomly, and good sets were selected by carrying out replica-exchange molecular dynamics simulations of two peptides with a minimal alpha-helical and a minimal beta-hairpin fold, respectively: the tryptophan cage (PDB code: 1L2Y) and tryptophan zipper (PDB code: 1LE1). Eight sets of parameters produced native-like structures of these two peptides. These eight sets were tested on two larger proteins: the engrailed homeodomain (PDB code: 1ENH) and FBP WW domain (PDB code: 1E0L); two sets were found to produce native-like conformations of these proteins. These two sets were tested further on a larger set of nine proteins with alpha or alpha + beta structure and found to locate native-like structures of most of them. These results demonstrate that, in addition to finding reasonable initial starting points for optimization, an extensive search of parameter space is a powerful method to produce a transferable force field. Copyright 2009 Wiley Periodicals, Inc.
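
    The parameter search described above amounts to drawing random sets of energy-term weights, scoring each set with expensive folding simulations, and keeping the best performers. The Python sketch below shows that selection loop with a placeholder scoring function; the weight ranges and the toy objective are assumptions and stand in for the replica-exchange simulations used in the study.

```python
# Sketch of the random-search stage: draw random sets of energy-term weights,
# score each set with a user-supplied evaluation function (a placeholder here),
# and keep the best-performing sets.  Names and the scoring function are
# assumptions, not the UNRES implementation.
import numpy as np

def random_weight_sets(n_sets: int, n_terms: int, low: float = 0.0, high: float = 2.0,
                       seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    return rng.uniform(low, high, size=(n_sets, n_terms))

def score(weights: np.ndarray) -> float:
    """Placeholder for an expensive folding simulation returning e.g. native RMSD."""
    return float(np.sum((weights - 1.0) ** 2))    # toy objective

candidates = random_weight_sets(n_sets=100, n_terms=8)
ranked = sorted(candidates, key=score)
best_sets = ranked[:8]                            # keep the 8 best sets, as in the study
print("best score:", score(best_sets[0]))
```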

  12. Quantum power functional theory for many-body dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, Matthias, E-mail: Matthias.Schmidt@uni-bayreuth.de

    2015-11-07

    We construct a one-body variational theory for the time evolution of nonrelativistic quantum many-body systems. The position- and time-dependent one-body density, particle current, and time derivative of the current act as three variational fields. The generating (power rate) functional is minimized by the true current time derivative. The corresponding Euler-Lagrange equation, together with the continuity equation for the density, forms a closed set of one-body equations of motion. Space- and time-nonlocal one-body forces are generated by the superadiabatic contribution to the functional. The theory applies to many-electron systems.

  13. Three essays on pricing and risk management in electricity markets

    NASA Astrophysics Data System (ADS)

    Kotsan, Serhiy

    2005-07-01

    A set of three papers forms this dissertation. In the first paper I analyze an electricity market that does not clear. The system operator satisfies fixed demand at a fixed price, and attempts to minimize "cost" as indicated by independent generators' supply bids. No equilibrium exists in this situation, and the operator lacks information sufficient to minimize actual cost. As a remedy, we propose a simple efficient tax mechanism. With the tax, Nash equilibrium bids still diverge from marginal cost but nonetheless provide sufficient information to minimize actual cost, regardless of the tax rate or number of generators. The second paper examines a price mechanism with one price assigned for each level of bundled real and reactive power. Equilibrium allocation under this pricing approach raises system efficiency via better allocation of the reactive power reserves, neglected in the traditional pricing approach. Pricing reactive power should be considered in the bundle with real power since its cost is highly dependent on real power output. The efficiency of pricing approach is shown in the general case, and tested on the 30-bus IEEE network with piecewise linear cost functions of the generators. Finally the third paper addresses the problem of optimal investment in generation based on mean-variance portfolio analysis. It is assumed the investor can freely create a portfolio of shares in generation located on buses of the electrical network. Investors are risk averse, and seek to minimize the variance of the weighted average Locational Marginal Price (LMP) in their portfolio, and to maximize its expected value. I conduct simulations using a standard IEEE 68-bus network that resembles the New York - New England system and calculate LMPs in accordance with the PJM methodology for a fully optimal AC power flow solution. Results indicate that the network topology is a crucial determinant of the investment decision as line congestion makes it difficult to deliver power to certain nodes at system peak load. Determining those nodes is an important task for an investor in generation as well as the transmission system operator.

  14. Comparative assessment of LANDSAT-D MSS and TM data quality for mapping applications in the Southeast

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Rectifications of multispectral scanner and thematic mapper data sets for full and subscene areas, analyses of planimetric errors, assessments of the number and distribution of ground control points required to minimize errors, and factors contributing to error residual are examined. Other investigations include the generation of three dimensional terrain models and the effects of spatial resolution on digital classification accuracies.

  15. Comparison of Cornea Module and DermaInspect for noninvasive imaging of ocular surface pathologies

    NASA Astrophysics Data System (ADS)

    Steven, Philipp; Müller, Maya; Koop, Norbert; Rose, Christian; Hüttmann, Gereon

    2009-11-01

    Minimally invasive imaging of ocular surface pathologies aims at securing clinical diagnosis without actual tissue probing. For this matter, confocal microscopy (Cornea Module) is in daily use in ophthalmic practice. Multiphoton microscopy is a new optical technique that enables high-resolution imaging and functional analysis of living tissues based on tissue autofluorescence. This study was set up to compare the potential of a multiphoton microscope (DermaInspect) to the Cornea Module. Ocular surface pathologies such as pterygia, papillomae, and nevi were investigated in vivo using the Cornea Module and imaged immediately after excision by DermaInspect. Two excitation wavelengths, fluorescence lifetime imaging and second-harmonic generation (SHG), were used to discriminate different tissue structures. Images were compared with the histopathological assessment of the samples. At wavelengths of 730 nm, multiphoton microscopy exclusively revealed cellular structures. Collagen fibrils were specifically demonstrated by second-harmonic generation. Measurements of fluorescent lifetimes enabled the highly specific detection of goblet cells, erythrocytes, and nevus-cell clusters. At the settings used, DermaInspect reaches higher resolutions than the Cornea Module and obtains additional structural information. The parallel detection of multiphoton excited autofluorescence and confocal imaging could expand the possibilities of minimally invasive investigation of the ocular surface toward functional analysis at higher resolutions.

  16. Taking the First Steps towards a Standard for Reporting on Phylogenies: Minimal Information about a Phylogenetic Analysis (MIAPA)

    PubMed Central

    LEEBENS-MACK, JIM; VISION, TODD; BRENNER, ERIC; BOWERS, JOHN E.; CANNON, STEVEN; CLEMENT, MARK J.; CUNNINGHAM, CLIFFORD W.; dePAMPHILIS, CLAUDE; deSALLE, ROB; DOYLE, JEFF J.; EISEN, JONATHAN A.; GU, XUN; HARSHMAN, JOHN; JANSEN, ROBERT K.; KELLOGG, ELIZABETH A.; KOONIN, EUGENE V.; MISHLER, BRENT D.; PHILIPPE, HERVÉ; PIRES, J. CHRIS; QIU, YIN-LONG; RHEE, SEUNG Y.; SJÖLANDER, KIMMEN; SOLTIS, DOUGLAS E.; SOLTIS, PAMELA S.; STEVENSON, DENNIS W.; WALL, KERR; WARNOW, TANDY; ZMASEK, CHRISTIAN

    2011-01-01

    In the eight years since phylogenomics was introduced as the intersection of genomics and phylogenetics, the field has provided fundamental insights into gene function, genome history and organismal relationships. The utility of phylogenomics is growing with the increase in the number and diversity of taxa for which whole genome and large transcriptome sequence sets are being generated. We assert that the synergy between genomic and phylogenetic perspectives in comparative biology would be enhanced by the development and refinement of minimal reporting standards for phylogenetic analyses. Encouraged by the development of the Minimum Information About a Microarray Experiment (MIAME) standard, we propose a similar roadmap for the development of a Minimal Information About a Phylogenetic Analysis (MIAPA) standard. Key in the successful development and implementation of such a standard will be broad participation by developers of phylogenetic analysis software, phylogenetic database developers, practitioners of phylogenomics, and journal editors. PMID:16901231

  17. Hierarchical planning for a surface mounting machine placement.

    PubMed

    Zeng, You-jiao; Ma, Deng-ze; Jin, Ye; Yan, Jun-qi

    2004-11-01

    For a surface mounting machine (SMM) in a printed circuit board (PCB) assembly line, there are four problems: CAD data conversion, nozzle selection, feeder assignment and placement sequence determination. A hierarchical planning approach to these problems, aimed at maximizing the throughput rate of an SMM, is presented here. To minimize set-up time, a CAD data conversion system was first applied that could automatically generate the data for machine placement from CAD design data files. Then an effective nozzle selection approach was implemented to minimize the time of nozzle changing. And then, to minimize picking time, an algorithm for feeder assignment was used to enable picking multiple components simultaneously as much as possible. Finally, in order to shorten pick-and-place time, a heuristic algorithm was used to determine the optimal component placement sequence according to the decided feeder positions. Experiments were conducted on a four-head SMM. The experimental results were used to analyse the assembly line performance.

  18. Automated crystallographic system for high-throughput protein structure determination.

    PubMed

    Brunzelle, Joseph S; Shafaee, Padram; Yang, Xiaojing; Weigand, Steve; Ren, Zhong; Anderson, Wayne F

    2003-07-01

    High-throughput structural genomic efforts require software that is highly automated, distributive and requires minimal user intervention to determine protein structures. Preliminary experiments were set up to test whether automated scripts could utilize a minimum set of input parameters and produce a set of initial protein coordinates. From this starting point, a highly distributive system was developed that could determine macromolecular structures at a high throughput rate, warehouse and harvest the associated data. The system uses a web interface to obtain input data and display results. It utilizes a relational database to store the initial data needed to start the structure-determination process as well as generated data. A distributive program interface administers the crystallographic programs which determine protein structures. Using a test set of 19 protein targets, 79% were determined automatically.

  19. Measurement of tracer gas distributions using an open-path FTIR system coupled with computed tomography

    NASA Astrophysics Data System (ADS)

    Drescher, Anushka C.; Yost, Michael G.; Park, Doo Y.; Levine, Steven P.; Gadgil, Ashok J.; Fischer, Marc L.; Nazaroff, William W.

    1995-05-01

    Optical remote sensing and iterative computed tomography (CT) can be combined to measure the spatial distribution of gaseous pollutant concentrations in a plane. We have conducted chamber experiments to test this combination of techniques using an Open Path Fourier Transform Infrared Spectrometer (OP-FTIR) and a standard algebraic reconstruction technique (ART). ART was found to converge to solutions that showed excellent agreement with the ray integral concentrations measured by the FTIR but were inconsistent with simultaneously gathered point sample concentration measurements. A new CT method was developed based on (a) the superposition of bivariate Gaussians to model the concentration distribution and (b) a simulated annealing minimization routine to find the parameters of the Gaussians that resulted in the best fit to the ray integral concentration data. This new method, named smooth basis function minimization (SBFM) generated reconstructions that agreed well, both qualitatively and quantitatively, with the concentration profiles generated from point sampling. We present one set of illustrative experimental data to compare the performance of ART and SBFM.
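
    A minimal sketch of the SBFM idea is given below: the concentration field is modeled with a Gaussian basis function (a single isotropic Gaussian here, for brevity), modeled ray integrals are computed numerically along each beam path, and the misfit to the measured ray integrals is minimized with a simulated-annealing-style optimizer (scipy's dual_annealing is used as a stand-in for the paper's routine). The beam geometry and measurement values are invented for illustration.

```python
# Sketch of smooth basis function minimization (SBFM): fit the parameters of a
# Gaussian concentration model so that modeled ray integrals match measured
# ones.  One isotropic Gaussian, three horizontal beams, and invented
# measurements keep the example short.
import numpy as np
from scipy.optimize import dual_annealing

n_samples, x_max = 200, 10.0
rays = [np.linspace((0.0, y), (x_max, y), n_samples) for y in (2.0, 5.0, 8.0)]
measured = np.array([1.2, 3.4, 0.7])           # measured ray-integral concentrations (toy)

def concentration(points, params):
    a, x0, y0, s = params                      # amplitude, centre, width of the Gaussian
    d2 = (points[:, 0] - x0) ** 2 + (points[:, 1] - y0) ** 2
    return a * np.exp(-d2 / (2.0 * s ** 2))

def ray_integral(values, dx):
    return float(np.sum((values[:-1] + values[1:]) * dx / 2.0))   # trapezoid rule

def misfit(params):
    dx = x_max / (n_samples - 1)
    model = [ray_integral(concentration(r, params), dx) for r in rays]
    return float(np.sum((np.array(model) - measured) ** 2))

bounds = [(0.0, 10.0), (0.0, 10.0), (0.0, 10.0), (0.5, 5.0)]      # a, x0, y0, sigma
result = dual_annealing(misfit, bounds, seed=1)
print("fitted Gaussian (a, x0, y0, sigma):", np.round(result.x, 2))
```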

  20. Learning without labeling: domain adaptation for ultrasound transducer localization.

    PubMed

    Heimann, Tobias; Mountney, Peter; John, Matthias; Ionasec, Razvan

    2013-01-01

    The fusion of image data from trans-esophageal echography (TEE) and X-ray fluoroscopy is attracting increasing interest in minimally-invasive treatment of structural heart disease. In order to calculate the needed transform between both imaging systems, we employ a discriminative learning based approach to localize the TEE transducer in X-ray images. Instead of time-consuming manual labeling, we generate the required training data automatically from a single volumetric image of the transducer. In order to adapt this system to real X-ray data, we use unlabeled fluoroscopy images to estimate differences in feature space density and correct covariate shift by instance weighting. An evaluation on more than 1900 images reveals that our approach reduces detection failures by 95% compared to cross validation on the test set and improves the localization error from 1.5 to 0.8 mm. Due to the automatic generation of training data, the proposed system is highly flexible and can be adapted to any medical device with minimal efforts.

  1. A New Distributed Optimization for Community Microgrids Scheduling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Starke, Michael R; Tomsovic, Kevin

    This paper proposes a distributed optimization model for community microgrids considering the building thermal dynamics and customer comfort preference. The microgrid central controller (MCC) minimizes the total cost of operating the community microgrid, including fuel cost, purchasing cost, battery degradation cost and voluntary load shedding cost based on the customers' consumption, while the building energy management systems (BEMS) minimize their electricity bills as well as the cost associated with customer discomfort due to room temperature deviation from the set point. The BEMSs and the MCC exchange information on energy consumption and prices. When the optimization converges, the distributed generation scheduling, energy storage charging/discharging and customers' consumption as well as the energy prices are determined. In particular, we integrate the detailed thermal dynamic characteristics of buildings into the proposed model. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model.

  2. Optimizing Monitoring Designs under Alternative Objectives

    DOE PAGES

    Gastelum, Jason A.; Porter, Ellen A.; ...

    2014-12-31

    This paper describes an approach to identify monitoring designs that optimize detection of CO2 leakage from a carbon capture and sequestration (CCS) reservoir and compares the results generated under two alternative objective functions. The first objective function minimizes the expected time to first detection of CO2 leakage; the second, more conservative objective function minimizes the maximum time to leakage detection across the set of realizations. The approach applies a simulated annealing algorithm that searches the solution space by iteratively mutating the incumbent monitoring design. The approach takes into account uncertainty by evaluating the performance of potential monitoring designs across a set of simulated leakage realizations. The approach relies on a flexible two-tiered signature to infer that CO2 leakage has occurred. This research is part of the National Risk Assessment Partnership, a U.S. Department of Energy (DOE) project tasked with conducting risk and uncertainty analysis in the areas of reservoir performance, natural leakage pathways, wellbore integrity, groundwater protection, monitoring, and systems level modeling.
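
    The search strategy described above can be sketched as a simulated-annealing loop that mutates which candidate locations are instrumented and scores each design by the mean (or worst-case) time to first detection across leakage realizations. In the Python sketch below the detection-time table is synthetic random data; in the actual study it would come from reservoir leakage simulations.

```python
# Sketch of a simulated-annealing search over monitoring designs.  The matrix
# detect_time[i, r] (synthetic here) holds the time at which a sensor at
# candidate location i would first detect leakage realization r; the objective
# is the expected (or worst-case) time to first detection over realizations.
import numpy as np

rng = np.random.default_rng(3)
n_locations, n_realizations, n_sensors = 30, 50, 5
detect_time = rng.exponential(scale=10.0, size=(n_locations, n_realizations))

def objective(design, worst_case=False):
    first_detection = detect_time[sorted(design)].min(axis=0)   # per realization
    return first_detection.max() if worst_case else first_detection.mean()

design = set(int(i) for i in rng.choice(n_locations, n_sensors, replace=False))
temperature = 5.0
for _ in range(2000):
    trial = set(design)
    trial.remove(int(rng.choice(sorted(trial))))                # mutate: swap one location
    trial.add(int(rng.choice([i for i in range(n_locations) if i not in trial])))
    delta = objective(trial) - objective(design)
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        design = trial
    temperature *= 0.999                                        # slow cooling schedule
print("selected locations:", sorted(design))
print("expected time to first detection:", round(objective(design), 2))
```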

  3. KMCLib 1.1: Extended random number support and technical updates to the KMCLib general framework for kinetic Monte-Carlo simulations

    NASA Astrophysics Data System (ADS)

    Leetmaa, Mikael; Skorodumova, Natalia V.

    2015-11-01

    We here present a revised version, v1.1, of the KMCLib general framework for kinetic Monte-Carlo (KMC) simulations. The generation of random numbers in KMCLib now relies on the C++11 standard library implementation, and support has been added for the user to choose from a set of C++11 implemented random number generators. The Mersenne-twister, the 24 and 48 bit RANLUX and a 'minimal-standard' PRNG are supported. We have also included the possibility to use true random numbers via the C++11 std::random_device generator. This release also includes technical updates to support the use of an extended range of operating systems and compilers.
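
    For readers unfamiliar with where the random numbers enter a KMC simulation, the Python sketch below (KMCLib itself is a Python/C++ framework; this is not its API) shows the standard rejection-free step: one uniform deviate selects the process to execute and a second advances the clock by an exponentially distributed waiting time. The rate values are arbitrary.

```python
# Sketch of a rejection-free kinetic Monte-Carlo step with a pluggable random
# number generator, mirroring the idea of selectable generators in KMCLib.
import math
import random

def kmc_step(rates, rng=random.Random(42)):
    total = sum(rates)
    r1, r2 = rng.random(), rng.random()
    # pick the process whose cumulative rate first exceeds r1 * total
    cumulative, chosen = 0.0, len(rates) - 1
    for i, rate in enumerate(rates):
        cumulative += rate
        if cumulative >= r1 * total:
            chosen = i
            break
    dt = -math.log(r2) / total                 # exponentially distributed waiting time
    return chosen, dt

process, dt = kmc_step([0.5, 1.5, 3.0])        # hypothetical process rates
print(f"execute process {process}, advance time by {dt:.3f}")
```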

  4. Suitability of Unidata Metapps for Incorporation in Platform-Independent User-Customized Aviation Weather Products Generation Software

    DTIC Science & Technology

    2002-03-08

    Figure 7. Standard, simplified view of the Facade software design pattern. Adapted from an original diagram by Shalloway and Trott (2002). ... set of interfaces. The motivation behind using this design pattern is that it helps reduce complexity and minimizes the ... libraries and in turn built more complex components. Although brave and innovative, these forays into the cutting edge of geophysical ...

  5. Concentric Tube Robot Design and Optimization Based on Task and Anatomical Constraints

    PubMed Central

    Bergeles, Christos; Gosline, Andrew H.; Vasilyev, Nikolay V.; Codd, Patrick J.; del Nido, Pedro J.; Dupont, Pierre E.

    2015-01-01

    Concentric tube robots are catheter-sized continuum robots that are well suited for minimally invasive surgery inside confined body cavities. These robots are constructed from sets of pre-curved superelastic tubes and are capable of assuming complex 3D curves. The family of 3D curves that the robot can assume depends on the number, curvatures, lengths and stiffnesses of the tubes in its tube set. The robot design problem involves solving for a tube set that will produce the family of curves necessary to perform a surgical procedure. At a minimum, these curves must enable the robot to smoothly extend into the body and to manipulate tools over the desired surgical workspace while respecting anatomical constraints. This paper introduces an optimization framework that utilizes procedure- or patient-specific image-based anatomical models along with surgical workspace requirements to generate robot tube set designs. The algorithm searches for designs that minimize robot length and curvature and for which all paths required for the procedure consist of stable robot configurations. Two mechanics-based kinematic models are used. Initial designs are sought using a model assuming torsional rigidity. These designs are then refined using a torsionally-compliant model. The approach is illustrated with clinically relevant examples from neurosurgery and intracardiac surgery. PMID:26380575

  6. Cerebella segmentation on MR images of pediatric patients with medulloblastoma

    NASA Astrophysics Data System (ADS)

    Shan, Zu Y.; Ji, Qing; Glass, John; Gajjar, Amar; Reddick, Wilburn E.

    2005-04-01

    In this study, an automated method has been developed to identify the cerebellum from T1-weighted MR brain images of patients with medulloblastoma. A new objective function that is similar to Gibbs free energy in classic physics was defined; and the brain structure delineation was viewed as a process of minimizing Gibbs free energy. We used a rigid-body registration and an active contour (snake) method to minimize the Gibbs free energy in this study. The method was applied to 20 patient data sets to generate cerebellum images and volumetric results. The generated cerebellum images were compared with two manually drawn results. Strong correlations were found between the automatically and manually generated volumetric results, the correlation coefficients with each of manual results were 0.971 and 0.974, respectively. The average Jaccard similarities with each of two manual results were 0.89 and 0.88, respectively. The average Kappa indexes with each of two manual results were 0.94 and 0.93, respectively. These results showed this method was both robust and accurate for cerebellum segmentation. The method may be applied to various research and clinical investigation in which cerebellum segmentation and quantitative MR measurement of cerebellum are needed.
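
    The overlap statistics quoted above can be reproduced for any pair of binary segmentations with a few lines of code. The sketch below computes the Jaccard similarity and the Dice coefficient (a closely related overlap measure often reported alongside or in place of a Kappa-style index) on toy masks; it is illustrative only and uses no patient data.

```python
# Overlap measures between an automatic and a manual binary segmentation.
# The masks below are toy examples, not cerebellum data.
import numpy as np

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

auto = np.zeros((10, 10), dtype=bool); auto[2:8, 2:8] = True
manual = np.zeros((10, 10), dtype=bool); manual[3:9, 2:8] = True
print(f"Jaccard = {jaccard(auto, manual):.2f}, Dice = {dice(auto, manual):.2f}")
```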

  7. Modern control techniques in active flutter suppression using a control moment gyro

    NASA Technical Reports Server (NTRS)

    Buchek, P. M.

    1974-01-01

    Development of organized synthesis techniques, using concepts of modern control theory was studied for the design of active flutter suppression systems for two and three-dimensional lifting surfaces, utilizing a control moment gyro (CMG) to generate the required control torques. Incompressible flow theory is assumed, with the unsteady aerodynamic forces and moments for arbitrary airfoil motion obtained by using the convolution integral based on Wagner's indicial lift function. Linear optimal control theory is applied to find particular optimal sets of gain values which minimize a quadratic performance function. The closed loop system's response to impulsive gust disturbances and the resulting control power requirements are investigated, and the system eigenvalues necessary to minimize the maximum value of control power are determined.

  8. A simplified Sanger sequencing method for full genome sequencing of multiple subtypes of human influenza A viruses.

    PubMed

    Deng, Yi-Mo; Spirason, Natalie; Iannello, Pina; Jelley, Lauren; Lau, Hilda; Barr, Ian G

    2015-07-01

    Full genome sequencing of influenza A viruses (IAV), including those that arise from annual influenza epidemics, is undertaken to determine if reassorting has occurred or if other pathogenic traits are present. Traditionally IAV sequencing has been biased toward the major surface glycoproteins haemagglutinin and neuraminidase, while the internal genes are often ignored. Despite the development of next generation sequencing (NGS), many laboratories are still reliant on conventional Sanger sequencing to sequence IAV. To develop a minimal and robust set of primers for Sanger sequencing of the full genome of IAV currently circulating in humans. A set of 13 primer pairs was designed that enabled amplification of the six internal genes of multiple human IAV subtypes including the recent avian influenza A(H7N9) virus from China. Specific primers were designed to amplify the HA and NA genes of each IAV subtype of interest. Each of the primers also incorporated a binding site at its 5'-end for either a forward or reverse M13 primer, such that only two M13 primers were required for all subsequent sequencing reactions. This minimal set of primers was suitable for sequencing the six internal genes of all currently circulating human seasonal influenza A subtypes as well as the avian A(H7N9) viruses that have infected humans in China. This streamlined Sanger sequencing protocol could be used to generate full genome sequence data more rapidly and easily than existing influenza genome sequencing protocols. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  9. A broadcast-based key agreement scheme using set reconciliation for wireless body area networks.

    PubMed

    Ali, Aftab; Khan, Farrukh Aslam

    2014-05-01

    Information and communication technologies have thrived over the last few years. Healthcare systems have also benefited from this progression. A wireless body area network (WBAN) consists of small, low-power sensors used to monitor human physiological values remotely, which enables physicians to remotely monitor the health of patients. Communication security in WBANs is essential because it involves human physiological data. Key agreement and authentication are the primary issues in the security of WBANs. To agree upon a common key, the nodes exchange information with each other using wireless communication. This information exchange process must be secure enough or the information exchange should be minimized to a certain level so that if information leak occurs, it does not affect the overall system. Most of the existing solutions for this problem exchange too much information for the sake of key agreement; getting this information is sufficient for an attacker to reproduce the key. Set reconciliation is a technique used to reconcile two similar sets held by two different hosts with minimal communication complexity. This paper presents a broadcast-based key agreement scheme using set reconciliation for secure communication in WBANs. The proposed scheme allows the neighboring nodes to agree upon a common key with the personal server (PS), generated from the electrocardiogram (EKG) feature set of the host body. Minimal information is exchanged in a broadcast manner, and even if every node is missing a different subset, by reconciling these feature sets, the whole network will still agree upon a single common key. Because of the limited information exchange, if an attacker gets the information in any way, he/she will not be able to reproduce the key. The proposed scheme mitigates replay, selective forwarding, and denial of service attacks using a challenge-response authentication mechanism. The simulation results show that the proposed scheme has a great deal of adoptability in terms of security, communication overhead, and running time complexity, as compared to the existing EKG-based key agreement scheme.
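
    The sketch below is a toy illustration of the reconciliation idea (not the paper's protocol): two nodes hold overlapping EKG feature sets, only the differences are exchanged, and both sides derive a key from the agreed union. Real set reconciliation keeps the exchanged information minimal with characteristic polynomials or similar structures; here the symmetric difference is computed directly for brevity.

```python
# Toy set reconciliation followed by key derivation (illustrative only).
import hashlib

node_a = {101, 205, 333, 478, 512}          # hypothetical EKG feature values
node_b = {101, 205, 333, 478, 640}          # node B misses 512 and has an extra 640

# In a real protocol only a compact summary of the differences is exchanged;
# here the symmetric difference is computed directly for brevity.
missing_at_b = node_a - node_b
missing_at_a = node_b - node_a
reconciled_a = node_a | missing_at_a
reconciled_b = node_b | missing_at_b
assert reconciled_a == reconciled_b

key = hashlib.sha256(",".join(map(str, sorted(reconciled_a))).encode()).hexdigest()
print("agreed key:", key[:16], "...")
```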

  10. A strategy to find minimal energy nanocluster structures.

    PubMed

    Rogan, José; Varas, Alejandro; Valdivia, Juan Alejandro; Kiwi, Miguel

    2013-11-05

    An unbiased strategy to search for the global and local minimal energy structures of free standing nanoclusters is presented. Our objectives are twofold: to find a diverse set of low lying local minima, as well as the global minimum. To do so, we use massively the fast inertial relaxation engine algorithm as an efficient local minimizer. This procedure turns out to be quite efficient to reach the global minimum, and also most of the local minima. We test the method with the Lennard-Jones (LJ) potential, for which an abundant literature does exist, and obtain novel results, which include a new local minimum for LJ13 , 10 new local minima for LJ14 , and thousands of new local minima for 15≤N≤65. Insights on how to choose the initial configurations, analyzing the effectiveness of the method in reaching low-energy structures, including the global minimum, are developed as a function of the number of atoms of the cluster. Also, a novel characterization of the potential energy surface, analyzing properties of the local minima basins, is provided. The procedure constitutes a promising tool to generate a diverse set of cluster conformations, both two- and three-dimensional, that can be used as an input for refinement by means of ab initio methods. Copyright © 2013 Wiley Periodicals, Inc.
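
    The essence of the strategy, many random starting configurations each relaxed by a fast local minimizer, can be sketched in a few lines. Below, scipy's L-BFGS-B stands in for the FIRE minimizer used by the authors, the cluster is a small LJ7 example, and the number of random starts is deliberately tiny; only the overall workflow, not the scale, reflects the paper.

```python
# Many random starts, each relaxed with a local minimizer; keep the energies
# of the minima found.  L-BFGS-B is a stand-in for the FIRE algorithm.
import numpy as np
from scipy.optimize import minimize

def lj_energy(flat_coords: np.ndarray) -> float:
    """Total Lennard-Jones energy (epsilon = sigma = 1) of an N-atom cluster."""
    x = flat_coords.reshape(-1, 3)
    e = 0.0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            r2 = np.sum((x[i] - x[j]) ** 2)
            inv6 = 1.0 / r2 ** 3
            e += 4.0 * (inv6 ** 2 - inv6)
    return e

rng = np.random.default_rng(7)
n_atoms, minima = 7, []
for _ in range(20):                               # 20 random starts (toy-sized search)
    x0 = rng.uniform(-1.5, 1.5, size=3 * n_atoms)
    res = minimize(lj_energy, x0, method="L-BFGS-B")
    minima.append(res.fun)
print("lowest energy found:", round(min(minima), 4))   # LJ7 global minimum is about -16.505
```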

  11. Secure Distributed Time for Secure Distributed Protocols

    DTIC Science & Technology

    1994-09-01

    ... minimal generating set of X = ⋃_{A∈Y} V(A). Implications: suppose (M, M′) is an acyclic (...-tent and independent) parallel pair. A timeslice containing ... compromise the system if the attacker is willing to pay tremendous amounts of money. (For a detailed analysis of the cost, see [Wein91].) What do we do ... For example, suppose auditor Alice is asking for a snapshot to verify that the electronic currency in circulation sums correctly. If counterfeiter Bad ...

  12. Does rational selection of training and test sets improve the outcome of QSAR modeling?

    PubMed

    Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander

    2012-10-22

    Prior to using a quantitative structure activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and by using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models are comparable.
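
    Of the rational splitting methods named above, Kennard-Stone is the most straightforward to sketch: starting from the two most distant compounds in descriptor space, it repeatedly adds the compound farthest from those already selected. The Python sketch below uses random stand-in descriptors rather than real QSAR data.

```python
# Kennard-Stone rational split: pick points that span descriptor space evenly.
import numpy as np

def kennard_stone(X: np.ndarray, n_train: int) -> list[int]:
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    selected = [int(i), int(j)]                                     # the two farthest points
    while len(selected) < n_train:
        remaining = [k for k in range(len(X)) if k not in selected]
        # for each remaining point, its distance to the nearest selected point
        nearest = dist[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining[int(np.argmax(nearest))])
    return selected

X = np.random.default_rng(0).normal(size=(50, 5))   # stand-in descriptors
train_idx = kennard_stone(X, n_train=40)
test_idx = [k for k in range(len(X)) if k not in train_idx]
print(len(train_idx), "training /", len(test_idx), "test compounds")
```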

  13. WE-AB-209-07: Explicit and Convex Optimization of Plan Quality Metrics in Intensity-Modulated Radiation Therapy Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engberg, L; KTH Royal Institute of Technology, Stockholm; Eriksson, K

    Purpose: To formulate objective functions of a multicriteria fluence map optimization model that correlate well with plan quality metrics, and to solve this multicriteria model by convex approximation. Methods: In this study, objectives of a multicriteria model are formulated to explicitly either minimize or maximize a dose-at-volume measure. Given the widespread agreement that dose-at-volume levels play important roles in plan quality assessment, these objectives correlate well with plan quality metrics. This is in contrast to the conventional objectives, which are to maximize clinical goal achievement by relating to deviations from given dose-at-volume thresholds: while balancing the new objectives means explicitly balancing dose-at-volume levels, balancing the conventional objectives effectively means balancing deviations. Constituted by the inherently non-convex dose-at-volume measure, the new objectives are approximated by the convex mean-tail-dose measure (CVaR measure), yielding a convex approximation of the multicriteria model. Results: Advantages of using the convex approximation are investigated through juxtaposition with the conventional objectives in a computational study of two patient cases. Clinical goals of each case respectively point out three ROI dose-at-volume measures to be considered for plan quality assessment. This is translated in the convex approximation into minimizing three mean-tail-dose measures. Evaluations of the three ROI dose-at-volume measures on Pareto optimal plans are used to represent plan quality of the Pareto sets. Besides providing increased accuracy in terms of feasibility of solutions, the convex approximation generates Pareto sets with overall improved plan quality. In one case, the Pareto set generated by the convex approximation entirely dominates that generated with the conventional objectives. Conclusion: The initial computational study indicates that the convex approximation outperforms the conventional objectives in aspects of accuracy and plan quality.

  14. Parametric analysis of parameters for electrical-load forecasting using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Gerber, William J.; Gonzalez, Avelino J.; Georgiopoulos, Michael

    1997-04-01

    Accurate total system electrical load forecasting is a necessary part of resource management for power generation companies. The better the hourly load forecast, the more closely the power generation assets of the company can be configured to minimize the cost. Automating this process is a profitable goal and neural networks should provide an excellent means of doing the automation. However, prior to developing such a system, the optimal set of input parameters must be determined. The approach of this research was to determine what those inputs should be through a parametric study of potentially good inputs. Input parameters tested were ambient temperature, total electrical load, the day of the week, humidity, dew point temperature, daylight savings time, length of daylight, season, forecast light index and forecast wind velocity. For testing, a limited number of temperatures and total electrical loads were used as a basic reference input parameter set. Most parameters showed some forecasting improvement when added individually to the basic parameter set. Significantly, major improvements were exhibited with the day of the week, dew point temperatures, additional temperatures and loads, forecast light index and forecast wind velocity.

  15. BioPCD - A Language for GUI Development Requiring a Minimal Skill Set.

    PubMed

    Alvare, Graham Gm; Roche-Lima, Abiel; Fristensky, Brian

    2012-11-01

    BioPCD is a new language whose purpose is to simplify the creation of Graphical User Interfaces (GUIs) by biologists with minimal programming skills. The first step in developing BioPCD was to create a minimal superset of the language referred to as PCD (Pythonesque Command Description). PCD defines the core of terminals and high-level nonterminals required to describe data of almost any type. BioPCD adds to PCD the constructs necessary to describe GUI components and the syntax for executing system commands. BioPCD is implemented using JavaCC to convert the grammar into code. BioPCD is designed to be terse and readable and simple enough to be learned by copying and modifying existing BioPCD files. We demonstrate that BioPCD can easily be used to generate GUIs for existing command line programs. Although BioPCD was designed to make it easier to run bioinformatics programs, it could be used in any domain in which many useful command line programs exist that do not have GUI interfaces.

  16. Tire Force Estimation using a Proportional Integral Observer

    NASA Astrophysics Data System (ADS)

    Farhat, Ahmad; Koenig, Damien; Hernandez-Alcantara, Diana; Morales-Menendez, Ruben

    2017-01-01

    This paper addresses a method for detecting critical stability situations in the lateral vehicle dynamics by estimating the non-linear part of the tire forces. These forces indicate the road holding performance of the vehicle. The estimation method is based on a robust fault detection and estimation approach which minimizes the sensitivity of the residual to disturbances and uncertainties. It consists of the design of a Proportional Integral Observer (PIO), minimizing the well-known H∞ norm for worst-case uncertainty and disturbance attenuation, combined with a transient response specification. This multi-objective problem is formulated as a Linear Matrix Inequality (LMI) feasibility problem in which a cost function subject to LMI constraints is minimized. This approach is employed to generate a set of switched robust observers for uncertain switched systems, where the convergence of the observer is ensured using a Multiple Lyapunov Function (MLF). Since the forces to be estimated cannot be physically measured, a simulation scenario with CarSim™ is presented to illustrate the developed method.

  17. Optimization principles and the figure of merit for triboelectric generators.

    PubMed

    Peng, Jun; Kang, Stephen Dongmin; Snyder, G Jeffrey

    2017-12-01

    Energy harvesting with triboelectric nanogenerators is a burgeoning field, with a growing portfolio of creative application schemes attracting much interest. Although power generation capabilities and its optimization are one of the most important subjects, a satisfactory elemental model that illustrates the basic principles and sets the optimization guideline remains elusive. We use a simple model to clarify how the energy generation mechanism is electrostatic induction but with a time-varying character that makes the optimal matching for power generation more restrictive. By combining multiple parameters into dimensionless variables, we pinpoint the optimum condition with only two independent parameters, leading to predictions of the maximum limit of power density, which allows us to derive the triboelectric material and device figure of merit. We reveal the importance of optimizing device capacitance, not only load resistance, and minimizing the impact of parasitic capacitance. Optimized capacitances can lead to an overall increase in power density of more than 10 times.

  18. Toward the International Classification of Functioning, Disability and Health (ICF) Rehabilitation Set: A Minimal Generic Set of Domains for Rehabilitation as a Health Strategy.

    PubMed

    Prodinger, Birgit; Cieza, Alarcos; Oberhauser, Cornelia; Bickenbach, Jerome; Üstün, Tevfik Bedirhan; Chatterji, Somnath; Stucki, Gerold

    2016-06-01

    To develop a comprehensive set of the International Classification of Functioning, Disability and Health (ICF) categories as a minimal standard for reporting and assessing functioning and disability in clinical populations along the continuum of care. The specific aims were to specify the domains of functioning recommended for an ICF Rehabilitation Set and to identify a minimal set of environmental factors (EFs) to be used alongside the ICF Rehabilitation Set when describing disability across individuals and populations with various health conditions. Secondary analysis of existing data sets using regression methods (Random Forests and Group Lasso regression) and expert consultations. Along the continuum of care, including acute, early postacute, and long-term and community rehabilitation settings. Persons (N=9863) with various health conditions participated in primary studies. The number of respondents for whom the dependent variable data were available and used in this analysis was 9264. Not applicable. For regression analyses, self-reported general health was used as a dependent variable. The ICF categories from the functioning component and the EF component were used as independent variables for the development of the ICF Rehabilitation Set and the minimal set of EFs, respectively. Thirty ICF categories to be complemented with 12 EFs were identified as relevant to the identified ICF sets. The ICF Rehabilitation Set consists of 9 ICF categories from the component body functions and 21 from the component activities and participation. The minimal set of EFs contains 12 categories spanning all chapters of the EF component of the ICF. The proposed sets serve as minimal generic sets of aspects of functioning in clinical populations for reporting data within and across health conditions, time, clinical settings including rehabilitation, and countries. These sets present a reference framework for harmonizing existing information on disability across general and clinical populations. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  19. Determination of plutonium in nitric acid solutions using energy dispersive L X-ray fluorescence with a low power X-ray generator

    NASA Astrophysics Data System (ADS)

    Py, J.; Groetz, J.-E.; Hubinois, J.-C.; Cardona, D.

    2015-04-01

    This work presents the development of an in-line energy dispersive L X-ray fluorescence spectrometer set-up, with a low power X-ray generator and a secondary target, for the determination of plutonium concentration in nitric acid solutions. The intensity of the L X-rays from the internal conversion and gamma rays emitted by the daughter nuclei from plutonium is minimized and corrected, in order to eliminate the interferences with the L X-ray fluorescence spectrum. The matrix effects are then corrected by the Compton peak method. A calibration plot for plutonium solutions within the range 0.1-20 g L-1 is given.

  20. Application of computational fluid dynamics to the study of vortex flow control for the management of inlet distortion

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.; Gibb, James

    1992-01-01

    A study is presented to demonstrate that the Reduced Navier-Stokes code RNS3D can be employed effectively to develop a vortex generator installation that minimizes engine face circumferential distortion by controlling the development of secondary flow. The necessary computing times are small enough to show that similar studies are feasible within an analysis-design environment with all its constraints of costs and time. This study establishes the nature of the performance enhancements that can be realized with vortex flow control, and indicates a set of aerodynamic properties that can be utilized to arrive at a successful vortex generator installation design.

  1. Space-based surface wind vectors to aid understanding of air-sea interactions

    NASA Technical Reports Server (NTRS)

    Atlas, R.; Bloom, S. C.; Hoffman, R. N.; Ardizzone, J. V.; Brin, G.

    1991-01-01

    A novel and unique ocean-surface wind data-set has been derived by combining the Defense Meteorological Satellite Program Special Sensor Microwave Imager data with additional conventional data. The variational analysis used generates a gridded surface wind analysis that minimizes an objective function measuring the misfit of the analysis to the background, the data, and certain a priori constraints. In the present case, the European Center for Medium-Range Weather Forecasts surface-wind analysis is used as the background.
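
    A variational analysis of the kind described above minimizes a quadratic objective that penalizes departures from the background field and from the observations, each weighted by its error covariance. The toy Python example below writes that cost function and solves the resulting normal equations for a two-point 1-D "grid"; the matrices and numbers are illustrative, not those of the actual surface-wind analysis.

```python
# Hedged sketch of a variational (3D-Var-style) objective:
#   J(x) = (x - xb)^T B^-1 (x - xb) + (y - Hx)^T R^-1 (y - Hx)
# with a tiny illustrative background, observation, and covariances.
import numpy as np

def varcost(x, xb, B_inv, y, H, R_inv):
    db = x - xb
    do = y - H @ x
    return float(db @ B_inv @ db + do @ R_inv @ do)

xb = np.array([5.0, 6.0])            # background winds at two grid points (toy)
y = np.array([5.8])                  # one satellite wind observation (toy)
H = np.array([[0.5, 0.5]])           # observation operator: average of the two points
B_inv = np.linalg.inv(np.array([[1.0, 0.3], [0.3, 1.0]]))
R_inv = np.linalg.inv(np.array([[0.25]]))

# analytic minimizer of the quadratic cost (the standard normal equations)
A = B_inv + H.T @ R_inv @ H
x_analysis = np.linalg.solve(A, B_inv @ xb + H.T @ R_inv @ y)
print("analysis winds:", np.round(x_analysis, 2),
      " cost:", round(varcost(x_analysis, xb, B_inv, y, H, R_inv), 3))
```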

  2. Reconstruction of extended Petri nets from time series data and its application to signal transduction and to gene regulatory networks

    PubMed Central

    2011-01-01

    Background Network inference methods reconstruct mathematical models of molecular or genetic networks directly from experimental data sets. We have previously reported a mathematical method which is exclusively data-driven, does not involve any heuristic decisions within the reconstruction process, and delivers all possible alternative minimal networks in terms of simple place/transition Petri nets that are consistent with a given discrete time series data set. Results We fundamentally extended the previously published algorithm to consider catalysis and inhibition of the reactions that occur in the underlying network. The results of the reconstruction algorithm are encoded in the form of an extended Petri net involving control arcs. This allows the consideration of processes involving mass flow and/or regulatory interactions. As a non-trivial test case, the phosphate regulatory network of enterobacteria was reconstructed using in silico-generated time-series data sets on wild-type and in silico mutants. Conclusions The new exact algorithm reconstructs extended Petri nets from time series data sets by finding all alternative minimal networks that are consistent with the data. It suggested alternative molecular mechanisms for certain reactions in the network. The algorithm is useful to combine data from wild-type and mutant cells and may potentially integrate physiological, biochemical, pharmacological, and genetic data in the form of a single model. PMID:21762503

  3. Simplified DFT methods for consistent structures and energies of large systems

    NASA Astrophysics Data System (ADS)

    Caldeweyher, Eike; Gerit Brandenburg, Jan

    2018-05-01

    Kohn–Sham density functional theory (DFT) is routinely used for the fast electronic structure computation of large systems and will most likely continue to be the method of choice for the generation of reliable geometries in the foreseeable future. Here, we present a hierarchy of simplified DFT methods designed for consistent structures and non-covalent interactions of large systems with particular focus on molecular crystals. The covered methods are a minimal basis set Hartree–Fock (HF-3c), a small basis set screened exchange hybrid functional (HSE-3c), and a generalized gradient approximated functional evaluated in a medium-sized basis set (B97-3c), all augmented with semi-classical correction potentials. We give an overview on the methods design, a comprehensive evaluation on established benchmark sets for geometries and lattice energies of molecular crystals, and highlight some realistic applications on large organic crystals with several hundreds of atoms in the primitive unit cell.

  4. Path integrals and large deviations in stochastic hybrid systems.

    PubMed

    Bressloff, Paul C; Newby, Jay M

    2014-04-01

    We construct a path-integral representation of solutions to a stochastic hybrid system, consisting of one or more continuous variables evolving according to a piecewise-deterministic dynamics. The differential equations for the continuous variables are coupled to a set of discrete variables that satisfy a continuous-time Markov process, which means that the differential equations are only valid between jumps in the discrete variables. Examples of stochastic hybrid systems arise in biophysical models of stochastic ion channels, motor-driven intracellular transport, gene networks, and stochastic neural networks. We use the path-integral representation to derive a large deviation action principle for a stochastic hybrid system. Minimizing the associated action functional with respect to the set of all trajectories emanating from a metastable state (assuming that such a minimization scheme exists) then determines the most probable paths of escape. Moreover, evaluating the action functional along a most probable path generates the so-called quasipotential used in the calculation of mean first passage times. We illustrate the theory by considering the optimal paths of escape from a metastable state in a bistable neural network.

  5. Correlation between the norm and the geometry of minimal networks

    NASA Astrophysics Data System (ADS)

    Laut, I. L.

    2017-05-01

    The paper is concerned with the inverse problem of the minimal Steiner network problem in a normed linear space. Namely, given a normed space in which all minimal networks are known for any finite point set, the problem is to describe all the norms on this space for which the minimal networks are the same as for the original norm. We survey the available results and prove that in the plane a rotund differentiable norm determines a distinctive set of minimal Steiner networks. In a two-dimensional space with rotund differentiable norm the coordinates of interior vertices of a nondegenerate minimal parametric network are shown to vary continuously under small deformations of the boundary set, and the turn direction of the network is determined. Bibliography: 15 titles.

  6. Method of grid generation

    DOEpatents

    Barnette, Daniel W.

    2002-01-01

    The present invention provides a method of grid generation that uses the geometry of the problem space and the governing relations to generate a grid. The method can generate a grid with minimized discretization errors, and with minimal user interaction. The method of the present invention comprises assigning grid cell locations so that, when the governing relations are discretized using the grid, at least some of the discretization errors are substantially zero. Conventional grid generation is driven by the problem space geometry; grid generation according to the present invention is driven by problem space geometry and by governing relations. The present invention accordingly can provide two significant benefits: more efficient and accurate modeling since discretization errors are minimized, and reduced cost grid generation since less human interaction is required.

  7. Venus cloud bobber mission: A long term survey of the Venusian surface

    NASA Technical Reports Server (NTRS)

    Wai, James; Derengowski, Cheryl; Lautzenhiser, Russ; Emerson, Matt; Choi, Yongho

    1994-01-01

    We have examined the Venus Balloon concept in order to further develop the ideas and concepts behind it, and to creatively apply them to the design of the major Venus Balloon components. This report presents our models of the vertical path taken by the Venus Balloon and the entry into Venusian atmosphere. It also details our designs of the balloon, gondola, heat exchanger, power generator, and entry module. A vehicle is designed for a ballistic entry into the Venusian atmosphere, and an atmospheric model is created. The model is then used to set conditions. The shape and material of the vehicle are optimized, and the dimensions of the vehicle are then determined. Equipment is chosen and detailed that will be needed to collect and transmit information and control the mission. A gondola is designed that will enable this sensitive electronic equipment to survive in an atmosphere of very high temperature and pressure. This shape and the material of the shell are optimized, and the size is minimized. Insulation and supporting structures are designed to protect the payload equipment and to minimize mass. A method of cooling the gondola at upper altitudes was established. Power needs of the gondola equipment are determined. Power generation options are discussed and two separate thermoelectric generation models are outlined.

  8. Ensuring the Reliable Operation of the Power Grid: State-Based and Distributed Approaches to Scheduling Energy and Contingency Reserves

    NASA Astrophysics Data System (ADS)

    Prada, Jose Fernando

    Keeping a contingency reserve in power systems is necessary to preserve the security of real-time operations. This work studies two different approaches to the optimal allocation of energy and reserves in the day-ahead generation scheduling process. Part I presents a stochastic security-constrained unit commitment model to co-optimize energy and the locational reserves required to respond to a set of uncertain generation contingencies, using a novel state-based formulation. The model is applied in an offer-based electricity market to allocate contingency reserves throughout the power grid, in order to comply with the N-1 security criterion under transmission congestion. The objective is to minimize expected dispatch and reserve costs, together with post contingency corrective redispatch costs, modeling the probability of generation failure and associated post contingency states. The characteristics of the scheduling problem are exploited to formulate a computationally efficient method, consistent with established operational practices. We simulated the distribution of locational contingency reserves on the IEEE RTS96 system and compared the results with the conventional deterministic method. We found that assigning locational spinning reserves can guarantee an N-1 secure dispatch accounting for transmission congestion at a reasonable extra cost. The simulations also showed little value of allocating downward reserves but sizable operating savings from co-optimizing locational nonspinning reserves. Overall, the results indicate the computational tractability of the proposed method. Part II presents a distributed generation scheduling model to optimally allocate energy and spinning reserves among competing generators in a day-ahead market. The model is based on the coordination between individual generators and a market entity. The proposed method uses forecasting, augmented pricing and locational signals to induce efficient commitment of generators based on firm posted prices. It is price-based but does not rely on multiple iterations, minimizes information exchange and simplifies the market clearing process. Simulations of the distributed method performed on a six-bus test system showed that, using an appropriate set of prices, it is possible to emulate the results of a conventional centralized solution, without need of providing make-whole payments to generators. Likewise, they showed that the distributed method can accommodate transactions with different products and complex security constraints.
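
    As a rough illustration of co-optimizing energy and reserve, the toy dispatch below minimizes offer costs subject to a demand balance, a system reserve requirement, and per-unit capacity limits. It is a deterministic sketch with assumed cost and capacity data; it omits the unit-commitment binaries, the network, and the post-contingency states that the thesis models.

      # Minimal toy sketch (deterministic, no network or contingency states): jointly
      # dispatch energy p_i and spinning reserve r_i to minimize cost subject to a
      # demand balance, a system reserve requirement, and per-unit capacity limits.
      import numpy as np
      from scipy.optimize import linprog

      c_energy = np.array([20.0, 35.0, 50.0])   # $/MWh energy offers (assumed)
      c_reserve = np.array([2.0, 4.0, 6.0])     # $/MW reserve offers (assumed)
      pmax = np.array([100.0, 80.0, 60.0])      # unit capacities (assumed)
      demand, reserve_req = 150.0, 40.0

      n = len(pmax)
      c = np.concatenate([c_energy, c_reserve])            # variables: [p, r]
      A_eq = np.concatenate([np.ones(n), np.zeros(n)])[None, :]
      b_eq = [demand]                                      # sum(p) = demand
      A_ub = np.vstack([
          np.concatenate([np.zeros(n), -np.ones(n)]),      # -sum(r) <= -reserve_req
          np.hstack([np.eye(n), np.eye(n)]),               # p_i + r_i <= pmax_i
      ])
      b_ub = np.concatenate([[-reserve_req], pmax])
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                    bounds=[(0, None)] * (2 * n))
      print(res.x[:n], res.x[n:])                          # dispatch and reserves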

  9. Effect of surficial disturbance on exchange between groundwater and surface water in nearshore margins

    USGS Publications Warehouse

    Rosenberry, Donald O.; Toran, Laura; Nyquist, Jonathan E.

    2010-01-01

    Low‐permeability sediments situated at or near the sediment‐water interface can influence seepage in nearshore margins, particularly where wave energy or currents are minimal. Seepage meters were used to quantify flow across the sediment‐water interface at two lakes where flow was from surface water to groundwater. Disturbance of the sediment bed substantially increased seepage through the sandy sediments of both lakes. Seepage increased by factors of 2.6 to 7.7 following bed disturbance at seven of eight measurement locations at Mirror Lake, New Hampshire, where the sediment representing the greatest restriction to flow was situated at the sediment‐water interface. Although the veneer of low‐permeability sediment was very thin and easily disturbed, accumulation on the bed surface was aided by a physical setting that minimized wind‐generated waves and current. At Lake Belle Taine, Minnesota, where pre‐disturbance downward seepage was smaller than at Mirror Lake, but hydraulic gradients were very large, disturbance of a 20 to 30 cm thick medium sand layer resulted in increases in seepage of 2 to 3 orders of magnitude. Exceptionally large seepage rates, some exceeding 25,000 cm/d, were recorded following bed disturbance. Since it is common practice to walk on the bed while installing or making seepage measurements, disruption of natural seepage rates may be a common occurrence in nearshore seepage studies. Disturbance of the bed should be avoided or minimized when utilizing seepage meters in shallow, nearshore settings, particularly where waves or currents are infrequent or minimal.

  10. Optimal Inlet Shape Design of N2B Hybrid Wing Body Configuration

    NASA Technical Reports Server (NTRS)

    Kim, Hyoungjin; Liou, Meng-Sing

    2012-01-01

    The N2B hybrid wing body aircraft was conceptually designed to meet environmental and performance goals for the N+2 generation transport set by the Subsonic Fixed Wing project of the NASA Fundamental Aeronautics Program. In the present study, flow simulations are conducted around the N2B configuration by a Reynolds-averaged Navier-Stokes flow solver using unstructured meshes. Boundary conditions at the engine fan face and nozzle exhaust planes are provided by the NPSS thermodynamic engine cycle model. The flow simulations reveal challenging design issues arising from the boundary-layer-ingesting offset inlet and airframe-propulsion integration. Adjoint-based optimal designs are then conducted for the inlet shape to minimize the airframe drag force and flow distortion at fan faces. Design surfaces are parameterized by NURBS, and the cowl lip geometry is modified by a spring analogy approach. In the drag minimization design, flow separation on the cowl surfaces is almost removed, and the shock wave strength is markedly reduced. For the distortion minimization design, a circumferential distortion indicator DPCP(sub avg) is adopted as the design objective and the diffuser bottom and side wall surfaces are perturbed for the design. The distortion minimization results in a 12.5% reduction in the objective function.

  11. Essays in renewable energy and emissions trading

    NASA Astrophysics Data System (ADS)

    Kneifel, Joshua D.

    Environmental issues have become a key political concern over the past forty years and have resulted in the enactment of many different environmental policies. The three essays in this dissertation add to the literature on renewable energy policies and sulfur dioxide emissions trading. The first essay ascertains which state policies are accelerating deployment of non-hydropower renewable electricity generation capacity into a state's electric power industry. As would be expected, policies that lead to significant increases in actual renewable capacity in that state either set a Renewables Portfolio Standard with a certain level of required renewable capacity or use Clean Energy Funds to directly fund utility-scale renewable capacity construction. A surprising result is that Required Green Power Options, a policy that merely requires all utilities in a state to offer the option for consumers to purchase renewable energy at a premium rate, has a sizable impact on non-hydro renewable capacity in that state. The second essay studies the theoretical impacts fuel contract constraints have on an electricity generating unit's compliance costs of meeting the emissions compliance restrictions set by Phase I of the Title IV SO2 Emissions Trading Program. Fuel contract constraints restrict a utility's degrees of freedom in coal purchasing options, which can lead to the use of a more expensive compliance option and higher compliance costs. The third essay analytically and empirically shows how fuel contract constraints impact the emissions allowance market and total electric power industry compliance costs. This essay uses generating unit-level simulations to replicate results from previous studies and shows that fuel contracts appear to explain a large portion (65%) of the previously unexplained compliance costs in those simulations. Also, my study considers more appropriate plant-level decisions for compliance choices by analytically examining the plant-level decision-making process to show how cost-minimization at the more complex plant level may deviate from cost-minimization at the generating unit level.

  12. A systematic approach to numerical dispersion in Maxwell solvers

    NASA Astrophysics Data System (ADS)

    Blinne, Alexander; Schinkel, David; Kuschel, Stephan; Elkina, Nina; Rykovanov, Sergey G.; Zepf, Matt

    2018-03-01

    The finite-difference time-domain (FDTD) method is a well-established method for solving the time evolution of Maxwell's equations. Unfortunately, the scheme introduces numerical dispersion and therefore phase and group velocities which deviate from the correct values. The numerical solution of Maxwell's equations in more than one dimension results in non-physical predictions such as numerical dispersion or numerical Cherenkov radiation emitted by a relativistic electron beam propagating in vacuum. Improved solvers, which keep the staggered Yee-type grid for electric and magnetic fields, generally modify the spatial derivative operator in the Maxwell-Faraday equation by increasing the computational stencil. These modified solvers can be characterized by different sets of coefficients, leading to different dispersion properties. In this work we introduce a norm function to rewrite the choice of coefficients into a minimization problem. We solve this problem numerically and show that the minimization procedure leads to phase and group velocities that are considerably closer to c than in schemes with manually set coefficients available in the literature. Depending on the specific problem at hand (e.g. electron beam propagation in plasma, high-order harmonic generation from plasma surfaces, etc.), the norm function can be chosen accordingly, for example, to minimize the numerical dispersion in a certain given propagation direction. Particle-in-cell simulations of an electron beam propagating in vacuum using our solver are provided.
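
    A one-dimensional toy version of this idea is sketched below; the extended staggered stencil, the CFL number, and the RMS norm are assumptions for illustration and are much simpler than the multi-dimensional solvers treated in the paper. The scan should land close to the textbook fourth-order coefficient c2 = -1/24.

      # Simplified 1D illustration (assumed toy, not the full multi-dimensional solver):
      # write the choice of an extended-stencil coefficient as the minimization of a
      # norm of the phase-velocity error, then scan for the minimizer.
      import numpy as np

      dx, c = 1.0, 1.0
      dt = 0.5 * dx / c                       # CFL number 0.5 (assumed)
      k = np.linspace(1e-3, np.pi / dx, 400)  # resolved wavenumbers

      def norm(c2):
          c1 = 1.0 - 3.0 * c2                 # consistency condition: c1 + 3*c2 = 1
          s = (c1 * np.sin(k * dx / 2) + c2 * np.sin(3 * k * dx / 2)) / dx
          arg = c * dt * s
          if np.any(np.abs(arg) >= 1.0):      # dispersion relation invalid / unstable
              return np.inf
          omega = 2.0 / dt * np.arcsin(arg)   # numerical dispersion relation
          vphase = omega / k
          return np.sqrt(np.mean((vphase / c - 1.0) ** 2))

      c2_grid = np.linspace(-0.15, 0.05, 2001)
      best = min(c2_grid, key=norm)
      print(best, norm(best), norm(-1.0 / 24.0))  # compare with the standard 4th-order value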

  13. Gene Selection and Cancer Classification: A Rough Sets Based Approach

    NASA Astrophysics Data System (ADS)

    Sun, Lijun; Miao, Duoqian; Zhang, Hongyun

    Identification of informative gene subsets responsible for discerning between available samples of gene expression data is an important task in bioinformatics. Reducts, from rough set theory, corresponding to a minimal set of essential genes for discerning samples, are an efficient tool for gene selection. Due to the computational complexity of the existing reduct algorithms, feature ranking is usually used as a first step to narrow down the gene space, and top-ranked genes are selected. In this paper, we define a novel criterion for scoring genes based on the expression-level difference between classes and each gene's contribution to classification, and we present an algorithm for generating all possible reducts from the informative genes. The algorithm takes the whole attribute set into account and finds short reducts with a significant reduction in computational complexity. An exploration of this approach on benchmark gene expression data sets demonstrates that it is successful in selecting highly discriminative genes, and the classification accuracy is impressive.
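
    A plausible rendering of the ranking step is sketched below; the scoring rule (between-class mean difference scaled by the pooled spread) and the synthetic data are assumptions, not the paper's exact criterion, and the reduct mining that would follow is omitted.

      # Illustrative sketch (assumed scoring rule, not the paper's exact criterion):
      # rank genes by the between-class difference in mean expression relative to the
      # pooled within-class spread, then keep the top-ranked genes as the informative
      # pool from which reducts would later be mined.
      import numpy as np

      def score_genes(X, y):
          """X: samples x genes expression matrix; y: binary class labels."""
          X0, X1 = X[y == 0], X[y == 1]
          diff = np.abs(X0.mean(axis=0) - X1.mean(axis=0))
          spread = X0.std(axis=0) + X1.std(axis=0) + 1e-9
          return diff / spread

      rng = np.random.default_rng(1)
      X = rng.normal(size=(40, 200))           # synthetic expression data
      y = np.repeat([0, 1], 20)
      X[y == 1, 5] += 2.0                      # gene 5 is informative by construction
      scores = score_genes(X, y)
      top = np.argsort(scores)[::-1][:10]      # top-ranked candidate genes
      print(top)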

  14. NASA's RPS Design Reference Mission Set for Solar System Exploration

    NASA Technical Reports Server (NTRS)

    Balint, Tibor S.

    2007-01-01

    NASA's 2006 Solar System Exploration (SSE) Strategic Roadmap identified a set of proposed large Flagship, medium New Frontiers and small Discovery class missions, addressing key exploration objectives. These objectives respond to the recommendations by the National Research Council (NRC), reported in the SSE Decadal Survey. The SSE Roadmap is down-selected from an over-subscribed set of missions, called the SSE Design Reference Mission (DRM) set. Missions in the Flagship and New Frontiers classes can consider Radioisotope Power Systems (RPSs), while small Discovery class missions are not permitted to use them, due to cost constraints. In line with the SSE DRM set and the SSE Roadmap missions, the RPS DRM set represents a set of missions, which can be enabled or enhanced by RPS technologies. At present, NASA has proposed the development of two new types of RPSs. These are the Multi-Mission Radioisotope Thermoelectric Generator (MMRTG), with static power conversion; and the Stirling Radioisotope Generator (SRG), with dynamic conversion. Advanced RPSs, under consideration for possible development, aim to increase specific power levels. In effect, this would either increase electric power generation for the same amount of fuel, or reduce fuel requirements for the same power output, compared to the proposed MMRTG or SRG. Operating environments could also influence the design, such that an RPS on the proposed Titan Explorer would use smaller fins to minimize heat rejection in the extreme cold environment; while the Venus Mobile Explorer long-lived in-situ mission would require the development of a new RPS, in order to tolerate the extreme hot environment, and to simultaneously provide active cooling to the payload and other electric components. This paper discusses NASA's SSE RPS DRM set, in line with the SSE DRM set. It gives a qualitative assessment regarding the impact of various RPS technology and configuration options on potential mission architectures, which could support NASA's RPS technology development planning, and provide an understanding of fuel need trades over the next three decades.

  15. The clockwork supergravity

    NASA Astrophysics Data System (ADS)

    Kehagias, Alex; Riotto, Antonio

    2018-02-01

    We show that the minimal D = 5, N = 2 gauged supergravity set-up may naturally encode the recently proposed clockwork mechanism. The minimal embedding requires one vector multiplet in addition to the supergravity multiplet, and the clockwork scalar is identified with the scalar in the vector multiplet. The scalar has a two-parameter potential and it can accommodate the clockwork, the Randall-Sundrum model, and a no-scale model with a flat potential, depending on the values of the parameters. The continuous clockwork background breaks half of the original supersymmetries, leaving a D = 4, N = 1 theory on the boundaries. We also show that the hierarchy generated by the clockwork is not exponential but rather power law. The reason is that the four-dimensional Planck scale has a power-law dependence on the compactification radius, whereas the corresponding KK spectrum depends on the logarithm of the latter.

  16. Management of the orbital environment

    NASA Technical Reports Server (NTRS)

    Loftus, Joseph P., Jr.; Kessler, Donald J.; Anz-Meador, Phillip D.

    1991-01-01

    Data regarding orbital debris are presented to shed light on the requirements of environmental management in space, and strategies for active intervention and operations are given. Debris are generated by inadvertent explosions of upper stages, intentional military explosions, and collisional breakups. Design and operation practices are set forth for minimizing debris generation and removing useless debris from low-earth and geosynchronous orbits. Self-disposal options include propulsive maneuvers, drag-augmentation devices, and tether systems, and the drag devices are described as simple and passive. Active retrieval and disposition are considered, and the difficulty of removing small debris is examined. Active intervention techniques are required since pollution prevention is more effective than remediation for the problems of both earth and space.

  17. Test Scheduling for Core-Based SOCs Using Genetic Algorithm Based Heuristic Approach

    NASA Astrophysics Data System (ADS)

    Giri, Chandan; Sarkar, Soumojit; Chattopadhyay, Santanu

    This paper presents a Genetic algorithm (GA) based solution to co-optimize test scheduling and wrapper design for core-based SOCs. Core testing solutions are generated as a set of wrapper configurations, represented as rectangles with width equal to the number of TAM (Test Access Mechanism) channels and height equal to the corresponding testing time. A locally optimal best-fit heuristic based bin packing algorithm has been used to determine the placement of rectangles, minimizing the overall test time, whereas the GA has been utilized to generate the sequence of rectangles to be considered for placement. Experimental results on the ITC'02 benchmark SOCs show that the proposed method provides better solutions compared to the recent works reported in the literature.
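
    The placement step might look like the following simplified sketch; the rectangle model (width = contiguous TAM channels, height = test time) and the "finish earliest" rule are assumptions that stand in for the paper's best-fit heuristic, and the GA that orders the rectangles is not shown.

      # Simplified sketch of the best-fit placement step (assumed model): each core's
      # wrapper configuration is a rectangle (width = TAM channels, height = test
      # time); cores are placed in the order supplied by the GA, each on the
      # contiguous channel window that lets it finish earliest.
      def schedule(rectangles, total_channels):
          """rectangles: list of (width, test_time) in GA-chromosome order."""
          end = [0.0] * total_channels                 # current end time per channel
          placement = []
          for w, t in rectangles:
              best = None
              for s in range(total_channels - w + 1):  # candidate channel windows
                  start = max(end[s:s + w])
                  finish = start + t
                  if best is None or finish < best[0]:
                      best = (finish, s)
              finish, s = best
              for ch in range(s, s + w):               # reserve the window until finish
                  end[ch] = finish
              placement.append((s, w, finish - t, finish))
          return placement, max(end)                   # overall SOC test time

      plc, makespan = schedule([(2, 30.0), (3, 20.0), (1, 50.0)], total_channels=4)
      print(plc, makespan)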

  18. Many Denjoy minimal sets for monotone recurrence relations

    NASA Astrophysics Data System (ADS)

    Wang, Ya-Nan; Qin, Wen-Xin

    2014-09-01

    We extend Mather's work (1985 Comment. Math. Helv. 60 508-57) to high-dimensional cylinder maps defined by monotone recurrence relations, e.g. the generalized Frenkel-Kontorova model with finite range interactions. We construct uncountably many Denjoy minimal sets provided that the Birkhoff minimizers with some irrational rotation number ω do not form a foliation.

  19. Constrained Multiobjective Biogeography Optimization Algorithm

    PubMed Central

    Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping

    2014-01-01

    Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining them with their nearest nondominated feasible individuals. The convergence of CMBOA is proved using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems, and experimental results show that CMBOA performs better than or similarly to the classical NSGA-II and IS-MOEA. PMID:25006591

  20. CIRCAL-2 - General-purpose on-line circuit design.

    NASA Technical Reports Server (NTRS)

    Dertouzos, M. L.; Jessel, G. P.; Stinger, J. R.

    1972-01-01

    CIRCAL-2 is a second-generation general-purpose on-line circuit-design program with the following main features: (1) multiple-analysis capability; (2) uniform and general data structures for handling text editing, network representations, and output results, regardless of analysis; (3) special techniques and structures for minimizing and controlling user-program interaction; (4) use of functionals for the description of hysteresis and heat effects; and (5) ability to define optimization procedures that 'replace' the user. The paper discusses the organization of CIRCAL-2, the aforementioned main features, and their consequences, such as a set of network elements and models general enough for most analyses and a set of functions tailored to circuit-design requirements. The presentation is descriptive, concentrating on conceptual rather than on program implementation details.

  1. A new approach to the convective parameterization of the regional atmospheric model BRAMS

    NASA Astrophysics Data System (ADS)

    Dos Santos, A. F.; Freitas, S. R.; de Campos Velho, H. F.; Luz, E. F.; Gan, M. A.; de Mattos, J. Z.; Grell, G. A.

    2013-05-01

    A simulation of the summer characteristics of January 2010 was performed using the atmospheric model Brazilian developments on the Regional Atmospheric Modeling System (BRAMS). The convective parameterization scheme of Grell and Dévényi was used to represent clouds and their interaction with the large-scale environment. As a result, the precipitation forecasts can be combined in several ways, generating a numerical representation of precipitation and atmospheric heating and moistening rates. The purpose of this study was to generate a set of weights to compute a best combination of the hypotheses (closures) of the convective scheme. It is an inverse problem of parameter estimation, and it is solved as an optimization problem. To minimize the difference between observed data and forecasted precipitation, the objective function was computed as the quadratic difference between five simulated precipitation fields and the observations. The precipitation field estimated by the Tropical Rainfall Measuring Mission satellite was used as observed data. Weights were obtained using the firefly algorithm, and the mass fluxes of each closure of the convective scheme were weighted, generating a new set of mass fluxes. The results indicated better skill of the model with the new methodology compared with the previous ensemble-mean calculation.
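
    The weight-estimation step can be illustrated with the sketch below, which uses nonnegative least squares as a stand-in for the firefly algorithm and entirely synthetic member fields; the field sizes, weights, and data are assumptions for illustration only.

      # Illustrative sketch (nonnegative least squares as a stand-in for the firefly
      # optimization in the paper): find weights for the closure members that minimize
      # the squared misfit to the observed precipitation field.
      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(2)
      ny, nx, n_members = 30, 40, 5
      members = rng.gamma(2.0, 2.0, size=(n_members, ny, nx))    # synthetic member fields
      true_w = np.array([0.4, 0.1, 0.3, 0.0, 0.2])               # assumed "true" weights
      observed = np.tensordot(true_w, members, axes=1)           # synthetic observed field

      A = members.reshape(n_members, -1).T                       # pixels x members
      b = observed.ravel()
      weights, residual = nnls(A, b)                             # nonnegative weight fit
      print(weights.round(3), residual)                          # recovers true_w here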

  2. Can Geostatistical Models Represent Nature's Variability? An Analysis Using Flume Experiments

    NASA Astrophysics Data System (ADS)

    Scheidt, C.; Fernandes, A. M.; Paola, C.; Caers, J.

    2015-12-01

    The lack of understanding of the Earth's geological and physical processes governing sediment deposition renders subsurface modeling subject to large uncertainty. Geostatistics is often used to model uncertainty because of its capability to stochastically generate spatially varying realizations of the subsurface. These methods can generate a range of realizations of a given pattern - but how representative are these of the full natural variability? And how can we identify the minimum set of images that represent this natural variability? Here we use this minimum set to define the geostatistical prior model: a set of training images that represent the range of patterns generated by autogenic variability in the sedimentary environment under study. The proper definition of the prior model is essential in capturing the variability of the depositional patterns. This work starts with a set of overhead images from an experimental basin that showed ongoing autogenic variability. We use the images to analyze the essential characteristics of this suite of patterns. In particular, our goal is to define a prior model (a minimal set of selected training images) such that geostatistical algorithms, when applied to this set, can reproduce the full measured variability. A necessary prerequisite is to define a measure of variability. In this study, we measure variability using a dissimilarity distance between the images. The distance indicates whether two snapshots contain similar depositional patterns. To reproduce the variability in the images, we apply an MPS algorithm to the set of selected snapshots of the sedimentary basin that serve as training images. The training images are chosen from among the initial set by using the distance measure to ensure that only dissimilar images are chosen. Preliminary investigations show that MPS can reproduce fairly accurately the natural variability of the experimental depositional system. Furthermore, the selected training images provide process information. They fall into three basic patterns: a channelized end member, a sheet flow end member, and one intermediate case. These represent the continuum between autogenic bypass or erosion, and net deposition.
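
    One plausible way to pick mutually dissimilar training images from a precomputed distance matrix is greedy farthest-point sampling, sketched below; the distance matrix, the feature stand-ins, and the selection rule are assumptions and may differ from the study's actual procedure.

      # Sketch of one plausible selection rule (greedy farthest-point sampling over a
      # precomputed dissimilarity matrix); the study's actual distance between
      # depositional snapshots is assumed to be supplied in D.
      import numpy as np

      def select_training_images(D, k):
          """D: symmetric n x n dissimilarity matrix; returns k mutually dissimilar indices."""
          chosen = [int(np.argmax(D.sum(axis=1)))]     # start from the most 'extreme' image
          while len(chosen) < k:
              dist_to_set = D[:, chosen].min(axis=1)   # distance of each image to the chosen set
              dist_to_set[chosen] = -1.0               # never re-pick an already chosen image
              chosen.append(int(np.argmax(dist_to_set)))
          return chosen

      rng = np.random.default_rng(3)
      pts = rng.random((50, 2))                        # stand-in features for 50 snapshots
      D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
      print(select_training_images(D, 5))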

  3. Automated planning for intelligent machines in energy-related applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weisbin, C.R.; de Saussure, G.; Barhen, J.

    1984-01-01

    This paper discusses the current activities of the Center for Engineering Systems Advanced Research (CESAR) program related to plan generation and execution by an intelligent machine. The system architecture for the CESAR mobile robot (named HERMIES-1) is described. The minimal cut-set approach is developed to reduce the tree search time of conventional backward-chaining planning techniques. Finally, a real-time concept of an Intelligent Machine Operating System is presented in which planning and reasoning are embedded in a system for resource allocation and process management.

  4. System-on-Chip Data Processing and Data Handling Spaceflight Electronics

    NASA Technical Reports Server (NTRS)

    Kleyner, I.; Katz, R.; Tiggeler, H.

    1999-01-01

    This paper presents a methodology and a tool set which implements automated generation of moderate-size blocks of customized intellectual property (IP), thus effectively reusing prior work and minimizing the labor intensive, error-prone parts of the design process. Customization of components allows for optimization for smaller area and lower power consumption, which is an important factor given the limitations of resources available in radiation-hardened devices. The effects of variations in HDL coding style on the efficiency of synthesized code for various commercial synthesis tools are also discussed.

  5. Functional Bregman Divergence and Bayesian Estimation of Distributions (Preprint)

    DTIC Science & Technology

    2008-01-01

    Shows that if the set of possible minimizers A includes E_PF[F], then g* = E_PF[F] minimizes the expectation of any Bregman divergence. Note the theorem ... probability distribution PF defined over the set M. Let A be a set of functions that includes E_PF[F] if it exists. Suppose the function g* minimizes ... the expected Bregman divergence between the random function F and any function g ∈ A, such that g* = arg inf_{g ∈ A} E_PF[d_φ(F, g)]. Then, if g* exists ...

  6. Minimization of Basis Risk in Parametric Earthquake Cat Bonds

    NASA Astrophysics Data System (ADS)

    Franco, G.

    2009-12-01

    A catastrophe (cat) bond is an instrument used by insurance and reinsurance companies, by governments or by groups of nations to cede catastrophic risk to the financial markets, which are capable of supplying cover for highly destructive events, surpassing the typical capacity of traditional reinsurance contracts. Parametric cat bonds, a specific type of cat bond, use trigger mechanisms or indices that depend on physical event parameters published by respected third parties in order to determine whether a part or the entire bond principal is to be paid for a certain event. First generation cat bonds, or cat-in-a-box bonds, display a trigger mechanism that consists of a set of geographic zones in which certain conditions need to be met by an earthquake’s magnitude and depth in order to trigger payment of the bond principal. Second generation cat bonds use an index formulation that typically consists of a sum of products of a set of weights by a polynomial function of the ground motion variables reported by a geographically distributed seismic network. These instruments are especially appealing to developing countries with incipient insurance industries wishing to cede catastrophic losses to the financial markets because the payment trigger mechanism is transparent and does not involve the parties ceding or accepting the risk, significantly reducing moral hazard. In order to be successful in the market, however, parametric cat bonds have typically been required to specify relatively simple trigger conditions. The consequence of such simplifications is the increase of basis risk. This risk represents the possibility that the trigger mechanism fails to accurately capture the actual losses of a catastrophic event, namely that it does not trigger for a highly destructive event or vice versa, that a payment of the bond principal is caused by an event that produced insignificant losses. The first case disfavors the sponsor who was seeking cover for its losses while the second disfavors the investor who loses part of the investment without a reasonable cause. A streamlined and fairly automated methodology has been developed to design parametric triggers that minimize the basis risk while still maintaining their level of relative simplicity. Basis risk is minimized in both first- and second-generation parametric cat bonds through an optimization procedure that aims to find the most appropriate magnitude thresholds, geographic zones, and weight index values. Sensitivity analyses to different design assumptions show that first generation cat bonds are typically affected by a large negative basis risk, namely the risk that the bond will not trigger for events within the risk level transferred, unless a sufficiently small geographic resolution is selected to define the trigger zones. Second generation cat bonds, in contrast, display a bias towards negative or positive basis risk depending on the degree of the polynomial used as well as on other design parameters. Two examples are presented: the construction of a first generation parametric trigger mechanism for Costa Rica and the design of a second generation parametric index for Japan.

  7. Development of multiplex microsatellite PCR panels for the seagrass Thalassia hemprichii (Hydrocharitaceae).

    PubMed

    van Dijk, Kor-Jent; Mellors, Jane; Waycott, Michelle

    2014-11-01

    New microsatellites were developed for the seagrass Thalassia hemprichii (Hydrocharitaceae), a long-lived seagrass species that is found throughout the shallow waters of the tropical and subtropical Indo-West Pacific. Three multiplex PCR panels were designed utilizing new and previously developed markers, resulting in a toolkit for generating a 16-locus genotype. • Through the use of microsatellite enrichment and next-generation sequencing, 16 new, validated, polymorphic microsatellite markers were isolated. Diversity was between two and four alleles per locus, totaling 36 alleles. These markers, plus previously developed microsatellite markers for T. hemprichii and T. testudinum, were tested for suitability in multiplex PCR panels. • The generation of an easily replicated suite of multiplex panels of codominant molecular markers will allow for high-resolution and detailed genetic structure analysis and clonality assessment with minimal genotyping costs. We suggest the establishment of a T. hemprichii primer convention for the unification of future data sets.

  8. Minimal measures for Euler-Lagrange flows on finite covering spaces

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Xia, Zhihong

    2016-12-01

    In this paper we study the minimal measures for positive definite Lagrangian systems on compact manifolds. We are particularly interested in manifolds with more complicated fundamental groups. Mather’s theory classifies the minimal or action-minimizing measures according to the first (co-)homology group of a given manifold. We extend Mather’s notion of minimal measures to a larger class for compact manifolds with non-commutative fundamental groups, and use finite coverings to study the structure of these extended minimal measures. We also define action-minimizers and minimal measures in the homotopical sense. Our program is to study the structure of homotopical minimal measures by considering Mather’s minimal measures on finite covering spaces. Our goal is to show that, in general, manifolds with a non-commutative fundamental group have a richer set of minimal measures, hence a richer dynamical structure. As an example, we study the geodesic flow on surfaces of higher genus. Indeed, by going to the finite covering spaces, the set of minimal measures is much larger and more interesting.

  9. Prediction of protein tertiary structure to low resolution: performance for a large and structurally diverse test set.

    PubMed

    Eyrich, V A; Standley, D M; Friesner, R A

    1999-05-14

    We report the tertiary structure predictions for 95 proteins ranging in size from 17 to 160 residues starting from known secondary structure. Predictions are obtained from global minimization of an empirical potential function followed by the application of a refined atomic overlap potential. The minimization strategy employed represents a variant of the Monte Carlo plus minimization scheme of Li and Scheraga applied to a reduced model of the protein chain. For all of the cases except beta-proteins larger than 75 residues, a native-like structure, usually 4-6 Å root-mean-square deviation from the native, is located. For beta-proteins larger than 75 residues, the energy gap between native-like structures and the lowest energy structures produced in the simulation is large, so that low RMSD structures are not generated starting from an unfolded state. This is attributed to the lack of an explicit hydrogen bond term in the potential function, which we hypothesize is necessary to stabilize large assemblies of beta-strands. Copyright 1999 Academic Press.

  10. BioPCD - A Language for GUI Development Requiring a Minimal Skill Set

    PubMed Central

    Alvare, Graham GM; Roche-Lima, Abiel; Fristensky, Brian

    2016-01-01

    BioPCD is a new language whose purpose is to simplify the creation of Graphical User Interfaces (GUIs) by biologists with minimal programming skills. The first step in developing BioPCD was to create a minimal superset of the language referred to as PCD (Pythonesque Command Description). PCD defines the core of terminals and high-level nonterminals required to describe data of almost any type. BioPCD adds to PCD the constructs necessary to describe GUI components and the syntax for executing system commands. BioPCD is implemented using JavaCC to convert the grammar into code. BioPCD is designed to be terse and readable and simple enough to be learned by copying and modifying existing BioPCD files. We demonstrate that BioPCD can easily be used to generate GUIs for existing command line programs. Although BioPCD was designed to make it easier to run bioinformatics programs, it could be used in any domain in which many useful command line programs exist that do not have GUI interfaces. PMID:27818582

  11. CMPF: class-switching minimized pathfinding in metabolic networks.

    PubMed

    Lim, Kevin; Wong, Limsoon

    2012-01-01

    The metabolic network is an aggregation of enzyme-catalyzed reactions that converts one compound to another. A path in a metabolic network is a sequence of enzymes that describes how a chemical compound of interest can be produced in a biological system. As the number of such paths is quite large, many methods have been developed to score paths so that the k-shortest paths represent the set of paths that are biologically meaningful or efficient. However, these approaches do not consider whether the sequence of enzymes can be manufactured in the same pathway/species/localization. As a result, a predicted sequence might consist of groups of enzymes that operate in distinct pathway/species/localization and may not truly reflect the events occurring within the cell. We propose a path weighting method CMPF (Class-switching Minimized Pathfinder) to search for routes in a metabolic network which minimize pathway switching. In biological terms, a pathway is a series of chemical reactions which define a specific function (e.g. glycolysis). We conjecture that routes that cross many pathways are inefficient since different pathways define different metabolic functions. In addition, native routes are also well characterized within pathways, suggesting that reasonable paths should not involve too many pathway switches. Our method can be generalized when reactions participate in a class set (e.g., pathways, species or cellular localization) so that the paths predicted have minimal class crossings. We show that our method generates k-paths that involve the fewest class switches. In addition, we also show that native paths are recoverable and alternative paths deviate less from native paths compared to other methods. This suggests that paths ranked by our method could be a way to predict paths that are likely to occur in biological systems.
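
    A simplified version of class-switch-minimized pathfinding is sketched below; the graph, the pathway labels, and the cost that simply counts switches are assumptions standing in for the actual CMPF weighting.

      # Simplified sketch (assumed weighting, not the exact CMPF score): shortest path
      # in a metabolic graph where each enzyme edge is annotated with a pathway class
      # and the cost counts the number of class switches along the route.
      import heapq

      def min_switch_path(edges, source, target):
          """edges: list of (compound_u, compound_v, pathway_class)."""
          adj = {}
          for u, v, cls in edges:
              adj.setdefault(u, []).append((v, cls))
          # Dijkstra over states (compound, class of the edge used to reach it)
          heap = [(0, source, "")]                     # "" marks 'no pathway yet'
          best = {}
          while heap:
              switches, node, cls = heapq.heappop(heap)
              if node == target:
                  return switches
              if best.get((node, cls), float("inf")) <= switches:
                  continue
              best[(node, cls)] = switches
              for nxt, ncls in adj.get(node, []):
                  cost = switches + (0 if cls in ("", ncls) else 1)
                  heapq.heappush(heap, (cost, nxt, ncls))
          return None

      edges = [("A", "B", "glycolysis"), ("B", "C", "glycolysis"),
               ("A", "D", "TCA"), ("D", "C", "glycolysis")]
      print(min_switch_path(edges, "A", "C"))   # 0: the all-glycolysis route wins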

  12. EPA issues interim final waste minimization guidance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergeson, L.L.

    1993-08-01

    The U.S. Environmental Protection Agency (EPA) has released a new and detailed interim final guidance to assist hazardous waste generators in certifying they have a waste minimization program in place under the Resource Conservation and Recovery Act (RCRA). EPA's guidance identifies the basic elements of a waste minimization program in place that, if present, will allow people to certify they have implemented a program to reduce the volume and toxicity of hazardous waste to the extent economically practical. The guidance is directly applicable to generators of 1000 or more kilograms per month of hazardous waste, or large-quantity generators, and to owners and operators of hazardous waste treatment, storage or disposal facilities who manage their own hazardous waste on site. Small-quantity generators that generate more than 100 kilograms, but less than 1,000 kilograms, per month of hazardous waste are not subject to the same program in place certification requirement. Rather, they must certify on their manifests that they have made a good faith effort to minimize their waste generation.

  13. Tuning rules for robust FOPID controllers based on multi-objective optimization with FOPDT models.

    PubMed

    Sánchez, Helem Sabina; Padula, Fabrizio; Visioli, Antonio; Vilanova, Ramon

    2017-01-01

    In this paper, a set of optimally balanced tuning rules for fractional-order proportional-integral-derivative controllers is proposed. The control problem of minimizing at once the integrated absolute error for both the set-point and the load disturbance responses is addressed. The control problem is stated as a multi-objective optimization problem where a first-order-plus-dead-time process model subject to a robustness constraint, based on the maximum sensitivity, has been considered. A set of Pareto optimal solutions is obtained for different normalized dead times, and then the optimal balance between the competing objectives is obtained by choosing the Nash solution among the Pareto-optimal ones. A curve fitting procedure has then been applied in order to generate suitable tuning rules. Several simulation results show the effectiveness of the proposed approach. Copyright © 2016. Published by Elsevier Ltd.
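
    The Nash-selection step can be illustrated as below; the Pareto points and the choice of the disagreement point (the worst value of each objective over the front) are assumptions for the example.

      # Sketch of the Nash selection step (assumed disagreement point = the worst value
      # of each objective over the Pareto set): among Pareto-optimal (IAE_setpoint,
      # IAE_load) pairs, pick the point maximizing the product of improvements.
      import numpy as np

      pareto = np.array([            # assumed Pareto front from the constrained optimization
          [1.00, 3.00],
          [1.20, 2.20],
          [1.50, 1.80],
          [2.20, 1.40],
      ])
      disagreement = pareto.max(axis=0)         # worst IAE of each objective
      gains = disagreement - pareto             # improvement of each solution over it
      nash_idx = int(np.argmax(np.prod(gains, axis=1)))
      print(pareto[nash_idx])                   # the balanced compromise point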

  14. Expansion of 50 CAG/CTG repeats excluded in schizophrenia by application of a highly efficient approach using repeat expansion detection and a PCR screening set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowen, T.; Guy, C.; Speight, G.

    Studies of the transmission of schizophrenia in families with affected members in several generations have suggested that an expanded trinucleotide repeat mechanism may contribute to the genetic inheritance of this disorder. Using repeat expansion detection (RED), we and others have previously found that the distribution of CAG/CTG repeat size is larger in patients with schizophrenia than in controls. In an attempt to identify the specific expanded CAG/CTG locus or loci associated with schizophrenia, we have now used an approach based on a CAG/CTG PCR screening set combined with RED data. This has allowed us to minimize genotyping while excluding 43 polymorphic autosomal loci and 7 X-chromosomal loci from the screening set as candidates for expansion in schizophrenia with a very high degree of confidence. 18 refs., 1 tab.

  15. Mining Stable Roles in RBAC

    NASA Astrophysics Data System (ADS)

    Colantonio, Alessandro; di Pietro, Roberto; Ocello, Alberto; Verde, Nino Vincenzo

    In this paper we address the problem of generating a candidate role-set for an RBAC configuration that enjoys the following two key features: it minimizes the administration cost, and it is a stable candidate role-set. To achieve these goals, we implement a three-step methodology: first, we associate a weight with each role; second, we identify and remove the user-permission assignments that cannot belong to any role with a weight exceeding a given threshold; third, we restrict the problem of finding a candidate role-set for the given system configuration using only the user-permission assignments that have not been removed in the second step—that is, user-permission assignments that belong to roles with a weight exceeding the given threshold. We formally show—the proofs of our results are rooted in graph theory—that this methodology achieves the intended goals. Finally, we discuss practical applications of our approach to the role mining problem.
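
    A highly simplified rendering of the three-step idea is sketched below; the weight function (number of users sharing a role's full permission set) and the trivial mining step are assumptions, and the paper's actual definitions differ.

      # Highly simplified sketch of the three-step idea (assumed weight function and
      # mining step): candidate roles are the distinct permission sets of users, a
      # role's weight is taken as the number of users sharing it, low-weight
      # assignments are removed, and roles are mined from what remains.
      from collections import Counter

      def mine_stable_roles(user_perms, threshold):
          """user_perms: dict user -> frozenset of permissions."""
          role_weight = Counter(user_perms.values())            # step 1: weight candidate roles
          kept = {u: perms for u, perms in user_perms.items()   # step 2: drop low-weight assignments
                  if role_weight[perms] >= threshold}
          roles = set(kept.values())                            # step 3: candidate role-set
          return roles, kept

      users = {
          "alice": frozenset({"read", "write"}),
          "bob": frozenset({"read", "write"}),
          "carol": frozenset({"read"}),
          "dave": frozenset({"admin"}),                         # unique, likely unstable
      }
      roles, kept = mine_stable_roles(users, threshold=2)
      print(roles)        # only the role shared by at least two users survives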

  16. right-sized dimple evaluator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodriguez, Sal

    2017-08-24

    The code (aka computer program written as a Matlab script) uses a unique set of n independent equations to solve for n turbulence variables. The code requires the input of a characteristic dimension, a characteristic fluid velocity, the fluid dynamic viscosity, and the fluid density. Most importantly, the code estimates the size of three key turbulent eddies: Kolmogorov, Taylor, and integral. Based on the eddy sizes, dimple dimensions are prescribed such that the key eddies (principally Taylor, and sometimes Kolmogorov) can be generated by the dimple rim and flow unimpeded through the dimple’s concave cavity. It is hypothesized that turbulent eddies are generated by the dimple rim at the dimple-surface interface. The newly generated eddies in turn entrain the movement of surrounding regions of fluid, creating more mixing. The eddies also generate lift near the wall surrounding the dimple, as they accelerate and reduce pressure in the regions near and at the dimple cavity, thereby minimizing the fluid drag.
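
    The eddy-size estimates can be illustrated with the textbook isotropic-turbulence scalings below; these relations and the sample inputs are assumptions for illustration and may differ from the equations in the released script.

      # Illustrative estimates only (standard isotropic-turbulence scalings, which may
      # differ from the released Matlab script): integral, Taylor, and Kolmogorov eddy
      # sizes from a characteristic length, velocity, and the fluid properties.
      import math

      def eddy_scales(L, U, rho, mu):
          nu = mu / rho                       # kinematic viscosity
          Re = U * L / nu                     # Reynolds number based on L
          integral = L                        # integral eddies scale with the geometry
          taylor = L * math.sqrt(10.0) * Re ** -0.5
          kolmogorov = L * Re ** -0.75
          return Re, integral, taylor, kolmogorov

      # water flowing at 2 m/s past a 0.05 m feature (assumed inputs)
      print(eddy_scales(L=0.05, U=2.0, rho=998.0, mu=1.0e-3))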

  17. A method for generating reliable atomistic models of amorphous polymers based on a random search of energy minima

    NASA Astrophysics Data System (ADS)

    Curcó, David; Casanovas, Jordi; Roca, Marc; Alemán, Carlos

    2005-07-01

    A method for generating atomistic models of dense amorphous polymers is presented. The method is organized as a two-step procedure. First, structures are generated using an algorithm that minimizes the torsional strain. After this, a relaxation algorithm is applied to minimize the non-bonding interactions. Two alternative relaxation methods, which are based on simple minimization and Concerted Rotation techniques, have been implemented. The performance of the method has been checked by simulating polyethylene, polypropylene, nylon 6, poly(L,D-lactic acid) and polyglycolic acid.

  18. Novel approach for tomographic reconstruction of gas concentration distributions in air: Use of smooth basis functions and simulated annealing

    NASA Astrophysics Data System (ADS)

    Drescher, A. C.; Gadgil, A. J.; Price, P. N.; Nazaroff, W. W.

    Optical remote sensing and iterative computed tomography (CT) can be applied to measure the spatial distribution of gaseous pollutant concentrations. We conducted chamber experiments to test this combination of techniques using an open path Fourier transform infrared spectrometer (OP-FTIR) and a standard algebraic reconstruction technique (ART). Although ART converged to solutions that showed excellent agreement with the measured ray-integral concentrations, the solutions were inconsistent with simultaneously gathered point-sample concentration measurements. A new CT method was developed that combines (1) the superposition of bivariate Gaussians to represent the concentration distribution and (2) a simulated annealing minimization routine to find the parameters of the Gaussian basis functions that result in the best fit to the ray-integral concentration data. This method, named smooth basis function minimization (SBFM), generated reconstructions that agreed well, both qualitatively and quantitatively, with the concentration profiles generated from point sampling. We present an analysis of two sets of experimental data that compares the performance of ART and SBFM. We conclude that SBFM is a superior CT reconstruction method for practical indoor and outdoor air monitoring applications.
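
    A toy version of the SBFM idea is sketched below; the beam geometry, the isotropic Gaussian basis, and the bare-bones annealer are all assumptions for illustration and do not reproduce the authors' implementation or the OP-FTIR setup.

      # Toy sketch of the SBFM idea (assumed geometry, assumed annealer): fit the
      # parameters of isotropic bivariate Gaussians so that simulated ray-integral
      # concentrations match the measured path integrals, using Metropolis-style
      # simulated annealing.
      import numpy as np

      rng = np.random.default_rng(4)

      def field(params, x, y):
          """Sum of isotropic Gaussians; params = [amp, x0, y0, sigma] per basis function."""
          c = np.zeros_like(x)
          for amp, x0, y0, sig in params.reshape(-1, 4):
              c += amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sig ** 2))
          return c

      def ray_integral(params, p0, p1, n=200):
          t = np.linspace(0.0, 1.0, n)
          x = p0[0] + t * (p1[0] - p0[0])
          y = p0[1] + t * (p1[1] - p0[1])
          length = np.hypot(p1[0] - p0[0], p1[1] - p0[1])
          return field(params, x, y).mean() * length

      # synthetic "measurements" along a fan of beam paths across a 10 m x 10 m room
      rays = [((0.0, 0.0), (10.0, yy)) for yy in np.linspace(0.0, 10.0, 8)] + \
             [((0.0, yy), (10.0, 10.0)) for yy in np.linspace(0.0, 10.0, 8)]
      truth = np.array([1.0, 6.0, 4.0, 1.5])                 # one Gaussian plume (assumed)
      measured = np.array([ray_integral(truth, a, b) for a, b in rays])

      def objective(params):
          sim = np.array([ray_integral(params, a, b) for a, b in rays])
          return np.sum((sim - measured) ** 2)

      params = np.array([0.5, 5.0, 5.0, 2.0])                # initial guess
      best, fbest = params.copy(), objective(params)
      f, T = fbest, 1.0
      for step in range(4000):
          trial = params + rng.normal(scale=0.1, size=params.size)
          trial[3] = abs(trial[3]) + 1e-3                    # keep sigma positive
          ft = objective(trial)
          if ft < f or rng.random() < np.exp(-(ft - f) / T): # Metropolis acceptance
              params, f = trial, ft
              if f < fbest:
                  best, fbest = params.copy(), f
          T *= 0.999                                         # geometric cooling
      print(best.round(2), fbest)                            # should approach 'truth'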

  19. Reconstruction of a Bacterial Genome from DNA Cassettes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christopher Dupont; John Glass; Laura Sheahan

    2011-12-31

    This basic research program comprised two major areas: (1) acquisition and analysis of marine microbial metagenomic data and development of genomic analysis tools for broad, external community use; (2) development of a minimal bacterial genome. Our Marine Metagenomic Diversity effort generated and analyzed shotgun sequencing data from microbial communities sampled from over 250 sites around the world. About 40% of the 26 Gbp of sequence data has been made publicly available to date, with a complete release anticipated in six months. Our results and those of others mining the deposited data have revealed a vast diversity of genes coding for critical metabolic processes whose phylogenetic and geographic distributions will enable a deeper understanding of carbon and nutrient cycling, microbial ecology, and rapid-rate evolutionary processes such as horizontal gene transfer by viruses and plasmids. A global assembly of the generated dataset resulted in a massive set (5 Gbp) of genome fragments that provide context to the majority of the generated data that originated from uncultivated organisms. Our Synthetic Biology team has made significant progress towards the goal of synthesizing a minimal mycoplasma genome that will have all of the machinery for independent life. This project, once completed, will provide fundamentally new knowledge about requirements for microbial life and help to lay a basic research foundation for developing microbiological approaches to bioenergy.

  20. Towards the automated analysis and database development of defibrillator data from cardiac arrest.

    PubMed

    Eftestøl, Trygve; Sherman, Lawrence D

    2014-01-01

    During resuscitation of cardiac arrest victims, a variety of information in electronic format is recorded as part of the documentation of the patient care contact and is provided for case review for quality improvement. Such review requires considerable effort and resources. There is also the problem of interobserver effects. We show that it is possible to efficiently analyze resuscitation episodes automatically using a minimal set of the available information. A minimal set of variables is defined which describes therapeutic events (compression sequences and defibrillations) and corresponding patient response events (annotated rhythm transitions). From this, a state sequence representation of the resuscitation episode is constructed, and an algorithm is developed for reasoning with this representation and extracting review variables automatically. As a case study, the method is applied to the data abstraction process used in the King County EMS. The automatically generated variables are compared to the original ones with accuracies ≥ 90% for 18 variables and ≥ 85% for the remaining four variables. It is possible to use the information present in the CPR process data recorded by the AED along with rhythm and chest compression annotations to automate the episode review.

  1. Ontologies of life: From thermodynamics to teleonomics. Comment on "Answering Schrödinger's question: A free-energy formulation" by Maxwell James Désormeau Ramstead et al.

    NASA Astrophysics Data System (ADS)

    Kirmayer, Laurence J.

    2018-03-01

    In a far-reaching essay, Ramstead and colleagues [1] offer an answer to Schrödinger's question "What is life?" [2] framed in terms of a thermodynamic/information-theoretic free energy principle. In short, "all biological systems instantiate a hierarchical generative model of the world that implicitly minimizes its internal entropy by minimizing free energy" [1]. This model generates dynamic stability, that is, a recurrent set of states that constitute a dynamic attractor. This aspect of their answer has much in common with earlier thermodynamic approaches, like that of Prigogine [3], and with the metabolic self-organization central to Maturana and Varela's notion of autopoiesis [4]. It contrasts with explanations of life that emphasize the mechanics of self-replication [5] or autocatalysis [6,7]. In this approach, there is something gained and something lost. Gained is an explanation and corresponding formalism of great generality. Lost (or at least obscured) is a way to understand the "teleonomics" [8], goal-directedness, purposiveness, or agency of living systems, arguably precisely what makes us ascribe the quality of "being alive" to an organism. Free energy minimization may be a necessary condition for life, but it is not sufficient to characterize its goals, which vary widely and, at least at the level of individual organisms or populations, clearly can run counter to this principle for long stretches of time.

  2. Citizen science contributes to our knowledge of invasive plant species distributions

    USGS Publications Warehouse

    Crall, Alycia W.; Jarnevich, Catherine S.; Young, Nicholas E.; Panke, Brendon; Renz, Mark; Stohlgren, Thomas

    2015-01-01

    Citizen science is commonly cited as an effective approach to expand the scale of invasive species data collection and monitoring. However, researchers often hesitate to use these data due to concerns over data quality. In light of recent research on the quality of data collected by volunteers, we aimed to demonstrate the extent to which citizen science data can increase sampling coverage, fill gaps in species distributions, and improve habitat suitability models compared to professionally generated data sets used in isolation. We combined data sets from professionals and volunteers for five invasive plant species (Alliaria petiolata, Berberis thunbergii, Cirsium palustre, Pastinaca sativa, Polygonum cuspidatum) in portions of Wisconsin. Volunteers sampled counties not sampled by professionals for three of the five species. Volunteers also added presence locations within counties not included in professional data sets, especially in southern portions of the state where professional monitoring activities had been minimal. Volunteers made a significant contribution to the known distribution, environmental gradients sampled, and the habitat suitability of P. cuspidatum. Models generated with professional data sets for the other four species performed reasonably well according to AUC values (>0.76). The addition of volunteer data did not greatly change model performance (AUC > 0.79) but did change the suitability surface generated by the models, making them more realistic. Our findings underscore the need to merge data from multiple sources to improve knowledge of current species distributions, and to predict their movement under present and future environmental conditions. The efficiency and success of these approaches require that monitoring efforts involve multiple stakeholders in continuous collaboration via established monitoring networks.

  3. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the minimum amount of time. Given a list of numbers, try to find one or more solutions in which, if each number is compressed by use of the modulo function by some value, then a unique value is generated.
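
    A toy version of the first family of subalgorithms is sketched below; the search ranges, the key set, and the "smallest table wins" rule are assumptions for illustration, not the NASA implementation.

      # Toy sketch in the spirit of the first family of subalgorithms (assumed search
      # ranges): find a right-shift and mask width such that (key >> shift) & mask maps
      # a static set of keys to unique slots, then keep the solution with the smallest
      # table so that membership tests run in constant time without collisions.
      def synthesize_hash(keys, max_shift=32, max_bits=16):
          best = None
          for bits in range(1, max_bits + 1):          # prefer the most compact table
              mask = (1 << bits) - 1
              for shift in range(max_shift + 1):
                  slots = {(k >> shift) & mask for k in keys}
                  if len(slots) == len(keys):          # injective: no collisions
                      best = (bits, shift, mask)
                      break
              if best is not None:
                  break
          return best

      keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081, 0x92A3]  # assumed static key set
      bits, shift, mask = synthesize_hash(keys)
      lookup = {(k >> shift) & mask: k for k in keys}  # constant-time membership table
      # membership test: lookup.get((q >> shift) & mask) == q
      print(bits, shift, mask, lookup)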

  4. Method for Correcting Control Surface Angle Measurements in Single Viewpoint Photogrammetry

    NASA Technical Reports Server (NTRS)

    Burner, Alpheus W. (Inventor); Barrows, Danny A. (Inventor)

    2006-01-01

    A method of determining a corrected control surface angle for use in single viewpoint photogrammetry to correct control surface angle measurements affected by wing bending. First and second visual targets are spaced apart from one another on a control surface of an aircraft wing. The targets are positioned at a semispan distance along the aircraft wing. A reference target separation distance is determined using single viewpoint photogrammetry for a "wind off" condition. An apparent target separation distance is then computed for the "wind on" condition. The difference between the reference and apparent target separation distances is minimized by recomputing the single viewpoint photogrammetric solution for incrementally changed values of target semispan distances. A final single viewpoint photogrammetric solution is then generated that uses the corrected semispan distance that produced the minimized difference between the reference and apparent target separation distances. The final single viewpoint photogrammetric solution set is used to determine the corrected control surface angle.

  5. An Automated and Minimally Invasive Tool for Generating Autologous Viable Epidermal Micrografts

    PubMed Central

    Osborne, Sandra N.; Schmidt, Marisa A.; Harper, John R.

    2016-01-01

    ABSTRACT OBJECTIVE: A new epidermal harvesting tool (CelluTome; Kinetic Concepts, Inc, San Antonio, Texas) created epidermal micrografts with minimal donor site damage, increased expansion ratios, and did not require the use of an operating room. The tool, which applies both heat and suction concurrently to normal skin, was used to produce epidermal micrografts that were assessed for uniform viability, donor-site healing, and discomfort during and after the epidermal harvesting procedure. DESIGN: This study was a prospective, noncomparative institutional review board–approved healthy human study to assess epidermal graft viability, donor-site morbidity, and patient experience. SETTING: These studies were conducted at the multispecialty research facility, Clinical Trials of Texas, Inc, San Antonio. PATIENTS: The participants were 15 healthy human volunteers. RESULTS: The average viability of epidermal micrografts was 99.5%. Skin assessment determined that 76% to 100% of the area of all donor sites was the same in appearance as the surrounding skin within 14 days after epidermal harvest. A mean pain of 1.3 (on a scale of 1 to 5) was reported throughout the harvesting process. CONCLUSIONS: Use of this automated, minimally invasive harvesting system provided a simple, low-cost method of producing uniformly viable autologous epidermal micrografts with minimal patient discomfort and superficial donor-site wound healing within 2 weeks. PMID:26765157

  6. A Hybrid Parachute Simulation Environment for the Orion Parachute Development Project

    NASA Technical Reports Server (NTRS)

    Moore, James W.

    2011-01-01

    A parachute simulation environment (PSE) has been developed that aims to take advantage of legacy parachute simulation codes and modern object-oriented programming techniques. This hybrid simulation environment provides the parachute analyst with a natural and intuitive way to construct simulation tasks while preserving the pedigree and authority of established parachute simulations. NASA currently employs four simulation tools for developing and analyzing air-drop tests performed by the CEV Parachute Assembly System (CPAS) Project. These tools were developed at different times, in different languages, and with different capabilities in mind. As a result, each tool has a distinct interface and set of inputs and outputs. However, regardless of the simulation code that is most appropriate for the type of test, engineers typically perform similar tasks for each drop test such as prediction of loads, assessment of altitude, and sequencing of disreefs or cut-aways. An object-oriented approach to simulation configuration allows the analyst to choose models of real physical test articles (parachutes, vehicles, etc.) and sequence them to achieve the desired test conditions. Once configured, these objects are translated into traditional input lists and processed by the legacy simulation codes. This approach minimizes the number of sim inputs that the engineer must track while configuring an input file. An object oriented approach to simulation output allows a common set of post-processing functions to perform routine tasks such as plotting and timeline generation with minimal sensitivity to the simulation that generated the data. Flight test data may also be translated into the common output class to simplify test reconstruction and analysis.

  7. System for solving diagnosis and hitting set problems

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh (Inventor); Fijany, Amir (Inventor)

    2007-01-01

    The diagnosis problem arises when a system's actual behavior contradicts the expected behavior, thereby exhibiting symptoms (a collection of conflict sets). System diagnosis is then the task of identifying faulty components that are responsible for anomalous behavior. To solve the diagnosis problem, the present invention describes a method for finding the minimal set of faulty components (minimal diagnosis set) that explain the conflict sets. The method includes acts of creating a matrix of the collection of conflict sets, and then creating nodes from the matrix such that each node is a node in a search tree. A determination is made as to whether each node is a leaf node or has any children nodes. If any given node has children nodes, then the node is split until all nodes are leaf nodes. Information gathered from the leaf nodes is used to determine the minimal diagnosis set.
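
    For readers unfamiliar with the underlying problem, the following minimal sketch (in Python, with hypothetical names; a brute-force illustration, not the patented matrix/search-tree method) computes the smallest hitting sets of a collection of conflict sets, which is what a minimal diagnosis set amounts to.

      # Minimal hitting-set sketch (assumption: plain Python illustration only).
      from itertools import combinations

      def minimal_hitting_sets(conflict_sets):
          """Return all smallest sets of components that intersect every conflict set."""
          components = sorted(set().union(*conflict_sets))
          for size in range(1, len(components) + 1):
              hits = [set(c) for c in combinations(components, size)
                      if all(set(c) & s for s in conflict_sets)]
              if hits:
                  return hits        # smallest cardinality reached: these explain all conflicts
          return []

      # Example: three conflict sets over components A-D.
      conflicts = [{"A", "B"}, {"B", "C"}, {"A", "D"}]
      print(minimal_hitting_sets(conflicts))   # [{'A', 'B'}, {'A', 'C'}, {'B', 'D'}]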

  8. A new method for producing automated seismic bulletins: Probabilistic event detection, association, and location

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draelos, Timothy J.; Ballard, Sanford; Young, Christopher J.

    Given a set of observations within a specified time window, a fitness value is calculated at each grid node by summing station-specific conditional fitness values. Assuming each observation was generated by a refracted P wave, these values are proportional to the conditional probabilities that each observation was generated by a seismic event at the grid node. The node with highest fitness value is accepted as a hypothetical event location, subject to some minimal fitness value, and all arrivals within a longer time window consistent with that event are associated with it. During the association step, a variety of different phases are considered. In addition, once associated with an event, an arrival is removed from further consideration. While unassociated arrivals remain, the search for other events is repeated until none are identified.

  9. Implementation of pattern generation algorithm in forming Gilmore and Gomory model for two dimensional cutting stock problem

    NASA Astrophysics Data System (ADS)

    Octarina, Sisca; Radiana, Mutia; Bangun, Putra B. J.

    2018-01-01

    The two-dimensional cutting stock problem (CSP) is the problem of determining cutting patterns from a set of stock pieces of standard length and width so as to fulfill the demand for items. Cutting patterns are chosen to minimize the amount of stock used. This research implemented a pattern generation algorithm to formulate the Gilmore and Gomory model of the two-dimensional CSP. The constraints of the Gilmore and Gomory model ensure that the strips cut in the first stage are used in the second stage. The Branch and Cut method was used to obtain the optimal solution. The results show that many pattern combinations arise when the optimal first-stage cutting patterns are combined with those of the second stage.
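
    The pattern generation step can be illustrated with a one-dimensional simplification (an assumption for clarity; the paper treats the two-stage, two-dimensional case): enumerate every feasible pattern of item counts that fits a stock piece, and use those patterns as the columns of the Gilmore and Gomory model.

      # One-dimensional pattern enumeration sketch (assumption: Python illustration only).
      def generate_patterns(stock_length, item_lengths):
          """Enumerate all feasible patterns: counts of each item that fit within the stock."""
          patterns = []

          def extend(pattern, remaining, index):
              if index == len(item_lengths):
                  patterns.append(tuple(pattern))
                  return
              max_count = remaining // item_lengths[index]
              for count in range(max_count, -1, -1):
                  extend(pattern + [count], remaining - count * item_lengths[index], index + 1)

          extend([], stock_length, 0)
          return patterns

      # Stock of length 10, items of length 3 and 4: patterns like (3, 0), (2, 1), ...
      print(generate_patterns(10, [3, 4]))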

  10. Image Navigation and Registration Performance Assessment Evaluation Tools for GOES-R ABI and GLM

    NASA Technical Reports Server (NTRS)

    Houchin, Scott; Porter, Brian; Graybill, Justin; Slingerland, Philip

    2017-01-01

    The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. This paper describes the software design and implementation of IPATS and provides preliminary test results.

  11. Global embedding of fibre inflation models

    NASA Astrophysics Data System (ADS)

    Cicoli, Michele; Muia, Francesco; Shukla, Pramod

    2016-11-01

    We present concrete embeddings of fibre inflation models in globally consistent type IIB Calabi-Yau orientifolds with closed string moduli stabilisation. After performing a systematic search through the existing list of toric Calabi-Yau manifolds, we find several examples that reproduce the minimal setup to embed fibre inflation models. This involves Calabi-Yau manifolds with h^{1,1} = 3 which are K3 fibrations over a ℙ^1 base with an additional shrinkable rigid divisor. We then provide different consistent choices of the underlying brane set-up which generate a non-perturbative superpotential suitable for moduli stabilisation and string loop corrections with the correct form to drive inflation. For each Calabi-Yau orientifold setting, we also compute the effect of higher derivative contributions and study their influence on the inflationary dynamics.

  12. Aldehyde levels in e-cigarette aerosol: Findings from a replication study and from use of a new-generation device.

    PubMed

    Farsalinos, Konstantinos E; Kistler, Kurt A; Pennington, Alexander; Spyrou, Alketa; Kouretas, Dimitris; Gillman, Gene

    2018-01-01

    A recent study identified high aldehyde emissions from e-cigarettes (ECs) which, when converted to a reasonable daily human EC liquid consumption of 5 g/day, implied formaldehyde exposure equivalent to 604-3257 tobacco cigarettes. We replicated this study and also tested a new-generation atomizer under verified realistic (no dry puff) conditions. CE4v2 atomizers were tested at 3.8 V and 4.8 V, and a Nautilus Mini atomizer was tested at 9.0 W and 13.5 W. All measurements were performed in a laboratory ISO-accredited for EC aerosol collection and aldehyde measurements. CE4v2 generated dry puffs at both voltage settings. Formaldehyde levels were >10-fold lower, acetaldehyde 6-9-fold lower and acrolein 16-26-fold lower than reported in the previous study. Nautilus Mini did not generate dry puffs, and minimal aldehydes were emitted despite >100% higher aerosol production per puff compared to CE4v2 (formaldehyde: 16.7 and 16.5 μg/g; acetaldehyde: 9.6 and 10.3 μg/g; acrolein: 8.6 and 11.7 μg/g at 9.0 W and 13.5 W, respectively). EC liquid consumption of 5 g/day reduces aldehyde exposure by 94.4-99.8% compared to smoking 20 tobacco cigarettes. Checking for dry puffs is essential for EC emission testing. Under realistic conditions, new-generation ECs emit minimal aldehydes/g liquid at both low and high power. Validated methods should be used when analyzing EC aerosol.

  13. CUTSETS - MINIMAL CUT SET CALCULATION FOR DIGRAPH AND FAULT TREE RELIABILITY MODELS

    NASA Technical Reports Server (NTRS)

    Iverson, D. L.

    1994-01-01

    Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Fault trees must have a tree structure and do not allow cycles or loops in the graph. Digraphs allow any pattern of interconnection, including loops, between nodes in the graph. A common operation performed on digraph and fault tree models is the calculation of minimal cut sets. A cut set is a set of basic failures that could cause a given target failure event to occur. A minimal cut set for a target event node in a fault tree or digraph is any cut set for the node with the property that if any one of the failures in the set is removed, the occurrence of the other failures in the set will not cause the target failure event. CUTSETS will identify all the minimal cut sets for a given node. The CUTSETS package contains programs that solve for minimal cut sets of fault trees and digraphs using object-oriented programming techniques. These cut set codes can be used to solve graph models for reliability analysis and identify potential single-point failures in a modeled system. The fault tree minimal cut set code reads in a fault tree model input file with each node listed in a text format. In the input file the user specifies a top node of the fault tree and a maximum cut set size to be calculated. CUTSETS will find minimal sets of basic events which would cause the failure at the output of a given fault tree gate. The program can find all the minimal cut sets of a node, or minimal cut sets up to a specified size. The algorithm performs a recursive top-down parse of the fault tree, starting at the specified top node, and combines the cut sets of each child node into sets of basic event failures that would cause the failure event at the output of that gate. Minimal cut set solutions can be found for all nodes in the fault tree or just for the top node. The digraph cut set code uses the same techniques as the fault tree cut set code, except it includes all upstream digraph nodes in the cut sets for a given node and checks for cycles in the digraph during the solution process. CUTSETS solves for specified nodes and will not automatically solve for all upstream digraph nodes. The cut sets will be output as a text file. CUTSETS includes a utility program that will convert the popular COD format digraph model description files into text input files suitable for use with the CUTSETS programs. FEAT (MSC-21873) and FIRM (MSC-21860), available from COSMIC, are examples of programs that produce COD format digraph model description files that may be converted for use with the CUTSETS programs. CUTSETS is written in C to be machine independent. It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. CUTSETS is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette. It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is included on the distribution medium.
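
    The recursive top-down combination of child cut sets, followed by minimization, can be sketched compactly; the snippet below is a plain-Python illustration under our own assumptions, not the C code distributed with CUTSETS.

      # Compact fault-tree cut-set sketch (assumption: Python illustration only).
      def cut_sets(node, tree):
          """tree maps gate name -> (gate_type, [children]); anything else is a basic event."""
          if node not in tree:
              return [{node}]                        # basic event: itself is a cut set
          gate, children = tree[node]
          child_sets = [cut_sets(c, tree) for c in children]
          if gate == "OR":                           # union of the children's cut sets
              combined = [s for sets in child_sets for s in sets]
          else:                                      # AND: cross-combine one set per child
              combined = [set()]
              for sets in child_sets:
                  combined = [a | b for a in combined for b in sets]
          # Keep only minimal cut sets: drop any set that strictly contains another.
          return [s for s in combined if not any(o < s for o in combined)]

      tree = {"TOP": ("AND", ["G1", "C"]), "G1": ("OR", ["A", "B"])}
      print(cut_sets("TOP", tree))                   # [{'A', 'C'}, {'B', 'C'}]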

  14. Effective Techniques for Augmenting Heat Transfer: An Application of Entropy Generation Minimization Principles.

    DTIC Science & Technology

    1980-12-01

    Keywords: augmentation techniques, entropy generation, irreversibility, exergy. The report surveys heat transfer augmentation techniques, including internally finned and internally roughened tubes, and adopts irreversibility and entropy generation as the fundamental criterion for evaluating and, eventually, minimizing the waste of usable energy (exergy).

  15. Using Chinese Version of MYMOP in Chinese Medicine Evaluation: Validity, Responsiveness and Minimally Important Change

    PubMed Central

    2010-01-01

    Background Measure Yourself Medical Outcome Profile (MYMOP) is a patient-generated outcome instrument applicable in the evaluation of both allopathic and complementary medicine treatment. This study aims to adapt MYMOP into Chinese, and to assess its validity, responsiveness and minimally important change values in a sample of patients using Chinese medicine (CM) services. Methods A Chinese version of MYMOP (CMYMOP) was developed by a forward-backward-forward translation strategy, expert panel assessment and pilot testing amongst patients. 272 patients aged 18 or above with subjective symptoms in the past 2 weeks were recruited at a CM clinic, and were invited to complete a set of questionnaires containing CMYMOP and SF-36. Follow-ups were performed at the 2nd and 4th weeks after consultation, using the same set of questionnaires plus a global rating of change question. Criterion validity of CMYMOP was assessed by its correlation with SF-36 at baseline, and responsiveness was evaluated by calculating the Cohen effect size (ES) of change at the two follow-ups. Minimally important difference (MID) values were estimated via an anchor-based method, while minimally detectable difference (MDC) figures were calculated by a distribution-based method. Results Criterion validity of CMYMOP was demonstrated by the negative correlation between CMYMOP Profile scores and all SF-36 domain and summary scores at baseline. For responsiveness between baseline and the 4th-week follow-up, ES of CMYMOP Symptom 1, Activity and Profile reached the moderate change threshold (ES>0.5), while Symptom 2 and Wellbeing reached the weak change threshold (ES>0.2). None of the SF-36 scores reached the moderate change threshold, implying CMYMOP's stronger responsiveness in the CM setting. At the 2nd-week follow-up, MID values for the Symptom 1, Symptom 2, Wellbeing and Profile items were 0.894, 0.580, 0.263 and 0.516 respectively. For the Activity item, the MDC figure of 0.808 was adopted to estimate MID. Conclusions The findings support the validity and responsiveness of CMYMOP for capturing patient-centred clinical changes within 2 weeks in a CM clinical setting. Further research is warranted (1) to estimate the Activity item MID, (2) to assess the test-retest reliability of CMYMOP, and (3) to perform further MID evaluation using multiple, item-specific anchor questions. PMID:20920284

  16. Towards sets of hazardous waste indicators. Essential tools for modern industrial management.

    PubMed

    Peterson, Peter J; Granados, Asa

    2002-01-01

    Decision-makers require useful tools, such as indicators, to help them make environmentally sound decisions leading to effective management of hazardous wastes. Four hazardous waste indicators are being tested for such a purpose by several countries within the Sustainable Development Indicator Programme of the United Nations Commission for Sustainable Development. However, these indicators only address the 'down-stream' end-of-pipe industrial situation. More creative thinking is clearly needed to develop a wider range of indicators that not only reflects all aspects of industrial production that generates hazardous waste but considers socio-economic implications of the waste as well. Sets of useful and innovative indicators are proposed that could be applied to the emerging paradigm shift away from conventional end-of-pipe management actions and towards preventive strategies that are being increasingly adopted by industry often in association with local and national governments. A methodological and conceptual framework for the development of a core-set of hazardous waste indicators has been developed. Some of the indicator sets outlined quantify preventive waste management strategies (including indicators for cleaner production, hazardous waste reduction/minimization and life cycle analysis), whilst other sets address proactive strategies (including changes in production and consumption patterns, eco-efficiency, eco-intensity and resource productivity). Indicators for quantifying transport of hazardous wastes are also described. It was concluded that a number of the indicators proposed could now be usefully implemented as management tools using existing industrial and economic data. As cleaner production technologies and waste minimization approaches are more widely deployed, and industry integrates environmental concerns at all levels of decision-making, it is expected that the necessary data for construction of the remaining indicators will soon become available.

  17. Combined Simulated Annealing and Genetic Algorithm Approach to Bus Network Design

    NASA Astrophysics Data System (ADS)

    Liu, Li; Olszewski, Piotr; Goh, Pong-Chai

    A new method - combined simulated annealing (SA) and genetic algorithm (GA) approach is proposed to solve the problem of bus route design and frequency setting for a given road network with fixed bus stop locations and fixed travel demand. The method involves two steps: a set of candidate routes is generated first and then the best subset of these routes is selected by the combined SA and GA procedure. SA is the main process to search for a better solution to minimize the total system cost, comprising user and operator costs. GA is used as a sub-process to generate new solutions. Bus demand assignment on two alternative paths is performed at the solution evaluation stage. The method was implemented on four theoretical grid networks of different size and a benchmark network. Several GA operators (crossover and mutation) were utilized and tested for their effectiveness. The results show that the proposed method can efficiently converge to the optimal solution on a small network but computation time increases significantly with network size. The method can also be used for other transport operation management problems.
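
    A bare-bones rendering of the combined procedure is sketched below (Python; the cost function is a toy stand-in for the user-plus-operator cost with demand assignment described in the abstract): GA-style crossover and mutation propose a new subset of candidate routes, and an SA acceptance rule decides whether to keep it.

      # Combined SA/GA sketch (assumption: toy cost, not the paper's demand-assignment model).
      import math, random

      def total_cost(subset, route_costs, coverage_penalty=50.0):
          """Toy stand-in: operator cost of chosen routes plus a penalty when few are chosen."""
          chosen = [c for c, keep in zip(route_costs, subset) if keep]
          return sum(chosen) + coverage_penalty / (1 + len(chosen))

      def sa_with_ga(route_costs, iterations=2000, temp=5.0, cooling=0.995):
          n = len(route_costs)
          current = [random.random() < 0.5 for _ in range(n)]   # bit-string: route chosen or not
          best = current[:]
          for _ in range(iterations):
              # GA sub-process: one-point crossover with the incumbent best, then one mutation.
              point = random.randrange(n)
              child = best[:point] + current[point:]
              flip = random.randrange(n)
              child[flip] = not child[flip]
              # SA acceptance rule applied to the candidate produced by the GA operators.
              delta = total_cost(child, route_costs) - total_cost(current, route_costs)
              if delta < 0 or random.random() < math.exp(-delta / max(temp, 1e-9)):
                  current = child
                  if total_cost(current, route_costs) < total_cost(best, route_costs):
                      best = current[:]
              temp *= cooling
          return best

      print(sa_with_ga([10.0, 12.0, 8.0, 15.0, 9.0]))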

  18. BIPAD: A web server for modeling bipartite sequence elements

    PubMed Central

    Bi, Chengpeng; Rogan, Peter K

    2006-01-01

    Background Many dimeric protein complexes bind cooperatively to families of bipartite nucleic acid sequence elements, which consist of pairs of conserved half-site sequences separated by intervening distances that vary among individual sites. Results We introduce the Bipad Server [1], a web interface to predict sequence elements embedded within unaligned sequences. Either a bipartite model, consisting of a pair of one-block position weight matrices (PWM's) with a gap distribution, or a single PWM matrix for contiguous single block motifs may be produced. The Bipad program performs multiple local alignment by entropy minimization and cyclic refinement using a stochastic greedy search strategy. The best models are refined by maximizing incremental information contents among a set of potential models with varying half site and gap lengths. Conclusion The web service generates information positional weight matrices, identifies binding site motifs, graphically represents the set of discovered elements as a sequence logo, and depicts the gap distribution as a histogram. Server performance was evaluated by generating a collection of bipartite models for distinct DNA binding proteins. PMID:16503993

  19. Exploring metabolic pathways in genome-scale networks via generating flux modes.

    PubMed

    Rezola, A; de Figueiredo, L F; Brock, M; Pey, J; Podhorski, A; Wittmann, C; Schuster, S; Bockmayr, A; Planes, F J

    2011-02-15

    The reconstruction of metabolic networks at the genome scale has allowed the analysis of metabolic pathways at an unprecedented level of complexity. Elementary flux modes (EFMs) are an appropriate concept for such analysis. However, their number grows in a combinatorial fashion as the size of the metabolic network increases, which renders the application of EFMs approach to large metabolic networks difficult. Novel methods are expected to deal with such complexity. In this article, we present a novel optimization-based method for determining a minimal generating set of EFMs, i.e. a convex basis. We show that a subset of elements of this convex basis can be effectively computed even in large metabolic networks. Our method was applied to examine the structure of pathways producing lysine in Escherichia coli. We obtained a more varied and informative set of pathways in comparison with existing methods. In addition, an alternative pathway to produce lysine was identified using a detour via propionyl-CoA, which shows the predictive power of our novel approach. The source code in C++ is available upon request.

  20. In Vitro Surfactant and Perfluorocarbon Aerosol Deposition in a Neonatal Physical Model of the Upper Conducting Airways

    PubMed Central

    Goikoetxea, Estibalitz; Murgia, Xabier; Serna-Grande, Pablo; Valls-i-Soler, Adolf; Rey-Santano, Carmen; Rivas, Alejandro; Antón, Raúl; Basterretxea, Francisco J.; Miñambres, Lorena; Méndez, Estíbaliz; Lopez-Arraiza, Alberto; Larrabe-Barrena, Juan Luis; Gomez-Solaetxe, Miguel Angel

    2014-01-01

    Objective Aerosol delivery holds potential to release surfactant or perfluorocarbon (PFC) to the lungs of neonates with respiratory distress syndrome with minimal airway manipulation. Nevertheless, lung deposition in neonates tends to be very low due to extremely low lung volumes, narrow airways and high respiratory rates. In the present study, the feasibility of enhancing lung deposition by intracorporeal delivery of aerosols was investigated using a physical model of neonatal conducting airways. Methods The main characteristics of the surfactant and PFC aerosols produced by a nebulization system, including the distal air pressure and air flow rate, liquid flow rate and mass median aerodynamic diameter (MMAD), were measured at different driving pressures (4–7 bar). Then, a three-dimensional model of the upper conducting airways of a neonate was manufactured by rapid prototyping and a deposition study was conducted. Results The nebulization system produced relatively large amounts of aerosol ranging between 0.3±0.0 ml/min for surfactant at a driving pressure of 4 bar, and 2.0±0.1 ml/min for distilled water (H2Od) at 6 bar, with MMADs between 2.61±0.1 µm for PFD at 7 bar and 10.18±0.4 µm for FC-75 at 6 bar. The deposition study showed that for surfactant and H2Od aerosols, the highest percentage of the aerosolized mass (∼65%) was collected beyond the third generation of branching in the airway model. The use of this delivery system in combination with continuous positive airway pressure set at 5 cmH2O only increased total airway pressure by 1.59 cmH2O at the highest driving pressure (7 bar). Conclusion This aerosol generating system has the potential to deliver relatively large amounts of surfactant and PFC beyond the third generation of branching in a neonatal airway model with minimal alteration of pre-set respiratory support. PMID:25211475

  1. Learning from physics-based earthquake simulators: a minimal approach

    NASA Astrophysics Data System (ADS)

    Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele

    2017-04-01

    Physics-based earthquake simulators aim to generate synthetic seismic catalogs of arbitrary length, accounting for fault interaction, elastic rebound, realistic fault networks, and some simple earthquake nucleation process such as rate-and-state friction. Through comparison of synthetic and real catalogs, seismologists can gain insight into the earthquake occurrence process. Moreover, earthquake simulators can be used to infer some aspects of the statistical behavior of earthquakes within the simulated region by analyzing timescales not accessible through observations. The development of earthquake simulators is commonly led by the approach "the more physics, the better", pushing seismologists towards ever more earth-like simulators. However, despite its immediate attractiveness, we argue that this kind of approach makes it more and more difficult to understand which physical parameters are really relevant for describing the features of the seismic catalog in which we are interested. For this reason, here we take the opposite, minimal approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple model may be more informative than a complex one for some specific scientific objectives, because it is more understandable. The model has three main components: the first is a realistic tectonic setting, i.e., a fault dataset of California; the other two are quantitative laws for earthquake generation on each single fault and the Coulomb Failure Function for modeling fault interaction. The final goal of this work is twofold. On one hand, we aim to identify the minimum set of physical ingredients that can satisfactorily reproduce the features of the real seismic catalog, such as short-term seismic clustering, and to investigate the hypothetical long-term behavior and fault synchronization. On the other hand, we want to investigate the limits of predictability of the model itself.

  2. Determination of real machine-tool settings and minimization of real surface deviation by computerized inspection

    NASA Technical Reports Server (NTRS)

    Litvin, Faydor L.; Kuan, Chihping; Zhang, YI

    1991-01-01

    A numerical method is developed for the minimization of deviations of real tooth surfaces from the theoretical ones. The deviations are caused by errors of manufacturing, errors in the installation of machine-tool settings and distortion of surfaces by heat-treatment. The deviations are determined by coordinate measurements of gear tooth surfaces. The minimization of deviations is based on the proper correction of initially applied machine-tool settings. The contents of the accomplished research project cover the following topics: (1) Descriptions of the principle of coordinate measurements of gear tooth surfaces; (2) Deviation of theoretical tooth surfaces (with examples of surfaces of hypoid gears and references for spiral bevel gears); (3) Determination of the reference point and the grid; (4) Determination of the deviations of real tooth surfaces at the points of the grid; and (5) Determination of required corrections of machine-tool settings for minimization of deviations. The procedure for minimization of deviations is based on numerical solution of an overdetermined system of n linear equations in m unknowns (m much less than n), where n is the number of measurement points and m is the number of parameters of applied machine-tool settings to be corrected. The developed approach is illustrated with numerical examples.
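
    Numerically, the correction step amounts to a least-squares solution of an overdetermined linear system; a minimal sketch, assuming NumPy and a made-up sensitivity matrix in place of the real gear-geometry data, is given below.

      # Least-squares correction sketch (assumption: synthetic data, not real gear measurements).
      import numpy as np

      n, m = 50, 3                     # n measured grid points, m machine-tool settings
      rng = np.random.default_rng(0)
      J = rng.normal(size=(n, m))      # sensitivity of each deviation to each setting (made up)
      true_correction = np.array([0.02, -0.01, 0.005])
      deviations = J @ true_correction + rng.normal(scale=1e-4, size=n)  # measured deviations

      # Least-squares solution of J * delta = deviations  (m much less than n).
      delta, residuals, rank, _ = np.linalg.lstsq(J, deviations, rcond=None)
      print(delta)                     # corrections to apply to the machine-tool settings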

  3. Computer method for identification of boiler transfer functions

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1971-01-01

    An iterative computer method is described for identifying boiler transfer functions using frequency response data. An objective penalized performance measure and a nonlinear minimization technique are used to cause the locus of points generated by a transfer function to resemble the locus of points obtained from frequency response measurements. Different transfer functions can be tried until a satisfactory empirical transfer function to the system is found. To illustrate the method, some examples and some results from a study of a set of data consisting of measurements of the inlet impedance of a single tube forced flow boiler with inserts are given.
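
    The fitting idea can be illustrated with a much simpler model than a boiler; the sketch below (our assumption: SciPy, a first-order-lag transfer function, and synthetic frequency response data) adjusts the model parameters so that the model locus matches the measured locus in a least-squares sense.

      # Transfer-function fitting sketch (assumption: first-order model and synthetic data).
      import numpy as np
      from scipy.optimize import least_squares

      def model(params, w):
          k, tau = params
          return k / (1 + 1j * w * tau)          # first-order transfer function K/(1 + j*w*tau)

      w = np.logspace(-1, 2, 40)                 # measurement frequencies (rad/s)
      measured = model([2.0, 0.5], w) + 0.01 * (np.random.randn(40) + 1j * np.random.randn(40))

      def residuals(params):
          err = model(params, w) - measured
          return np.concatenate([err.real, err.imag])   # stack real/imag parts for the solver

      fit = least_squares(residuals, x0=[1.0, 1.0])
      print(fit.x)                               # recovered gain and time constant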

  4. Damping treatment for an aircraft hard-mounted antenna system in a vibroacoustic environment

    NASA Astrophysics Data System (ADS)

    Tate, Ralph E.; Rupert, Carl L.

    1990-10-01

    This paper discusses the design, analysis, and testing of 'add-on' damping treatments for the Band 6, 7, 8 radar antenna packages that are hard-mounted on the B-1B Aft Equipment Bay (AEB) where equipment failures are routinely occurring during take-off maneuvers at maximum throttle settings. This damage results from the intense vibroacoustical environment generated by the three-stage afterburning engines. Failure rates have been sufficiently high to warrant a 'quick fix' involving damping treatments that can be installed in a short time with minimal modification to the existing structure.

  5. Waste minimization for commercial radioactive materials users generating low-level radioactive waste. Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, D.K.; Gitt, M.; Williams, G.A.

    1991-07-01

    The objective of this document is to provide a resource for all states and compact regions interested in promoting the minimization of low-level radioactive waste (LLW). This project was initiated by the Commonwealth of Massachusetts, and Massachusetts waste streams have been used as examples; however, the methods of analysis presented here are applicable to similar waste streams generated elsewhere. This document is a guide for states/compact regions to use in developing a system to evaluate and prioritize various waste minimization techniques in order to encourage individual radioactive materials users (LLW generators) to consider these techniques in their own independent evaluations. This review discusses the application of specific waste minimization techniques to waste streams characteristic of three categories of radioactive materials users: (1) industrial operations using radioactive materials in the manufacture of commercial products, (2) health care institutions, including hospitals and clinics, and (3) educational and research institutions. Massachusetts waste stream characterization data from key radioactive materials users in each category are used to illustrate the applicability of various minimization techniques. The utility group is not included because extensive information specific to this category of LLW generators is available in the literature.

  6. Waste minimization for commercial radioactive materials users generating low-level radioactive waste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, D.K.; Gitt, M.; Williams, G.A.

    1991-07-01

    The objective of this document is to provide a resource for all states and compact regions interested in promoting the minimization of low-level radioactive waste (LLW). This project was initiated by the Commonwealth of Massachusetts, and Massachusetts waste streams have been used as examples; however, the methods of analysis presented here are applicable to similar waste streams generated elsewhere. This document is a guide for states/compact regions to use in developing a system to evaluate and prioritize various waste minimization techniques in order to encourage individual radioactive materials users (LLW generators) to consider these techniques in their own independent evaluations. This review discusses the application of specific waste minimization techniques to waste streams characteristic of three categories of radioactive materials users: (1) industrial operations using radioactive materials in the manufacture of commercial products, (2) health care institutions, including hospitals and clinics, and (3) educational and research institutions. Massachusetts waste stream characterization data from key radioactive materials users in each category are used to illustrate the applicability of various minimization techniques. The utility group is not included because extensive information specific to this category of LLW generators is available in the literature.

  7. Adjustable electronic load-alarm relay

    DOEpatents

    Mason, Charles H.; Sitton, Roy S.

    1976-01-01

    This invention is an improved electronic alarm relay for monitoring the current drawn by an AC motor or other electrical load. The circuit is designed to measure the load with high accuracy and to have excellent alarm repeatability. Chattering and arcing of the relay contacts are minimal. The operator can adjust the set point easily and can re-set both the high and the low alarm points by means of one simple adjustment. The relay includes means for generating a signal voltage proportional to the motor current. In a preferred form of the invention a first operational amplifier is provided to generate a first constant reference voltage which is higher than a preselected value of the signal voltage. A second operational amplifier is provided to generate a second constant reference voltage which is lower than the aforementioned preselected value of the signal voltage. A circuit comprising a first resistor serially connected to a second resistor is connected across the outputs of the first and second amplifiers, and the junction of the two resistors is connected to the inverting terminal of the second amplifier. Means are provided to compare the aforementioned signal voltage with both the first and second reference voltages and to actuate an alarm if the signal voltage is higher than the first reference voltage or lower than the second reference voltage.

  8. A Bidding Methodology by Nash Equilibrium for Finite Generators Participating in Imperfect Electricity Markets

    NASA Astrophysics Data System (ADS)

    Satyaramesh, P. V.

    2014-01-01

    This paper presents an application of finite n-person non-cooperative game theory for analyzing the bidding strategies of generators in a deregulated energy marketplace with Pool Bilateral contracts so as to maximize their net profits. A new methodology for building bidding strategies of generators participating in an oligopoly electricity market is proposed. It is assumed that each generator bids a supply function. The methodology finds the coefficients of each generator's supply function in order to maximize its benefit in an environment of competing rival bidders. A natural choice for developing strategies is the Nash Equilibrium (NE) model incorporating mixed strategies for solving the bidding problem of the electricity market. Associated optimal profits are evaluated for combinations of the generators' pure bidding strategies, and a payoff matrix is constructed. The optimal payoff is calculated using the NE. An attempt has also been made to minimize the gap between the optimal payoff and the payoff obtained by a possible mixed-strategy combination. The algorithm is coded in MATLAB. A numerical example is used to illustrate the essential features of the approach, and the results are shown to be optimal.

  9. A depth-first search algorithm to compute elementary flux modes by linear programming.

    PubMed

    Quek, Lake-Ee; Nielsen, Lars K

    2014-07-30

    The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is near impossible. Even for moderately-sized models (<400 reactions), existing approaches based on the Double Description method must iterate through a large number of combinatorial candidates, thus imposing an immense processor and memory demand. Based on an alternative elementarity test, we developed a depth-first search algorithm using linear programming (LP) to enumerate EFMs in an exhaustive fashion. Constraints can be introduced to directly generate a subset of EFMs satisfying the set of constraints. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment into computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. The speed of the algorithm was comparable to efmtool, a mainstream Double Description method, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints.
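
    The kind of flux-feasibility test the depth-first search relies on can be posed as a small linear program; a toy sketch, assuming SciPy's linprog and a made-up stoichiometric matrix (the elementarity test itself is not reproduced), is shown below.

      # Flux-feasibility check via LP (assumption: toy network, not a genome-scale model).
      import numpy as np
      from scipy.optimize import linprog

      S = np.array([[1, -1,  0,  0],      # toy network: 4 irreversible reactions, 2 metabolites
                    [0,  1, -1, -1]])

      def feasible(forced_zero, forced_active):
          """Is there a steady-state flux v >= 0 with S v = 0, some fluxes fixed to zero,
          and designated fluxes forced above a small threshold?"""
          n = S.shape[1]
          bounds = [(0, None)] * n
          for j in forced_zero:
              bounds[j] = (0, 0)
          for j in forced_active:
              bounds[j] = (1e-3, None)
          res = linprog(c=np.zeros(n), A_eq=S, b_eq=np.zeros(S.shape[0]),
                        bounds=bounds, method="highs")
          return res.success

      print(feasible(forced_zero=[3], forced_active=[0]))   # True: v = (x, x, x, 0) works
      print(feasible(forced_zero=[1], forced_active=[0]))   # False: reaction 0 needs reaction 1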

  10. The Minkowski sum of a zonotope and the Voronoi polytope of the root lattice E{sub 7}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grishukhin, Vyacheslav P

    2012-11-30

    We show that the Minkowski sum P_V(E_7) + Z(U) of the Voronoi polytope P_V(E_7) of the root lattice E_7 and the zonotope Z(U) is a 7-dimensional parallelohedron if and only if the set U consists of minimal vectors of the dual lattice E_7^* up to scalar multiplication, and U does not contain forbidden sets. The minimal vectors of E_7 are the vectors r of the classical root system E_7. If the r^2-norm of the roots is set equal to 2, then the scalar products of minimal vectors from the dual lattice only take the values ±1/2. A set of minimal vectors is referred to as forbidden if it consists of six vectors, and the directions of some of these vectors can be changed so as to obtain a set of six vectors with all the pairwise scalar products equal to 1/2. Bibliography: 11 titles.

  11. Specialized minimal PDFs for optimized LHC calculations.

    PubMed

    Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Rojo, Juan

    2016-01-01

    We present a methodology for the construction of parton distribution functions (PDFs) designed to provide an accurate representation of PDF uncertainties for specific processes or classes of processes with a minimal number of PDF error sets: specialized minimal PDF sets, or SM-PDFs. We construct these SM-PDFs in such a way that sets corresponding to different input processes can be combined without losing information, specifically as regards their correlations, and that they are robust upon smooth variations of the kinematic cuts. The proposed strategy never discards information, so that the SM-PDF sets can be enlarged by the addition of new processes, until the prior PDF set is eventually recovered for a large enough set of processes. We illustrate the method by producing SM-PDFs tailored to Higgs, top-quark pair, and electroweak gauge boson physics, and we determine that, when the PDF4LHC15 combined set is used as the prior, around 11, 4, and 11 Hessian eigenvectors, respectively, are enough to fully describe the corresponding processes.

  12. Effects of Gradient Coil Noise and Gradient Coil Replacement on the Reproducibility of Resting State Networks.

    PubMed

    Bagarinao, Epifanio; Tsuzuki, Erina; Yoshida, Yukina; Ozawa, Yohei; Kuzuya, Maki; Otani, Takashi; Koyama, Shuji; Isoda, Haruo; Watanabe, Hirohisa; Maesawa, Satoshi; Naganawa, Shinji; Sobue, Gen

    2018-01-01

    The stability of the MRI scanner throughout a given study is critical in minimizing hardware-induced variability in the acquired imaging data set. However, MRI scanners do malfunction at times, which could generate image artifacts and would require the replacement of a major component such as its gradient coil. In this article, we examined the effect of low intensity, randomly occurring hardware-related noise due to a faulty gradient coil on brain morphometric measures derived from T1-weighted images and resting state networks (RSNs) constructed from resting state functional MRI. We also introduced a method to detect and minimize the effect of the noise associated with a faulty gradient coil. Finally, we assessed the reproducibility of these morphometric measures and RSNs before and after gradient coil replacement. Our results showed that gradient coil noise, even at relatively low intensities, could introduce a large number of voxels exhibiting spurious significant connectivity changes in several RSNs. However, censoring the affected volumes during the analysis could minimize, if not completely eliminate, these spurious connectivity changes and could lead to reproducible RSNs even after gradient coil replacement.

  13. Effects of Gradient Coil Noise and Gradient Coil Replacement on the Reproducibility of Resting State Networks

    PubMed Central

    Bagarinao, Epifanio; Tsuzuki, Erina; Yoshida, Yukina; Ozawa, Yohei; Kuzuya, Maki; Otani, Takashi; Koyama, Shuji; Isoda, Haruo; Watanabe, Hirohisa; Maesawa, Satoshi; Naganawa, Shinji; Sobue, Gen

    2018-01-01

    The stability of the MRI scanner throughout a given study is critical in minimizing hardware-induced variability in the acquired imaging data set. However, MRI scanners do malfunction at times, which could generate image artifacts and would require the replacement of a major component such as its gradient coil. In this article, we examined the effect of low intensity, randomly occurring hardware-related noise due to a faulty gradient coil on brain morphometric measures derived from T1-weighted images and resting state networks (RSNs) constructed from resting state functional MRI. We also introduced a method to detect and minimize the effect of the noise associated with a faulty gradient coil. Finally, we assessed the reproducibility of these morphometric measures and RSNs before and after gradient coil replacement. Our results showed that gradient coil noise, even at relatively low intensities, could introduce a large number of voxels exhibiting spurious significant connectivity changes in several RSNs. However, censoring the affected volumes during the analysis could minimize, if not completely eliminate, these spurious connectivity changes and could lead to reproducible RSNs even after gradient coil replacement. PMID:29725294

  14. Scaling of phloem structure and optimality of sugar transport in conifer needles

    NASA Astrophysics Data System (ADS)

    Jensen, Kaare H.; Ronellenfitsch, Henrik; Liesche, Johannes; Holbrook, N. Michele; Schulz, Alexander; Katifori, Eleni

    2015-11-01

    The phloem vascular system facilitates transport of energy-rich sugar and signalling molecules in plants, thus permitting long-range communication within the organism and growth of non-photosynthesizing organs such as roots and fruits. The flow is driven by osmotic pressure, generated by differences in sugar concentration between distal parts of the plant. The phloem is an intricate distribution system, and many questions about its regulation and structural diversity remain unanswered. Here, we investigate the phloem structure in the simplest possible geometry: a linear leaf, found, for example, in the needles of conifer trees. We measure the phloem structure in four tree species representing a diverse set of habitats and needle sizes, from 1 cm (Picea omorika) to 35 cm (Pinus palustris). We show that the phloem shares common traits across these four species and find that the size of its conductive elements obeys a power law. We present a minimal model that accounts for these common traits and takes into account the transport strategy and natural constraints. This minimal model predicts a power law phloem distribution consistent with transport energy minimization, suggesting that energetics are more important than translocation speed at the leaf level.

  15. An optimal autonomous microgrid cluster based on distributed generation droop parameter optimization and renewable energy sources using an improved grey wolf optimizer

    NASA Astrophysics Data System (ADS)

    Moazami Goodarzi, Hamed; Kazemi, Mohammad Hosein

    2018-05-01

    Microgrid (MG) clustering is regarded as an important driver in improving the robustness of MGs. However, little research has been conducted on providing appropriate MG clustering. This article addresses this shortfall. It proposes a novel multi-objective optimization approach for finding optimal clustering of autonomous MGs by focusing on variables such as distributed generation (DG) droop parameters, the location and capacity of DG units, renewable energy sources, capacitors and powerline transmission. Power losses are minimized and voltage stability is improved while virtual cut-set lines with minimum power transmission for clustering MGs are obtained. A novel chaotic grey wolf optimizer (CGWO) algorithm is applied to solve the proposed multi-objective problem. The performance of the approach is evaluated by utilizing a 69-bus MG in several scenarios.
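
    For reference, the standard grey wolf optimizer update that the improved algorithm builds on can be written in a few lines; the sketch below (NumPy, a toy sphere objective, and without the chaotic maps or the microgrid objective of the paper) is only an illustration of the baseline method.

      # Bare-bones grey wolf optimizer (assumption: standard GWO on a toy objective).
      import numpy as np

      def gwo(objective, dim, n_wolves=20, iters=200, lb=-10.0, ub=10.0, seed=0):
          rng = np.random.default_rng(seed)
          X = rng.uniform(lb, ub, size=(n_wolves, dim))          # wolf positions
          for t in range(iters):
              fitness = np.apply_along_axis(objective, 1, X)
              order = np.argsort(fitness)
              alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
              a = 2 - 2 * t / iters                              # linearly decreasing coefficient
              new_X = np.zeros_like(X)
              for leader in (alpha, beta, delta):
                  r1 = rng.random((n_wolves, dim))
                  r2 = rng.random((n_wolves, dim))
                  A = 2 * a * r1 - a
                  C = 2 * r2
                  D = np.abs(C * leader - X)
                  new_X += (leader - A * D) / 3                  # average of the three leader-guided moves
              X = np.clip(new_X, lb, ub)
          fitness = np.apply_along_axis(objective, 1, X)
          return X[np.argmin(fitness)]

      print(gwo(lambda x: np.sum(x**2), dim=3))                  # should approach the origin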

  16. A new method for producing automated seismic bulletins: Probabilistic event detection, association, and location

    DOE PAGES

    Draelos, Timothy J.; Ballard, Sanford; Young, Christopher J.; ...

    2015-10-01

    Given a set of observations within a specified time window, a fitness value is calculated at each grid node by summing station-specific conditional fitness values. Assuming each observation was generated by a refracted P wave, these values are proportional to the conditional probabilities that each observation was generated by a seismic event at the grid node. The node with highest fitness value is accepted as a hypothetical event location, subject to some minimal fitness value, and all arrivals within a longer time window consistent with that event are associated with it. During the association step, a variety of different phases are considered. In addition, once associated with an event, an arrival is removed from further consideration. While unassociated arrivals remain, the search for other events is repeated until none are identified.
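
    The detect-and-associate loop described above can be summarized schematically; in the sketch below (plain Python), the station-specific conditional fitness is passed in as a function, and the toy scoring rule at the end is a hypothetical stand-in for the real travel-time-based probabilities.

      # Greedy detect-and-associate sketch (assumption: toy fitness, not the published method's scores).
      def build_events(grid_nodes, arrivals, station_fitness, min_fitness):
          events = []
          remaining = list(arrivals)
          while remaining:
              # Fitness of every node: sum of station-specific conditional scores.
              scores = {node: sum(station_fitness(node, a) for a in remaining)
                        for node in grid_nodes}
              best = max(scores, key=scores.get)
              if scores[best] < min_fitness:
                  break                                  # no hypothetical event is credible
              hits = [a for a in remaining if station_fitness(best, a) > 0.0]
              events.append((best, hits))
              remaining = [a for a in remaining if a not in hits]   # arrivals are used once only
          return events

      # Toy usage: nodes and arrivals are just numbers; score 1 when they are "close".
      toy_fitness = lambda node, arrival: 1.0 if abs(node - arrival) < 2.0 else 0.0
      print(build_events(grid_nodes=[0, 10, 20], arrivals=[0.5, 1.2, 10.3, 9.8],
                         station_fitness=toy_fitness, min_fitness=1.5))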

  17. The benefits of computer-generated feedback for mathematics problem solving.

    PubMed

    Fyfe, Emily R; Rittle-Johnson, Bethany

    2016-07-01

    The goal of the current research was to better understand when and why feedback has positive effects on learning and to identify features of feedback that may improve its efficacy. In a randomized experiment, second-grade children received instruction on a correct problem-solving strategy and then solved a set of relevant problems. Children were assigned to receive no feedback, immediate feedback, or summative feedback from the computer. On a posttest the following day, feedback resulted in higher scores relative to no feedback for children who started with low prior knowledge. Immediate feedback was particularly effective, facilitating mastery of the material for children with both low and high prior knowledge. Results suggest that minimal computer-generated feedback can be a powerful form of guidance during problem solving.

  18. Principal Component Analysis of Thermographic Data

    NASA Technical Reports Server (NTRS)

    Winfree, William P.; Cramer, K. Elliott; Zalameda, Joseph N.; Howell, Patricia A.; Burke, Eric R.

    2015-01-01

    Principal Component Analysis (PCA) has been shown effective for reducing thermographic NDE data. While a reliable technique for enhancing the visibility of defects in thermal data, PCA can be computationally intense and time consuming when applied to the large data sets typical in thermography. Additionally, PCA can experience problems when very large defects are present (defects that dominate the field-of-view), since the calculation of the eigenvectors is now governed by the presence of the defect, not the "good" material. To increase the processing speed and to minimize the negative effects of large defects, an alternative method of PCA is being pursued where a fixed set of eigenvectors, generated from an analytic model of the thermal response of the material under examination, is used to process the thermal data from composite materials. This method has been applied for characterization of flaws.
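
    The key step, projecting each pixel's time history onto a fixed eigenvector basis instead of computing eigenvectors from the data, reduces to a single matrix product; the sketch below (NumPy, with a synthetic basis derived from idealized cooling curves rather than a real thermal model) illustrates the idea.

      # Fixed-basis projection sketch (assumption: synthetic model curves and random data).
      import numpy as np

      frames, rows, cols = 100, 64, 64
      t = np.linspace(0.01, 1.0, frames)

      # "Analytic model" responses: a family of idealized cooling curves.
      model_curves = np.stack([t ** (-0.5 * k) for k in (0.8, 1.0, 1.2)], axis=1)
      basis, _, _ = np.linalg.svd(model_curves, full_matrices=False)   # fixed eigenvectors (frames x 3)

      # Measured data: each pixel's time history, flattened to (frames, pixels).
      data = np.random.rand(frames, rows * cols)

      # Project every pixel history onto the fixed basis; no data-driven eigendecomposition,
      # so large defects cannot skew the basis and the cost is a single matrix product.
      scores = basis.T @ data                       # (3, pixels)
      component_images = scores.reshape(3, rows, cols)
      print(component_images.shape)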

  19. Effects of an ozone-generating air purifier on indoor secondary particles in three residential dwellings.

    PubMed

    Hubbard, H F; Coleman, B K; Sarwar, G; Corsi, R L

    2005-12-01

    The use of indoor ozone generators as air purifiers has steadily increased over the past decade. Many ozone generators are marketed to consumers for their ability to eliminate odors and microbial agents and to improve health. In addition to the harmful effects of ozone, recent studies have shown that heterogeneous and homogeneous reactions between ozone and some unsaturated hydrocarbons can be an important source of indoor secondary pollutants, including free radicals, carbonyls, carboxylic acids, and fine particles. Experiments were conducted in one apartment and two detached single-family dwellings in Austin, TX, to assess the effects of an ozone generator on indoor secondary organic aerosol concentrations in actual residential settings. Ozone was generated using a commercial ozone generator marketed as an air purifier, and particle measurements were recorded before, during, and after the release of terpenes from a pine oil-based cleaning product. Particle number concentration, ozone concentration, and air exchange rate were measured during each experiment. Particle number and mass concentrations increased when both terpenes and ozone were present at elevated levels. Experimental results indicate that ozone generators in the presence of terpene sources facilitate the growth of indoor fine particles in residential indoor atmospheres. Human exposure to secondary organic particles can be reduced by minimizing the intentional release of ozone, particularly in the presence of terpene sources. Past studies have shown that ozone-initiated indoor chemistry can lead to elevated concentrations of fine particulate matter, but have generally been completed in controlled laboratory environments and office buildings. We explored the effects of an explicit ozone generator marketed as an air purifier on the formation of secondary organic aerosol mass in actual residential indoor settings. Results indicate significant increases in number and mass concentrations for particles <0.7 microns in diameter, particularly when an ozone generator is used in the presence of a terpene source such as a pine oil-based cleaner. These results add evidence to the potentially harmful effects of ozone generation in residential environments.

  20. Using Delft3D to Simulate Current Energy Conversion

    NASA Astrophysics Data System (ADS)

    James, S. C.; Chartrand, C.; Roberts, J.

    2015-12-01

    As public concern with renewable energy increases, current energy conversion (CEC) technology is being developed to optimize energy output and minimize environmental impact. CEC turbines generate energy from tidal and current systems and create wakes that interact with turbines located downstream of a device. The placement of devices can greatly influence power generation and structural reliability. CECs can also alter the ecosystem process surrounding the turbines, such as flow regimes, sediment dynamics, and water quality. Software is needed to investigate specific CEC sites to simulate power generation and hydrodynamic responses of a flow through a CEC turbine array. This work validates Delft3D against several flume experiments by simulating the power generation and hydrodynamic response of flow through a turbine or actuator disc(s). Model parameters are then calibrated against these data sets to reproduce momentum removal and wake recovery data with 3-D flow simulations. Simulated wake profiles and turbulence intensities compare favorably to the experimental data and demonstrate the utility and accuracy of a fast-running tool for future siting and analysis of CEC arrays in complex domains.

  1. End-of-Life Caregiver’s Perspectives on their Role: Generative Caregiving

    PubMed Central

    Phillips, Linda R.; Reed, Pamela G.

    2010-01-01

    Purpose: To describe caregivers’ constructions of their caregiving role in providing care to elders they knew were dying from life-limiting illnesses. Design and Methods: Study involved in-depth interviews with 27 family caregivers. Data were analyzed using constant comparative analysis. Results: Four categories were identified: centering life on the elder, maintaining a sense of normalcy, minimizing suffering, and gift giving. Generative caregiving was the term adopted to describe the end-of-life (EOL) caregiving role. Generative caregiving is situated in the present with a goal to enhance the elder’s present quality of life, but also draws from the past and projects into the future with a goal to create a legacy that honors the elder and the elder–caregiver relationship. Implications: Results contribute to our knowledge about EOL caregiving by providing an explanatory framework and setting the caregiving experience in the context of life-span development. PMID:19651667

  2. Development of multiplex microsatellite PCR panels for the seagrass Thalassia hemprichii (Hydrocharitaceae)

    PubMed Central

    van Dijk, Kor-jent; Mellors, Jane; Waycott, Michelle

    2014-01-01

    • Premise of the study: New microsatellites were developed for the seagrass Thalassia hemprichii (Hydrocharitaceae), a long-lived seagrass species that is found throughout the shallow waters of the tropical and subtropical Indo-West Pacific. Three multiplex PCR panels were designed utilizing new and previously developed markers, resulting in a toolkit for generating a 16-locus genotype. • Methods and Results: Through the use of microsatellite enrichment and next-generation sequencing, 16 new, validated, polymorphic microsatellite markers were isolated. Diversity was between two and four alleles per locus, totaling 36 alleles. These markers, plus previously developed microsatellite markers for T. hemprichii and T. testudinum, were tested for suitability in multiplex PCR panels. • Conclusions: The generation of an easily replicated suite of multiplex panels of codominant molecular markers will allow for high-resolution and detailed genetic structure analysis and clonality assessment with minimal genotyping costs. We suggest the establishment of a T. hemprichii primer convention for the unification of future data sets. PMID:25383269

  3. Systems Biology Perspectives on Minimal and Simpler Cells

    PubMed Central

    Xavier, Joana C.; Patil, Kiran Raosaheb

    2014-01-01

    SUMMARY The concept of the minimal cell has fascinated scientists for a long time, from both fundamental and applied points of view. This broad concept encompasses extreme reductions of genomes, the last universal common ancestor (LUCA), the creation of semiartificial cells, and the design of protocells and chassis cells. Here we review these different areas of research and identify common and complementary aspects of each one. We focus on systems biology, a discipline that is greatly facilitating the classical top-down and bottom-up approaches toward minimal cells. In addition, we also review the so-called middle-out approach and its contributions to the field with mathematical and computational models. Owing to the advances in genomics technologies, much of the work in this area has been centered on minimal genomes, or rather minimal gene sets, required to sustain life. Nevertheless, a fundamental expansion has been taking place in the last few years wherein the minimal gene set is viewed as a backbone of a more complex system. Complementing genomics, progress is being made in understanding the system-wide properties at the levels of the transcriptome, proteome, and metabolome. Network modeling approaches are enabling the integration of these different omics data sets toward an understanding of the complex molecular pathways connecting genotype to phenotype. We review key concepts central to the mapping and modeling of this complexity, which is at the heart of research on minimal cells. Finally, we discuss the distinction between minimizing the number of cellular components and minimizing cellular complexity, toward an improved understanding and utilization of minimal and simpler cells. PMID:25184563

  4. Comparing and characterizing three-dimensional point clouds derived by structure from motion photogrammetry

    NASA Astrophysics Data System (ADS)

    Schwind, Michael

    Structure from Motion (SfM) is a photogrammetric technique whereby three-dimensional (3D) structures are estimated from overlapping two-dimensional (2D) image sequences. It is studied in the field of computer vision and utilized in fields such as archeology, engineering, and the geosciences. Currently, many SfM software packages exist that allow for the generation of 3D point clouds. Little work has been done to show how topographic data generated by these software packages differ over varying terrain types and why they might produce different results. This work aims to compare and characterize the differences between point clouds generated by three different SfM software packages: two well-known proprietary solutions (Pix4D, Agisoft PhotoScan) and one open source solution (OpenDroneMap). Five terrain types were imaged utilizing a DJI Phantom 3 Professional small unmanned aircraft system (sUAS). These terrain types include a marsh environment, a gently sloped sandy beach and jetties, a forested peninsula, a house, and a flat parking lot. Each set of imagery was processed with each software package, and the resulting point clouds were directly compared with each other. Before processing the sets of imagery, the software settings were analyzed and chosen in a manner that allowed for the most similar settings to be set across the three software packages. This was done in an attempt to minimize point cloud differences caused by dissimilar settings. The characteristics of the resultant point clouds were then compared with each other. Furthermore, a terrestrial light detection and ranging (LiDAR) survey was conducted over the flat parking lot using a Riegl VZ-400 scanner. These data served as ground truth in order to conduct an accuracy assessment of the sUAS-SfM point clouds. Differences were found between the results, apparent not only in the characteristics of the clouds but also in their accuracy. This study allows users of SfM photogrammetry to have a better understanding of how different processing software packages compare and of the inherent sensitivity of SfM automation in 3D reconstruction. Because this study used mostly default settings within the software, it would be beneficial for further research to investigate the effects that changing parameters have on the fidelity of point cloud datasets generated from different SfM software packages.

  5. Method and system for fault accommodation of machines

    NASA Technical Reports Server (NTRS)

    Goebel, Kai Frank (Inventor); Subbu, Rajesh Venkat (Inventor); Rausch, Randal Thomas (Inventor); Frederick, Dean Kimball (Inventor)

    2011-01-01

    A method for multi-objective fault accommodation using predictive modeling is disclosed. The method includes using a simulated machine that simulates a faulted actual machine, and using a simulated controller that simulates an actual controller. A multi-objective optimization process is performed, based on specified control settings for the simulated controller and specified operational scenarios for the simulated machine controlled by the simulated controller, to generate a Pareto frontier-based solution space relating performance of the simulated machine to settings of the simulated controller, including adjustment to the operational scenarios to represent a fault condition of the simulated machine. Control settings of the actual controller are adjusted, represented by the simulated controller, for controlling the actual machine, represented by the simulated machine, in response to a fault condition of the actual machine, based on the Pareto frontier-based solution space, to maximize desirable operational conditions and minimize undesirable operational conditions while operating the actual machine in a region of the solution space defined by the Pareto frontier.

  6. A generalized fuzzy linear programming approach for environmental management problem under uncertainty.

    PubMed

    Fan, Yurui; Huang, Guohe; Veawab, Amornvadee

    2012-01-01

    In this study, a generalized fuzzy linear programming (GFLP) method was developed to deal with uncertainties expressed as fuzzy sets that exist in the constraints and objective function. A stepwise interactive algorithm (SIA) was advanced to solve the GFLP model and generate solutions expressed as fuzzy sets. To demonstrate its application, the developed GFLP method was applied to a regional sulfur dioxide (SO2) control planning model to identify effective SO2 mitigation policies with a minimized system performance cost under uncertainty. The results obtained represent the amount of SO2 allocated to different control measures from different sources. Compared with the conventional interval-parameter linear programming (ILP) approach, the solutions obtained through GFLP were expressed as fuzzy sets, which can provide intervals for the decision variables and objective function, as well as related possibilities. Therefore, the decision makers can make a tradeoff between model stability and plausibility based on the solutions obtained through GFLP and then identify desired policies for SO2-emission control under uncertainty.
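    The following sketch is illustrative only and is not the study's stepwise interactive algorithm: it solves a toy SO2-control LP at the lower and upper bounds of a single alpha-cut of fuzzy unit costs, so the minimized cost and the allocations come out as an interval rather than a single crisp value. All coefficients, the 500-tonne reduction target, and the capacity bounds are hypothetical placeholders.

```python
# Illustrative sketch only (not the study's stepwise interactive algorithm): a toy
# SO2 control LP solved at the lower and upper bounds of one alpha-cut of fuzzy unit
# costs, so the minimized cost comes out as an interval. All numbers are hypothetical.
import numpy as np
from scipy.optimize import linprog

# Decision variables: SO2 (tonnes) removed by two control measures.
cost_low = np.array([120.0, 95.0])     # lower bounds of the fuzzy unit costs
cost_high = np.array([150.0, 130.0])   # upper bounds of the fuzzy unit costs

A_ub = np.array([[-1.0, -1.0]])        # -x1 - x2 <= -500, i.e. x1 + x2 >= 500 (target)
b_ub = np.array([-500.0])
bounds = [(0, 400), (0, 400)]          # capacity limit of each measure

res_low = linprog(cost_low, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
res_high = linprog(cost_high, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")

print("objective interval:", (round(res_low.fun, 1), round(res_high.fun, 1)))
print("allocation in the low-cost case:", res_low.x)
```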

  7. Complex Instruction Set Quantum Computing

    NASA Astrophysics Data System (ADS)

    Sanders, G. D.; Kim, K. W.; Holton, W. C.

    1998-03-01

    In proposed quantum computers, electromagnetic pulses are used to implement logic gates on quantum bits (qubits). Gates are unitary transformations applied to coherent qubit wavefunctions, and a universal computer can be created using a minimal set of gates. By applying many elementary gates in sequence, desired quantum computations can be performed. This reduced instruction set approach to quantum computing (RISC QC) is characterized by serial application of a few basic pulse shapes and a long coherence time. However, the overall computation is ultimately a unitary matrix of the same size as any of the elementary matrices. This suggests that we might replace a sequence of reduced instructions with a single complex instruction using an optimally tailored pulse. We refer to this approach as complex instruction set quantum computing (CISC QC). One trades the requirement for long coherence times for the ability to design and generate potentially more complex pulses. We consider a model system of coupled qubits interacting through nearest-neighbor coupling and show that CISC QC can reduce the time required to perform quantum computations.

  8. Renewable Electricity: Insights for the Coming Decade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stark, Camila; Pless, Jacquelyn; Logan, Jeffrey

    2015-02-01

    A sophisticated set of renewable electricity (RE) generation technologies is now commercially available. Globally, RE captured approximately half of all capacity additions since 2011. The cost of RE is already competitive with fossil fuels in some areas around the world, and prices are anticipated to continue to decline over the next decade. RE options, led by wind and solar, are part of a suite of technologies and business solutions that are transforming electricity sectors around the world. Renewable deployment is expected to continue due to: increasingly competitive economics; favorable environmental characteristics such as low water use and minimal local air pollution and greenhouse gas (GHG) emissions; complementary risk profiles when paired with natural gas generators; and strong support from stakeholders. Despite this positive outlook for renewables, the collapse in global oil prices since mid-2014 and continued growth in natural gas supply in the United States--due to the development of low-cost shale gas--raise questions about the potential impacts of fossil fuel prices on RE. Today, oil plays a very minor role in the electricity sectors of most countries, so direct impacts on RE are likely to be minimal (except where natural gas prices are indexed on oil). Natural gas and RE generating options appear to be more serious competitors than oil and renewables. Low gas prices raise the hurdle for RE to be cost competitive. Additionally, although RE emits far less GHG than natural gas, both natural gas and RE offer the benefits of reducing carbon relative to coal and oil (see Section 4.1 for more detail on the GHG intensity of electricity technologies). However, many investors and decision makers are becoming aware of the complementary benefits of pairing natural gas and renewables to minimize risk of unstable fuel prices and maintain the reliability of electricity to the grid.

  9. Detection of CMOS bridging faults using minimal stuck-at fault test sets

    NASA Technical Reports Server (NTRS)

    Ijaz, Nabeel; Frenzel, James F.

    1993-01-01

    The performance of minimal stuck-at fault test sets at detecting bridging faults is evaluated. New functional models of circuit primitives are presented which allow accurate representation of bridging faults under switch-level simulation. The effectiveness of the patterns is evaluated using both voltage and current testing.

  10. Progressive Sampling Technique for Efficient and Robust Uncertainty and Sensitivity Analysis of Environmental Systems Models: Stability and Convergence

    NASA Astrophysics Data System (ADS)

    Sheikholeslami, R.; Hosseini, N.; Razavi, S.

    2016-12-01

    Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analyses such as sensitivity and uncertainty analysis, which require running these computationally expensive models several times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides an increasingly improved coverage of the parameter space, while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; in contrast, PLHS generates a series of smaller sub-sets (also called `slices') such that: (1) each sub-set is a Latin hypercube and achieves maximum stratification in any one-dimensional projection; (2) the progressive addition of sub-sets remains a Latin hypercube; and thus (3) the entire sample set is a Latin hypercube. Therefore, it has the capability to preserve the intended sampling properties throughout the sampling procedure. PLHS is deemed advantageous over the existing methods, particularly because it nearly avoids over- or under-sampling. Through different case studies, we show that PLHS has multiple advantages over the one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help to minimize the total simulation time by only running the simulations necessary to achieve the desired level of quality (e.g., accuracy and convergence rate).
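    As a point of reference for the sampling terminology above, the sketch below implements ordinary one-stage Latin hypercube sampling and monitors how an output statistic of a toy model stabilises as more of the sample is used; it does not implement the sliced, progressive PLHS scheme itself. The model function, sample size, and checkpoints are arbitrary choices for illustration.

```python
# One-stage Latin hypercube sampling plus a convergence check of a toy output statistic;
# this is a reference implementation of plain LHS, not the sliced PLHS scheme.
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Return an (n_samples x n_dims) Latin hypercube design on the unit hypercube."""
    u = rng.random((n_samples, n_dims))
    # One point per stratum in every dimension, with strata shuffled per column.
    strata = np.array([rng.permutation(n_samples) for _ in range(n_dims)]).T
    return (strata + u) / n_samples

rng = np.random.default_rng(42)
X = latin_hypercube(200, 3, rng)

# Toy "model": watch the running mean of the output stabilise as the sample grows.
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 - 0.5 * X[:, 2]
for n in (25, 50, 100, 200):
    print(f"first {n:3d} samples: mean output = {y[:n].mean():.4f}")
```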

  11. Defining an essence of structure determining residue contacts in proteins.

    PubMed

    Sathyapriya, R; Duarte, Jose M; Stehr, Henning; Filippis, Ioannis; Lappe, Michael

    2009-12-01

    The network of native non-covalent residue contacts determines the three-dimensional structure of a protein. However, not all contacts are of equal structural significance, and little knowledge exists about a minimal, yet sufficient, subset required to define the global features of a protein. Characterisation of this "structural essence" has remained elusive so far: no algorithmic strategy has been devised to date that could outperform a random selection in terms of 3D reconstruction accuracy (measured as the Ca RMSD). It is not only of theoretical interest (i.e., for design of advanced statistical potentials) to identify the number and nature of essential native contacts; such a subset of spatial constraints is very useful in a number of novel experimental methods (like EPR) which rely heavily on constraint-based protein modelling. To derive accurate three-dimensional models from distance constraints, we implemented a reconstruction pipeline using distance geometry. We selected a test-set of 12 protein structures from the four major SCOP fold classes and performed our reconstruction analysis. As a reference set, a series of random subsets (ranging from 10% to 90% of native contacts) are generated for each protein, and the reconstruction accuracy is computed for each subset. We have developed a rational strategy, termed "cone-peeling", that combines sequence features and network descriptors to select minimal subsets that outperform the reference sets. We present, for the first time, a rational strategy to derive a structural essence of residue contacts and provide an estimate of the size of this minimal subset. Our algorithm computes sparse subsets capable of determining the tertiary structure at approximately 4.8 Å Ca RMSD with as little as 8% of the native contacts (Ca-Ca and Cb-Cb). At the same time, a randomly chosen subset of native contacts needs about twice as many contacts to reach the same level of accuracy. This "structural essence" opens new avenues in the fields of structure prediction, empirical potentials and docking.

  12. Defining an Essence of Structure Determining Residue Contacts in Proteins

    PubMed Central

    Sathyapriya, R.; Duarte, Jose M.; Stehr, Henning; Filippis, Ioannis; Lappe, Michael

    2009-01-01

    The network of native non-covalent residue contacts determines the three-dimensional structure of a protein. However, not all contacts are of equal structural significance, and little knowledge exists about a minimal, yet sufficient, subset required to define the global features of a protein. Characterisation of this “structural essence” has remained elusive so far: no algorithmic strategy has been devised to-date that could outperform a random selection in terms of 3D reconstruction accuracy (measured as the Ca RMSD). It is not only of theoretical interest (i.e., for design of advanced statistical potentials) to identify the number and nature of essential native contacts—such a subset of spatial constraints is very useful in a number of novel experimental methods (like EPR) which rely heavily on constraint-based protein modelling. To derive accurate three-dimensional models from distance constraints, we implemented a reconstruction pipeline using distance geometry. We selected a test-set of 12 protein structures from the four major SCOP fold classes and performed our reconstruction analysis. As a reference set, series of random subsets (ranging from 10% to 90% of native contacts) are generated for each protein, and the reconstruction accuracy is computed for each subset. We have developed a rational strategy, termed “cone-peeling” that combines sequence features and network descriptors to select minimal subsets that outperform the reference sets. We present, for the first time, a rational strategy to derive a structural essence of residue contacts and provide an estimate of the size of this minimal subset. Our algorithm computes sparse subsets capable of determining the tertiary structure at approximately 4.8 Å Ca RMSD with as little as 8% of the native contacts (Ca-Ca and Cb-Cb). At the same time, a randomly chosen subset of native contacts needs about twice as many contacts to reach the same level of accuracy. This “structural essence” opens new avenues in the fields of structure prediction, empirical potentials and docking. PMID:19997489

  13. Day and Night Closed-Loop Control Using the Integrated Medtronic Hybrid Closed-Loop System in Type 1 Diabetes at Diabetes Camp.

    PubMed

    Ly, Trang T; Roy, Anirban; Grosman, Benyamin; Shin, John; Campbell, Alex; Monirabbasi, Salman; Liang, Bradley; von Eyben, Rie; Shanmugham, Satya; Clinton, Paula; Buckingham, Bruce A

    2015-07-01

    To evaluate the feasibility and efficacy of a fully integrated hybrid closed-loop (HCL) system (Medtronic MiniMed Inc., Northridge, CA), in day and night closed-loop control in subjects with type 1 diabetes, both in an inpatient setting and during 6 days at diabetes camp. The Medtronic MiniMed HCL system consists of a fourth generation (4S) glucose sensor, a sensor transmitter, and an insulin pump using a modified proportional-integral-derivative (PID) insulin feedback algorithm with safety constraints. Eight subjects were studied over 48 h in an inpatient setting. This was followed by a study of 21 subjects for 6 days at diabetes camp, randomized to either the closed-loop control group using the HCL system or to the group using the Medtronic MiniMed 530G with threshold suspend (control group). The overall mean sensor glucose percent time in range 70-180 mg/dL was similar between the groups (73.1% vs. 69.9%, control vs. HCL, respectively) (P = 0.580). Meter glucose values between 70 and 180 mg/dL were also similar between the groups (73.6% vs. 63.2%, control vs. HCL, respectively) (P = 0.086). The mean absolute relative difference of the 4S sensor was 10.8 ± 10.2%, when compared with plasma glucose values in the inpatient setting, and 12.6 ± 11.0% compared with capillary Bayer CONTOUR NEXT LINK glucose meter values during 6 days at camp. In the first clinical study of this fully integrated system using an investigational PID algorithm, the system did not demonstrate improved glucose control compared with sensor-augmented pump therapy alone. The system demonstrated good connectivity and improved sensor performance. © 2015 by the American Diabetes Association. Readers may use this article as long as the work is properly cited, the use is educational and not for profit, and the work is not altered.

  14. Rank-order-selective neurons form a temporal basis set for the generation of motor sequences.

    PubMed

    Salinas, Emilio

    2009-04-08

    Many behaviors are composed of a series of elementary motor actions that must occur in a specific order, but the neuronal mechanisms by which such motor sequences are generated are poorly understood. In particular, if a sequence consists of a few motor actions, a primate can learn to replicate it from memory after practicing it for just a few trials. How do the motor and premotor areas of the brain assemble motor sequences so fast? The network model presented here reveals part of the solution to this problem. The model is based on experiments showing that, during the performance of motor sequences, some cortical neurons are always activated at specific times, regardless of which motor action is being executed. In the model, a population of such rank-order-selective (ROS) cells drives a layer of downstream motor neurons so that these generate specific movements at different times in different sequences. A key ingredient of the model is that the amplitude of the ROS responses must be modulated by sequence identity. Because of this modulation, which is consistent with experimental reports, the network is able not only to produce multiple sequences accurately but also to learn a new sequence with minimal changes in connectivity. The ROS neurons modulated by sequence identity thus serve as a basis set for constructing arbitrary sequences of motor responses downstream. The underlying mechanism is analogous to the mechanism described in parietal areas for generating coordinate transformations in the spatial domain.

  15. RANK-ORDER-SELECTIVE NEURONS FORM A TEMPORAL BASIS SET FOR THE GENERATION OF MOTOR SEQUENCES

    PubMed Central

    Salinas, Emilio

    2009-01-01

    Many behaviors are composed of a series of elementary motor actions that must occur in a specific order, but the neuronal mechanisms by which such motor sequences are generated are poorly understood. In particular, if a sequence consists of a few motor actions, a primate can learn to replicate it from memory after practicing it for just a few trials. How do the motor and premotor areas of the brain assemble motor sequences so fast? The network model presented here reveals part of the solution to this problem. The model is based on experiments showing that, during the performance of motor sequences, some cortical neurons are always activated at specific times, regardless of which motor action is being executed. In the model, a population of such rank-order-selective (ROS) cells drives a layer of downstream motor neurons so that these generate specific movements at different times in different sequences. A key ingredient of the model is that the amplitude of the ROS responses must be modulated by sequence identity. Because of this modulation, which is consistent with experimental reports, the network is able not only to produce multiple sequences accurately but also to learn a new sequence with minimal changes in connectivity. The ROS neurons modulated by sequence identity thus serve as a basis set for constructing arbitrary sequences of motor responses downstream. The underlying mechanism is analogous to the mechanism described in parietal areas for generating coordinate transformations in the spatial domain. PMID:19357265

  16. System for real-time generation of georeferenced terrain models

    NASA Astrophysics Data System (ADS)

    Schultz, Howard J.; Hanson, Allen R.; Riseman, Edward M.; Stolle, Frank; Zhu, Zhigang; Hayward, Christopher D.; Slaymaker, Dana

    2001-02-01

    A growing number of law enforcement applications, especially in the areas of border security, drug enforcement and anti- terrorism require high-resolution wide area surveillance from unmanned air vehicles. At the University of Massachusetts we are developing an aerial reconnaissance system capable of generating high resolution, geographically registered terrain models (in the form of a seamless mosaic) in real-time from a single down-looking digital video camera. The efficiency of the processing algorithms, as well as the simplicity of the hardware, will provide the user with the ability to produce and roam through stereoscopic geo-referenced mosaic images in real-time, and to automatically generate highly accurate 3D terrain models offline in a fraction of the time currently required by softcopy conventional photogrammetry systems. The system is organized around a set of integrated sensor and software components. The instrumentation package is comprised of several inexpensive commercial-off-the-shelf components, including a digital video camera, a differential GPS, and a 3-axis heading and reference system. At the heart of the system is a set of software tools for image registration, mosaic generation, geo-location and aircraft state vector recovery. Each process is designed to efficiently handle the data collected by the instrument package. Particular attention is given to minimizing geospatial errors at each stage, as well as modeling propagation of errors through the system. Preliminary results for an urban and forested scene are discussed in detail.

  17. Dispatch Control with PEV Charging and Renewables for Multiplayer Game Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Nathan; Johnson, Brian; McJunkin, Timothy

    This paper presents a demand response model for a hypothetical microgrid that integrates renewable resources and plug-in electric vehicle (PEV) charging systems. It is assumed that the microgrid has black start capability and that external generation is available for purchase while grid connected to satisfy additional demand. The microgrid is developed such that in addition to renewable, non-dispatchable generation from solar, wind and run of the river hydroelectric resources, local dispatchable generation is available in the form of small hydroelectric and moderately sized gas and coal fired facilities. To accurately model demand, the load model is separated into independent residential, commercial, industrial, and PEV charging systems. These are dispatched and committed based on a mixed integer linear program developed to minimize the cost of generation and load shedding while satisfying constraints associated with line limits, conservation of energy, and ramp rates of the generation units. The model extends a research tool to longer time frames intended for policy setting and educational environments and provides a realistic and intuitive understanding of beneficial and challenging aspects of electrification of vehicles combined with integration of green electricity production.

  18. QuickFF: A program for a quick and easy derivation of force fields for metal-organic frameworks from ab initio input.

    PubMed

    Vanduyfhuys, Louis; Vandenbrande, Steven; Verstraelen, Toon; Schmid, Rochus; Waroquier, Michel; Van Speybroeck, Veronique

    2015-05-15

    QuickFF is a software package to derive accurate force fields for isolated and complex molecular systems in a quick and easy manner. Apart from its general applicability, the program has been designed to generate force fields for metal-organic frameworks in an automated fashion. The force field parameters for the covalent interaction are derived from ab initio data. The mathematical expression of the covalent energy is kept simple to ensure robustness and to avoid fitting deficiencies as much as possible. The user needs to produce an equilibrium structure and a Hessian matrix for one or more building units. Afterward, a force field is generated for the system using a three-step method implemented in QuickFF. The first two steps of the methodology are designed to minimize correlations among the force field parameters. In the last step, the parameters are refined by requiring the force field to reproduce the ab initio Hessian matrix in Cartesian coordinate space as accurately as possible. The method is applied to a set of 1000 organic molecules to show the ease of use of the software protocol. To illustrate its application to metal-organic frameworks (MOFs), QuickFF is used to determine force fields for MIL-53(Al) and MOF-5. For both materials, accurate force fields had already been generated in the literature, but they required a lot of manual intervention. QuickFF is a tool that can easily be used by anyone with a basic knowledge of performing ab initio calculations. As a result, accurate force fields are generated with minimal effort. © 2015 Wiley Periodicals, Inc.

  19. A Locally Optimal Algorithm for Estimating a Generating Partition from an Observed Time Series and Its Application to Anomaly Detection.

    PubMed

    Ghalyan, Najah F; Miller, David J; Ray, Asok

    2018-06-12

    Estimation of a generating partition is critical for symbolization of measurements from discrete-time dynamical systems, where a sequence of symbols from a (finite-cardinality) alphabet may uniquely specify the underlying time series. Such symbolization is useful for computing measures (e.g., Kolmogorov-Sinai entropy) to identify or characterize the (possibly unknown) dynamical system. It is also useful for time series classification and anomaly detection. The seminal work of Hirata, Judd, and Kilminster (2004) derives a novel objective function, akin to a clustering objective, that measures the discrepancy between a set of reconstruction values and the points from the time series. They cast estimation of a generating partition via the minimization of their objective function. Unfortunately, their proposed algorithm is nonconvergent, with no guarantee of finding even locally optimal solutions with respect to their objective. The difficulty is a heuristic nearest-neighbor symbol assignment step. Alternatively, we develop a novel, locally optimal algorithm for their objective. We apply iterative nearest-neighbor symbol assignments with guaranteed discrepancy descent, by which joint, locally optimal symbolization of the entire time series is achieved. While most previous approaches frame generating partition estimation as a state-space partitioning problem, we recognize that minimizing the Hirata et al. (2004) objective function does not induce an explicit partitioning of the state space, but rather of the space consisting of the entire time series (effectively, clustering in a (countably) infinite-dimensional space). Our approach also amounts to a novel type of sliding block lossy source coding. Improvement, with respect to several measures, is demonstrated over popular methods for symbolizing chaotic maps. We also apply our approach to time-series anomaly detection, considering both chaotic maps and a failure application in a polycrystalline alloy material.
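    The sketch below is only a loose analogy to the iterative nearest-neighbour symbol assignment described above: it delay-embeds a logistic-map time series and alternates nearest-neighbour assignment with centroid updates (a Lloyd/k-means style loop whose discrepancy cannot increase), rather than optimizing the Hirata et al. (2004) objective. Embedding dimension, alphabet size, and iteration count are arbitrary.

```python
# Loose analogy only: delay-embed a logistic-map series and symbolize it with Lloyd-style
# iterations (nearest-neighbour assignment + centroid update), whose discrepancy never
# increases. This is NOT the Hirata et al. (2004) objective nor the authors' algorithm.
import numpy as np

def logistic_map(n, x0=0.3, r=3.9):
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x

def delay_embed(x, dim=2):
    return np.column_stack([x[i:len(x) - dim + 1 + i] for i in range(dim)])

def symbolize(vectors, n_symbols=2, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = vectors[rng.choice(len(vectors), n_symbols, replace=False)]
    for _ in range(n_iter):
        dists = np.linalg.norm(vectors[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)            # nearest-neighbour symbol assignment
        for k in range(n_symbols):               # centroid update (skip empty symbols)
            members = vectors[labels == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return labels

symbols = symbolize(delay_embed(logistic_map(2000)))
print("first 20 symbols:", symbols[:20])
```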

  20. Heel impact forces during barefoot versus minimally shod walking among Tarahumara subsistence farmers and urban Americans

    PubMed Central

    Koch, Elizabeth; Holowka, Nicholas B.; Lieberman, Daniel E.

    2018-01-01

    Despite substantial recent interest in walking barefoot and in minimal footwear, little is known about potential differences in walking biomechanics when unshod versus minimally shod. To test the hypothesis that heel impact forces are similar during barefoot and minimally shod walking, we analysed ground reaction forces recorded in both conditions with a pedography platform among indigenous subsistence farmers, the Tarahumara of Mexico, who habitually wear minimal sandals, as well as among urban Americans wearing commercially available minimal sandals. Among both the Tarahumara (n = 35) and Americans (n = 30), impact peaks generated in sandals had significantly (p < 0.05) higher force magnitudes, slower loading rates and larger vertical impulses than during barefoot walking. These kinetic differences were partly due to individuals' significantly greater effective mass when walking in sandals. Our results indicate that, in general, people tread more lightly when walking barefoot than in minimal footwear. Further research is needed to test if the variations in impact peaks generated by walking barefoot or in minimal shoes have consequences for musculoskeletal health. PMID:29657826

  1. Pilot Integration of HIV Screening and Healthcare Settings with Multi-Component Social Network and Partner Testing for HIV Detection.

    PubMed

    Rentz, Michael F; Ruffner, Andrew H; Ancona, Rachel M; Hart, Kimberly W; Kues, John R; Barczak, Christopher M; Lindsell, Christopher J; Fichtenbaum, Carl J; Lyons, Michael S

    2017-11-23

    Healthcare settings screen broadly for HIV. Public health settings use social network and partner testing ("Transmission Network Targeting (TNT)") to select high-risk individuals based on their contacts. HIV screening and TNT systems are not integrated, and healthcare settings have not implemented TNT. The study aimed to evaluate pilot implementation of multi-component, multi-venue TNT in conjunction with HIV screening by a healthcare setting. Our urban, academic health center implemented a TNT program in collaboration with the local health department for five months during 2011. High-risk or HIV positive patients of the infectious diseases clinic and emergency department HIV screening program were recruited to access social and partner networks via compensated peer-referral, testing of companions present with them, and partner notification services. Contacts became the next-generation index cases in a snowball recruitment strategy. The pilot TNT program yielded 485 HIV tests for 482 individuals through eight generations of recruitment with five (1.0%; 95% CI = 0.4%, 2.3%) new diagnoses. Of these, 246 (51.0%; 95% CI = 46.6%, 55.5%) reported that they had not been tested for HIV within the last 12 months and 383 (79.5%; 95% CI = 75.7%, 82.9%) had not been tested by the existing ED screening program within the last five years. TNT complements population screening by more directly targeting high-risk individuals and by expanding the population receiving testing. Information from existing healthcare services could be used to seed TNT programs, or TNT could be implemented within healthcare settings. Research evaluating multi-component, multi-venue HIV detection is necessary to maximize complementary approaches while minimizing redundancy. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  2. Cross-Study Homogeneity of Psoriasis Gene Expression in Skin across a Large Expression Range

    PubMed Central

    Kerkof, Keith; Timour, Martin; Russell, Christopher B.

    2013-01-01

    Background: In psoriasis, only limited overlap between sets of genes identified as differentially expressed (psoriatic lesional (PP) vs. psoriatic non-lesional (PN)) was found using statistical and fold-change cut-offs. To provide a framework for utilizing prior psoriasis data sets we sought to understand the consistency of those sets. Methodology/Principal Findings: Microarray expression profiling and qRT-PCR were used to characterize gene expression in PP and PN skin from psoriasis patients. cDNA (three new data sets) and cRNA hybridization (four existing data sets) data were compared using a common analysis pipeline. Agreement between data sets was assessed using varying qualitative and quantitative cut-offs to generate a differentially expressed gene (DEG) list in a source data set and then using other data sets to validate the list. Concordance increased from 67% across all probe sets to over 99% across more than 10,000 probe sets when statistical filters were employed. The fold-change behavior of individual genes tended to be consistent across the multiple data sets. We found that genes with <2-fold change values were quantitatively reproducible between pairs of data-sets. In a subset of transcripts with a role in inflammation, changes detected by microarray were confirmed by qRT-PCR with high concordance. For transcripts with both PN and PP levels within the microarray dynamic range, microarray and qRT-PCR were quantitatively reproducible, including minimal fold-changes in IL13, TNFSF11, and TNFRSF11B and genes with >10-fold changes in either direction such as CHRM3, IL12B and IFNG. Conclusions/Significance: Gene expression changes in psoriatic lesions were consistent across different studies, despite differences in patient selection, sample handling, and microarray platforms, but between-study comparisons showed stronger agreement within than between platforms. We could use cut-offs as low as log10(ratio) = 0.1 (fold-change = 1.26), generating larger gene lists that validate on independent data sets. The reproducibility of PP signatures across data sets suggests that different sample sets can be productively compared. PMID:23308107

  3. Multiple site receptor modeling with a minimal spanning tree combined with a Kohonen neural network

    NASA Astrophysics Data System (ADS)

    Hopke, Philip K.

    1999-12-01

    A combination of two pattern recognition methods has been developed that allows the generation of geographical emission maps from multivariate environmental data. In such a projection into a visually interpretable subspace by a Kohonen Self-Organizing Feature Map, the topology of the higher-dimensional variable space can be preserved, but parts of the information about the correct neighborhood among the sample vectors will be lost. This can partly be compensated for by an additional projection of Prim's Minimal Spanning Tree into the trained neural network. This new environmental receptor modeling technique has been adapted for multiple sampling sites. The behavior of the method has been studied using simulated data. Subsequently, the method has been applied to mapping data sets from the Southern California Air Quality Study. The projection of 17 chemical variables measured at up to 8 sampling sites provided a 2D, visually interpretable, geometrically reasonable arrangement of air pollution sources in the South Coast Air Basin.
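    To make one ingredient of the combined method concrete, the sketch below computes minimal-spanning-tree edges over random placeholder sample vectors (17 variables per sample, echoing the study's variable count) using scipy; in the full approach these edges would then be drawn onto a trained self-organizing feature map, which is not reproduced here.

```python
# Sketch of one ingredient only: a minimal spanning tree over sample composition vectors
# (scipy's MST routine standing in for Prim's algorithm in the abstract), whose edges
# could then be drawn onto a trained self-organizing map. Data are random placeholders.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
samples = rng.random((25, 17))          # 25 samples x 17 chemical variables

dist = squareform(pdist(samples))       # dense pairwise Euclidean distance matrix
mst = minimum_spanning_tree(dist)       # sparse matrix holding the tree edges

rows, cols = mst.nonzero()
for i, j in zip(rows, cols):
    print(f"edge {i:2d} -> {j:2d}, length {mst[i, j]:.3f}")
```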

  4. Scheduling Non-Preemptible Jobs to Minimize Peak Demand

    DOE PAGES

    Yaw, Sean; Mumey, Brendan

    2017-10-28

    Our paper examines an important problem in smart grid energy scheduling: peaks in power demand are proportionally more expensive to generate and provision for. The issue is exacerbated in local microgrids that do not benefit from the aggregate smoothing experienced by large grids. Demand-side scheduling can reduce these peaks by taking advantage of the fact that there is often flexibility in job start times. We focus attention on the case where the jobs are non-preemptible, meaning that once started, they run to completion. The associated optimization problem is called the peak demand minimization problem, and has been previously shown to be NP-hard. Our results include an optimal fixed-parameter tractable algorithm, a polynomial-time approximation algorithm, and an effective heuristic that can also be used in an online setting of the problem. Simulation results show that these methods can reduce peak demand by up to 50% versus on-demand scheduling for household power jobs.
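    As a hedged illustration of demand-side scheduling with flexible start times (not the fixed-parameter, approximation, or online algorithms of the paper), the sketch below greedily places each non-preemptible job at the feasible start time that minimizes the resulting peak load. Job powers, durations, and start-time windows are invented.

```python
# Hedged sketch of demand-side scheduling (a simple greedy heuristic, not the paper's
# algorithms): each non-preemptible job is placed at the feasible start time that
# minimizes the resulting peak load. Jobs are (power, duration, earliest, latest) tuples
# in arbitrary units; the values are invented.
from typing import List, Tuple

def schedule(jobs: List[Tuple[float, int, int, int]], horizon: int):
    load = [0.0] * horizon
    starts = []
    for power, duration, earliest, latest in jobs:
        best_start, best_peak = None, float("inf")
        for s in range(earliest, min(latest, horizon - duration) + 1):
            window_peak = max(load[t] + power for t in range(s, s + duration))
            peak = max(max(load), window_peak)   # peak of the whole profile if started at s
            if peak < best_peak:
                best_start, best_peak = s, peak
        starts.append(best_start)
        for t in range(best_start, best_start + duration):
            load[t] += power
    return starts, max(load)

jobs = [(2.0, 3, 0, 5), (1.5, 2, 0, 6), (3.0, 2, 2, 4), (1.0, 4, 0, 4)]
starts, peak = schedule(jobs, horizon=10)
print("start times:", starts, "| peak demand:", peak)
```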

  5. Becoming a nurse faculty leader: doing your homework to minimize risk taking.

    PubMed

    Pearsall, Catherine; Pardue, Karen T; Horton-Deutsch, Sara; Young, Patricia K; Halstead, Judith; Nelson, Kristine A; Morales, Mary Lou; Zungolo, Eileen

    2014-01-01

    Risk taking is an important aspect of academic leadership; yet, how does taking risks shape leadership development, and what are the practices of risk taking in nurse faculty leaders? This interpretative phenomenological study examines the meaning and experience of risk taking among formal and informal nurse faculty leaders. The theme of doing your homework is generated through in-depth hermeneutic analysis of 14 interview texts and 2 focus group narratives. The practice of doing one's homework is captured in weighing costs and benefits, learning the context, and cultivating relationships. This study develops an evidence base for incorporating ways of doing one's homework into leadership development activities at a time when there is a tremendous need for nurse leaders in academic settings. Examining the practices of doing one's homework to minimize risk as a part of leadership development provides a foundation for cultivating nurse leaders who, in turn, are able to support and build leadership capacity in others. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Scheduling Non-Preemptible Jobs to Minimize Peak Demand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yaw, Sean; Mumey, Brendan

    Our paper examines an important problem in smart grid energy scheduling: peaks in power demand are proportionally more expensive to generate and provision for. The issue is exacerbated in local microgrids that do not benefit from the aggregate smoothing experienced by large grids. Demand-side scheduling can reduce these peaks by taking advantage of the fact that there is often flexibility in job start times. We focus attention on the case where the jobs are non-preemptible, meaning that once started, they run to completion. The associated optimization problem is called the peak demand minimization problem, and has been previously shown to be NP-hard. Our results include an optimal fixed-parameter tractable algorithm, a polynomial-time approximation algorithm, and an effective heuristic that can also be used in an online setting of the problem. Simulation results show that these methods can reduce peak demand by up to 50% versus on-demand scheduling for household power jobs.

  7. Stable Local Volatility Calibration Using Kernel Splines

    NASA Astrophysics Data System (ADS)

    Coleman, Thomas F.; Li, Yuying; Wang, Cheng

    2010-09-01

    We propose an optimization formulation using the L1 norm to ensure accuracy and stability in calibrating a local volatility function for option pricing. Using a regularization parameter, the proposed objective function balances the calibration accuracy with the model complexity. Motivated by support vector machine learning, the unknown local volatility function is represented by a kernel function generating splines, and the model complexity is controlled by minimizing the 1-norm of the kernel coefficient vector. In the context of support vector regression for function estimation based on a finite set of observations, this corresponds to minimizing the number of support vectors for predictability. We illustrate the ability of the proposed approach to reconstruct the local volatility function in a synthetic market. In addition, based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is simple and resembles the observed implied volatility surface in shape. Stability is illustrated by calibrating local volatility functions using market option data from different dates.
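    The sketch below illustrates only the general idea of 1-norm control of kernel coefficients, not the paper's calibration pipeline: a synthetic curve is fit with a Gaussian-kernel expansion whose coefficients are shrunk by an L1 penalty (sklearn's Lasso standing in for the bespoke optimizer), so most coefficients end up exactly zero. Bandwidth, penalty weight, and the synthetic data are arbitrary.

```python
# General idea only, not the paper's calibration pipeline: fit a synthetic curve with a
# Gaussian-kernel expansion whose coefficients are shrunk by an L1 penalty (sklearn's
# Lasso standing in for the bespoke optimizer), so most kernel coefficients become zero.
# Bandwidth, penalty weight, and the data are arbitrary choices.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 60))
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(60)   # noisy observations

# Kernel (design) matrix: one Gaussian bump centred at every observation point.
bandwidth = 0.1
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * bandwidth ** 2))

model = Lasso(alpha=1e-3, fit_intercept=True, max_iter=50_000).fit(K, y)
print("non-zero kernel coefficients:", np.count_nonzero(model.coef_), "of", len(x))
```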

  8. Applying linear programming to estimate fluxes in ecosystems or food webs: An example from the herpetological assemblage of the freshwater Everglades

    USGS Publications Warehouse

    Diffendorfer, James E.; Richards, Paul M.; Dalrymple, George H.; DeAngelis, Donald L.

    2001-01-01

    We present the application of Linear Programming for estimating biomass fluxes in ecosystem and food web models. We use the herpetological assemblage of the Everglades as an example. We developed food web structures for three common Everglades freshwater habitat types: marsh, prairie, and upland. We obtained a first estimate of the fluxes using field data, literature estimates, and professional judgment. Linear programming was used to obtain a consistent and better estimate of the set of fluxes, while maintaining mass balance and minimizing deviations from point estimates. The results support the view that the Everglades is a spatially heterogeneous system, with changing patterns of energy flux, species composition, and biomasses across the habitat types. We show that a food web/ecosystem perspective, combined with Linear Programming, is a robust method for describing food webs and ecosystems that requires minimal data, produces useful post-solution analyses, and generates hypotheses regarding the structure of energy flow in the system.
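    A minimal sketch of the formulation described above, with invented numbers rather than the Everglades web: fluxes are chosen to satisfy a steady-state mass balance while minimizing the sum of absolute deviations from field-based point estimates, linearized with slack variables so the problem remains a linear program.

```python
# Minimal sketch of the formulation (numbers invented, not the Everglades web): choose
# fluxes satisfying a steady-state mass balance while minimizing the total absolute
# deviation from point estimates, linearized with slack variables so it stays an LP.
import numpy as np
from scipy.optimize import linprog

f0 = np.array([10.0, 4.0, 6.5])      # point estimates: inflow, outflow A, outflow B
n = len(f0)

# Variables: [f1, f2, f3, d1, d2, d3] with d_i >= |f_i - f0_i|.
c = np.concatenate([np.zeros(n), np.ones(n)])          # minimize the sum of deviations

A_eq = np.array([[1.0, -1.0, -1.0, 0.0, 0.0, 0.0]])    # mass balance: f1 - f2 - f3 = 0
b_eq = np.array([0.0])

#  f_i - d_i <= f0_i  and  -f_i - d_i <= -f0_i  together encode d_i >= |f_i - f0_i|.
A_ub = np.vstack([np.hstack([np.eye(n), -np.eye(n)]),
                  np.hstack([-np.eye(n), -np.eye(n)])])
b_ub = np.concatenate([f0, -f0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (2 * n), method="highs")
print("balanced fluxes:", res.x[:n], "| total deviation:", round(res.fun, 3))
```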

  9. First-order kinetic gas generation model parameters for wet landfills.

    PubMed

    Faour, Ayman A; Reinhart, Debra R; You, Huaxin

    2007-01-01

    Landfill gas collection data from wet landfill cells were analyzed and first-order gas generation model parameters were estimated for the US EPA landfill gas emissions model (LandGEM). Parameters were determined through statistical comparison of predicted and actual gas collection. The US EPA LandGEM model appeared to fit the data well, provided it was preceded by a lag phase, which on average was 1.5 years. The first-order reaction rate constant, k, and the methane generation potential, L(o), were estimated for a set of landfills with short-term waste placement and long-term gas collection data. Mean and 95% confidence parameter estimates for these data sets were found using mixed-effects model regression followed by bootstrap analysis. The mean values for the specific methane volume produced during the lag phase (V(sto)), L(o), and k were 33 m(3)/Megagrams (Mg), 76 m(3)/Mg, and 0.28 year(-1), respectively. Parameters were also estimated for three full scale wet landfills where waste was placed over many years. The k and L(o) values estimated for these landfills were 0.21 year(-1) and 115 m(3)/Mg, 0.11 year(-1) and 95 m(3)/Mg, and 0.12 year(-1) and 87 m(3)/Mg, respectively. A group of data points from wet landfill cells with short-term data were also analyzed. A conservative set of parameter estimates was suggested based on the upper 95% confidence interval parameters as a k of 0.3 year(-1) and an L(o) of 100 m(3)/Mg if design is optimized and the lag is minimized.
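    For reference, the sketch below evaluates the first-order decay form underlying LandGEM with the mean parameter values reported above (k = 0.28 year(-1), L(o) = 76 m(3)/Mg, lag = 1.5 years) for a single batch of waste; the waste mass is a made-up example value, and real LandGEM runs sum this expression over all yearly (or sub-yearly) waste placements.

```python
# First-order decay form underlying LandGEM, evaluated with the mean parameters reported
# above (k = 0.28 1/yr, Lo = 76 m^3/Mg, lag = 1.5 yr) for a single batch of waste placed
# at t = 0. The waste mass is a made-up example; real LandGEM runs sum this expression
# over all yearly (or sub-yearly) waste placements.
import numpy as np

def methane_rate(t_years, waste_mass_Mg, k=0.28, Lo=76.0, lag=1.5):
    """Methane generation rate (m^3/yr) for one waste batch, zero during the lag phase."""
    t = np.asarray(t_years, dtype=float)
    rate = k * Lo * waste_mass_Mg * np.exp(-k * (t - lag))
    return np.where(t < lag, 0.0, rate)

t = np.arange(0, 21)                       # years after placement
q = methane_rate(t, waste_mass_Mg=50_000)
print("peak annual generation (m^3/yr):", round(float(q.max())))
print("generation in year 20 (m^3/yr):", round(float(q[-1])))
```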

  10. A minimization method on the basis of embedding the feasible set and the epigraph

    NASA Astrophysics Data System (ADS)

    Zabotin, I. Ya; Shulgina, O. N.; Yarullin, R. S.

    2016-11-01

    We propose a conditional minimization method for convex nonsmooth functions which belongs to the class of cutting-plane methods. While constructing iteration points, the feasible set and the epigraph of the objective function are approximated by polyhedral sets, so the auxiliary problems used to construct iteration points are linear programming problems. During the optimization process, the sets approximating the epigraph can be updated; these updates are performed by periodically dropping the cutting planes that form the embedding sets. Convergence of the proposed method is proved, and some realizations of the method are discussed.

  11. Identification of boiler inlet transfer functions and estimation of system parameters

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1972-01-01

    An iterative computer method is described for identifying boiler transfer functions using frequency response data. An objective penalized performance measure and a nonlinear minimization technique are used to cause the locus of points generated by a transfer function to resemble the locus of points obtained from frequency response measurements. Different transfer functions can be tried until a satisfactory empirical transfer function of the system is found. To illustrate the method, some examples and some results from a study of a set of data consisting of measurements of the inlet impedance of a single tube forced flow boiler with inserts are given.
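    The sketch below illustrates the general approach of fitting transfer-function parameters to frequency-response data by nonlinear least squares on the real and imaginary parts of the response locus; it uses a plain (unpenalized) objective and synthetic first-order-plus-dead-time "measurements", so it is not the paper's penalized performance measure or its boiler data.

```python
# Hedged sketch of the general idea: fit transfer-function parameters so the model's
# frequency-response locus matches measured points, via nonlinear least squares on the
# real and imaginary parts. It uses a plain (unpenalized) objective and synthetic
# first-order-plus-dead-time "measurements", not the paper's penalized measure or data.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
w = np.logspace(-2, 0.5, 40)                               # frequencies, rad/s
true_gain, true_tau, true_delay = 2.0, 3.0, 0.5
measured = true_gain * np.exp(-1j * true_delay * w) / (1j * w * true_tau + 1.0)
measured += 0.01 * (rng.standard_normal(len(w)) + 1j * rng.standard_normal(len(w)))

def residuals(p):
    gain, tau, delay = p
    model = gain * np.exp(-1j * delay * w) / (1j * w * tau + 1.0)
    err = model - measured
    return np.concatenate([err.real, err.imag])            # stack real and imaginary parts

fit = least_squares(residuals, x0=[1.0, 2.0, 0.2], bounds=([0, 0, 0], [10, 20, 5]))
print("estimated gain, time constant, dead time:", fit.x.round(3))
```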

  12. Design of a bio-inspired controller for dynamic soaring in a simulated unmanned aerial vehicle.

    PubMed

    Barate, Renaud; Doncieux, Stéphane; Meyer, Jean-Arcady

    2006-09-01

    This paper is inspired by the way birds such as albatrosses are able to exploit wind gradients at the surface of the ocean for staying aloft for very long periods while minimizing their energy expenditure. The corresponding behaviour has been partially reproduced here via a set of Takagi-Sugeno-Kang fuzzy rules controlling a simulated glider. First, the rules were hand-designed. Then, they were optimized with an evolutionary algorithm that improved their efficiency at coping with challenging conditions. Finally, the robustness properties of the controller generated were assessed with a view to its applicability to a real platform.

  13. Risk factors for Apgar score using artificial neural networks.

    PubMed

    Ibrahim, Doaa; Frize, Monique; Walker, Robin C

    2006-01-01

    Artificial Neural Networks (ANNs) have been used in identifying the risk factors for many medical outcomes. In this paper, the risk factors for a low Apgar score are presented. This is the first time, to our knowledge, that ANNs are used for Apgar score prediction. The medical domain of interest is the perinatal database provided by the Perinatal Partnership Program of Eastern and Southeastern Ontario (PPPESO). The ability of feedforward back-propagation ANNs to generate a strong predictive model with the most influential variables is tested. Finally, minimal sets of variables (risk factors) that are important in predicting Apgar score outcome without degrading the ANN performance are identified.

  14. Systems biology perspectives on minimal and simpler cells.

    PubMed

    Xavier, Joana C; Patil, Kiran Raosaheb; Rocha, Isabel

    2014-09-01

    The concept of the minimal cell has fascinated scientists for a long time, from both fundamental and applied points of view. This broad concept encompasses extreme reductions of genomes, the last universal common ancestor (LUCA), the creation of semiartificial cells, and the design of protocells and chassis cells. Here we review these different areas of research and identify common and complementary aspects of each one. We focus on systems biology, a discipline that is greatly facilitating the classical top-down and bottom-up approaches toward minimal cells. In addition, we also review the so-called middle-out approach and its contributions to the field with mathematical and computational models. Owing to the advances in genomics technologies, much of the work in this area has been centered on minimal genomes, or rather minimal gene sets, required to sustain life. Nevertheless, a fundamental expansion has been taking place in the last few years wherein the minimal gene set is viewed as a backbone of a more complex system. Complementing genomics, progress is being made in understanding the system-wide properties at the levels of the transcriptome, proteome, and metabolome. Network modeling approaches are enabling the integration of these different omics data sets toward an understanding of the complex molecular pathways connecting genotype to phenotype. We review key concepts central to the mapping and modeling of this complexity, which is at the heart of research on minimal cells. Finally, we discuss the distinction between minimizing the number of cellular components and minimizing cellular complexity, toward an improved understanding and utilization of minimal and simpler cells. Copyright © 2014, American Society for Microbiology. All Rights Reserved.

  15. Increasingly minimal bias routing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bataineh, Abdulla; Court, Thomas; Roweth, Duncan

    2017-02-21

    A system and algorithm are configured to generate diversity at the traffic source so that packets are uniformly distributed over all of the available paths, while increasing the likelihood of taking a minimal path with each hop the packet takes. This is achieved by configuring routing biases so as to prefer non-minimal paths at the injection point but to increasingly prefer minimal paths as the packet proceeds, an approach referred to herein as Increasing Minimal Bias (IMB).
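    The toy sketch below captures only the biasing idea in software terms (the actual mechanism lives in hardware routing tables): at each hop a minimal or non-minimal candidate is chosen with a probability of going minimal that grows with the number of hops already taken. The bias schedule is made up.

```python
# Toy illustration of the biasing idea only (the real mechanism is in hardware routing
# tables): at each hop, pick a minimal or non-minimal candidate with a probability of
# going minimal that grows with the hops already taken. The bias schedule is made up.
import random

def choose_path(hops_taken: int, max_hops: int = 6) -> str:
    """Return 'minimal' or 'non-minimal' with an increasing bias toward minimal."""
    p_minimal = min(1.0, 0.25 + 0.75 * hops_taken / max_hops)
    return "minimal" if random.random() < p_minimal else "non-minimal"

random.seed(7)
for hop in range(6):
    print(f"hop {hop}: {choose_path(hop)}")
```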

  16. Efficient Z gates for quantum computing

    NASA Astrophysics Data System (ADS)

    McKay, David C.; Wood, Christopher J.; Sheldon, Sarah; Chow, Jerry M.; Gambetta, Jay M.

    2017-08-01

    For superconducting qubits, microwave pulses drive rotations around the Bloch sphere. The phase of these drives can be used to generate zero-duration arbitrary virtual Z gates, which, combined with two Xπ/2 gates, can generate any SU(2) gate. Here we show how to best utilize these virtual Z gates to both improve algorithms and correct pulse errors. We perform randomized benchmarking using a Clifford set of Hadamard and Z gates and show that the error per Clifford is reduced versus a set consisting of standard finite-duration X and Y gates. Z gates can correct unitary rotation errors for weakly anharmonic qubits as an alternative to pulse-shaping techniques such as derivative removal by adiabatic gate (DRAG). We investigate leakage and show that a combination of DRAG pulse shaping to minimize leakage and Z gates to correct rotation errors realizes a 13.3 ns Xπ/2 gate characterized by low error [1.95(3) × 10⁻⁴] and low leakage [3.1(6) × 10⁻⁶]. Ultimately leakage is limited by the finite temperature of the qubit, but this limit is two orders of magnitude smaller than pulse errors due to decoherence.
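    As a numeric sanity check of the claim that virtual Z gates plus two Xπ/2 pulses suffice for any SU(2) gate, the sketch below verifies that, with the convention Z(a) = diag(1, e^(ia)), a generic single-qubit gate U(θ, φ, λ) equals Z(φ+π)·Xπ/2·Z(θ+π)·Xπ/2·Z(λ) up to a global phase; the conventions and angle bookkeeping are assumptions of this sketch rather than a quotation of the paper.

```python
# Numeric sanity check (conventions and angle bookkeeping are this sketch's assumptions,
# not a quotation of the paper): with virtual Z gates Z(a) = diag(1, e^{ia}) and a fixed
# physical X(pi/2) pulse, a generic single-qubit gate U(theta, phi, lam) equals
# Z(phi + pi) . X(pi/2) . Z(theta + pi) . X(pi/2) . Z(lam) up to a global phase.
import numpy as np

X90 = np.array([[1, -1j], [-1j, 1]]) / np.sqrt(2)        # the only physical pulse needed

def Z(alpha):                                            # zero-duration virtual Z (frame change)
    return np.diag([1.0, np.exp(1j * alpha)])

def U3(theta, phi, lam):                                 # generic single-qubit gate
    return np.array([[np.cos(theta / 2), -np.exp(1j * lam) * np.sin(theta / 2)],
                     [np.exp(1j * phi) * np.sin(theta / 2),
                      np.exp(1j * (phi + lam)) * np.cos(theta / 2)]])

rng = np.random.default_rng(0)
for _ in range(5):
    theta, phi, lam = rng.uniform(0, 2 * np.pi, 3)
    target = U3(theta, phi, lam)
    built = Z(phi + np.pi) @ X90 @ Z(theta + np.pi) @ X90 @ Z(lam)
    # Equal up to global phase iff |tr(target^dagger @ built)| == 2.
    print(round(abs(np.trace(target.conj().T @ built)), 10))
```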

  17. 40 CFR 262.27 - Waste minimization certification.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Title 40, Protection of Environment, Vol. 26 (2011-07-01): Waste minimization certification, Section 262.27. ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), SOLID WASTES (CONTINUED), STANDARDS APPLICABLE TO GENERATORS OF HAZARDOUS WASTE, The Manifest, § 262.27 Waste minimization...

  18. 40 CFR 262.27 - Waste minimization certification.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    Title 40, Protection of Environment, Vol. 27 (2012-07-01): Waste minimization certification, Section 262.27. ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), SOLID WASTES (CONTINUED), STANDARDS APPLICABLE TO GENERATORS OF HAZARDOUS WASTE, The Manifest, § 262.27 Waste minimization...

  19. 40 CFR 262.27 - Waste minimization certification.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 40, Protection of Environment, Vol. 25 (2010-07-01): Waste minimization certification, Section 262.27. ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), SOLID WASTES (CONTINUED), STANDARDS APPLICABLE TO GENERATORS OF HAZARDOUS WASTE, The Manifest, § 262.27 Waste minimization...

  20. 40 CFR 262.27 - Waste minimization certification.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    Title 40, Protection of Environment, Vol. 27 (2013-07-01): Waste minimization certification, Section 262.27. ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), SOLID WASTES (CONTINUED), STANDARDS APPLICABLE TO GENERATORS OF HAZARDOUS WASTE, The Manifest, § 262.27 Waste minimization...

  1. 40 CFR 262.27 - Waste minimization certification.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    Title 40, Protection of Environment, Vol. 26 (2014-07-01): Waste minimization certification, Section 262.27. ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), SOLID WASTES (CONTINUED), STANDARDS APPLICABLE TO GENERATORS OF HAZARDOUS WASTE, The Manifest, § 262.27 Waste minimization...

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, C. Shan; Hayworth, Kenneth J.; Lu, Zhiyuan

    Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) can automatically generate 3D images with superior z-axis resolution, yielding data that needs minimal image registration and related post-processing. Obstacles blocking wider adoption of FIB-SEM include slow imaging speed and lack of long-term system stability, which caps the maximum possible acquisition volume. Here, we present techniques that accelerate image acquisition while greatly improving FIB-SEM reliability, allowing the system to operate for months and generating continuously imaged volumes > 10⁶ µm³. These volumes are large enough for connectomics, where the excellent z resolution can help in tracing of small neuronal processes and accelerate the tedious and time-consuming human proofreading effort. Even higher resolution can be achieved on smaller volumes. We present example data sets from mammalian neural tissue, Drosophila brain, and Chlamydomonas reinhardtii to illustrate the power of this novel high-resolution technique to address questions in both connectomics and cell biology.

  3. Third generation sfermion decays into Z and W gauge bosons: Full one-loop analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arhrib, Abdesslam; LPHEA, Departement de Physique, Faculte des Sciences-Semlalia, B.P. 2390 Marrakech; Benbrik, Rachid

    2005-05-01

    The complete one-loop radiative corrections to the decays of third-generation scalar fermions into the gauge bosons Z and W± are considered. We focus on f̃_2 → Z f̃_1 and f̃_i → W± f̃_j′, with f, f′ = t, b. We include SUSY-QCD, QED, and full electroweak corrections. It is found that the electroweak corrections can be of the same order as the SUSY-QCD corrections. The two sets of corrections interfere destructively in some regions of parameter space. The full one-loop correction can reach 10% in some supergravity scenarios, while in a model-independent analysis such as the general minimal supersymmetric standard model, the one-loop correction can reach 20% for large tan β and large trilinear soft-breaking terms A_b.

  4. NGSANE: a lightweight production informatics framework for high-throughput data analysis.

    PubMed

    Buske, Fabian A; French, Hugh J; Smith, Martin A; Clark, Susan J; Bauer, Denis C

    2014-05-15

    The initial steps in the analysis of next-generation sequencing data can be automated by way of software 'pipelines'. However, individual components depreciate rapidly because of the evolving technology and analysis methods, often rendering entire versions of production informatics pipelines obsolete. Constructing pipelines from Linux bash commands enables the use of hot swappable modular components as opposed to the more rigid program call wrapping by higher level languages, as implemented in comparable published pipelining systems. Here we present Next Generation Sequencing ANalysis for Enterprises (NGSANE), a Linux-based, high-performance-computing-enabled framework that minimizes overhead for set up and processing of new projects, yet maintains full flexibility of custom scripting when processing raw sequence data. Ngsane is implemented in bash and publicly available under BSD (3-Clause) licence via GitHub at https://github.com/BauerLab/ngsane. Denis.Bauer@csiro.au Supplementary data are available at Bioinformatics online.

  5. The Creative task Creator: a tool for the generation of customized, Web-based creativity tasks.

    PubMed

    Pretz, Jean E; Link, John A

    2008-11-01

    This article presents a Web-based tool for the creation of divergent-thinking and open-ended creativity tasks. A Java program generates HTML forms with PHP scripting that run an Alternate Uses Task and/or open-ended response items. Researchers may specify their own instructions, objects, and time limits, or use default settings. Participants can also be prompted to select their best responses to the Alternate Uses Task (Silvia et al., 2008). Minimal programming knowledge is required. The program runs on any server, and responses are recorded in a standard MySQL database. Responses can be scored using the consensual assessment technique (Amabile, 1996) or Torrance's (1998) traditional scoring method. Adoption of this Web-based tool should facilitate creativity research across cultures and access to eminent creators. The Creative Task Creator may be downloaded from the Psychonomic Society's Archive of Norms, Stimuli, and Data, www.psychonomic.org/archive.

  6. Resolving task rule incongruence during task switching by competitor rule suppression.

    PubMed

    Meiran, Nachshon; Hsieh, Shulan; Dimov, Eduard

    2010-07-01

    Task switching requires maintaining readiness to execute any task of a given set of tasks. However, when tasks switch, the readiness to execute the now-irrelevant task generates interference, as seen in the task rule incongruence effect. Overcoming such interference requires fine-tuned inhibition that impairs task readiness only minimally. In an experiment involving 2 object classification tasks and 2 location classification tasks, the authors show that irrelevant task rules that generate response conflicts are inhibited. This competitor rule suppression (CRS) is seen in response slowing in subsequent trials, when the competing rules become relevant. CRS is shown to operate on specific rules without affecting similar rules. CRS and backward inhibition, which is another inhibitory phenomenon, produced additive effects on reaction time, suggesting their mutual independence. Implications for current formal theories of task switching as well as for conflict monitoring theories are discussed. (c) 2010 APA, all rights reserved

  7. Stacking the odds for Golgi cisternal maturation

    PubMed Central

    Mani, Somya; Thattai, Mukund

    2016-01-01

    What is the minimal set of cell-biological ingredients needed to generate a Golgi apparatus? The compositions of eukaryotic organelles arise through a process of molecular exchange via vesicle traffic. Here we statistically sample tens of thousands of homeostatic vesicle traffic networks generated by realistic molecular rules governing vesicle budding and fusion. Remarkably, the plurality of these networks contain chains of compartments that undergo creation, compositional maturation, and dissipation, coupled by molecular recycling along retrograde vesicles. This motif precisely matches the cisternal maturation model of the Golgi, which was developed to explain many observed aspects of the eukaryotic secretory pathway. In our analysis cisternal maturation is a robust consequence of vesicle traffic homeostasis, independent of the underlying details of molecular interactions or spatial stacking. This architecture may have been exapted rather than selected for its role in the secretion of large cargo. DOI: http://dx.doi.org/10.7554/eLife.16231.001 PMID:27542195

  8. Enhanced FIB-SEM systems for large-volume 3D imaging.

    PubMed

    Xu, C Shan; Hayworth, Kenneth J; Lu, Zhiyuan; Grob, Patricia; Hassan, Ahmed M; García-Cerdán, José G; Niyogi, Krishna K; Nogales, Eva; Weinberg, Richard J; Hess, Harald F

    2017-05-13

    Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) can automatically generate 3D images with superior z-axis resolution, yielding data that needs minimal image registration and related post-processing. Obstacles blocking wider adoption of FIB-SEM include slow imaging speed and lack of long-term system stability, which caps the maximum possible acquisition volume. Here, we present techniques that accelerate image acquisition while greatly improving FIB-SEM reliability, allowing the system to operate for months and generating continuously imaged volumes > 10⁶ µm³. These volumes are large enough for connectomics, where the excellent z resolution can help in tracing of small neuronal processes and accelerate the tedious and time-consuming human proofreading effort. Even higher resolution can be achieved on smaller volumes. We present example data sets from mammalian neural tissue, Drosophila brain, and Chlamydomonas reinhardtii to illustrate the power of this novel high-resolution technique to address questions in both connectomics and cell biology.

  9. Gardenification of tropical conserved wildlands: Multitasking, multicropping, and multiusers

    PubMed Central

    Janzen, Daniel

    1999-01-01

    Tropical wildlands and their biodiversity will survive in perpetuity only through their integration into human society. One protocol for integration is to explicitly recognize conserved tropical wildlands as wildland gardens. A major way to facilitate the generation of goods and services by a wildland garden is to generate a public-domain Yellow Pages for its organisms. Such a Yellow Pages is part and parcel of high-quality search-and-delivery from wildland gardens. And, as they and their organisms become better understood, they become higher quality biodiversity storage devices than are large freezers. One obstacle to wildland garden survival is that specific goods and services, such as biodiversity prospecting, lack development protocols that automatically shunt the profits back to the source. Other obstacles are that environmental services contracts have the unappealing trait of asking for the payment of environmental credit card bills and implying delegation of centralized governmental authority to decentralized social structures. Many of the potential conflicts associated with wildland gardens may be reduced by recognizing two sets of social rules for perpetuating biodiversity and ecosystems, one set for the wildland garden and one set for the agroscape. In the former, maintaining wildland biodiversity and ecosystem survival in perpetuity through minimally damaging use is paramount, while in the agroscape, wild biodiversity and ecosystems are tools for a healthy and productive agroecosystem, and the loss of much of the original is acceptable. PMID:10339529

  10. A fuzzy neural network for intelligent data processing

    NASA Astrophysics Data System (ADS)

    Xie, Wei; Chu, Feng; Wang, Lipo; Lim, Eng Thiam

    2005-03-01

    In this paper, we describe an incrementally generated fuzzy neural network (FNN) for intelligent data processing. This FNN combines the features of initial fuzzy model self-generation, fast input selection, partition validation, parameter optimization and rule-base simplification. A small FNN is created from scratch -- there is no need to specify the initial network architecture, initial membership functions, or initial weights. Fuzzy IF-THEN rules are constantly combined and pruned to minimize the size of the network while maintaining accuracy; irrelevant inputs are detected and deleted, and membership functions and network weights are trained with a gradient descent algorithm, i.e., error backpropagation. Experimental studies on synthesized data sets demonstrate that the proposed Fuzzy Neural Network is able to achieve accuracy comparable to or higher than both a feedforward crisp neural network, i.e., NeuroRule, and a decision tree, i.e., C4.5, with more compact rule bases for most of the data sets used in our experiments. The FNN has achieved outstanding results for cancer classification based on microarray data. Excellent classification results are shown for the Small Round Blue Cell Tumors (SRBCTs) data set. Compared with other published methods, we have used far fewer genes to achieve perfect classification, which will help researchers directly focus their attention on some specific genes and may lead to the discovery of underlying causes of cancer development and of new drugs.

  11. A depth-first search algorithm to compute elementary flux modes by linear programming

    PubMed Central

    2014-01-01

    Background The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is near impossible. Even for moderately-sized models (<400 reactions), existing approaches based on the Double Description method must iterate through a large number of combinatorial candidates, thus imposing an immense processor and memory demand. Results Based on an alternative elementarity test, we developed a depth-first search algorithm using linear programming (LP) to enumerate EFMs in an exhaustive fashion. Constraints can be introduced to directly generate a subset of EFMs satisfying the set of constraints. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment into computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. Conclusions The speed of the algorithm was comparable to efmtool, a mainstream Double Description method, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints. PMID:25074068

  12. Fermion hierarchy from sfermion anarchy

    DOE PAGES

    Altmannshofer, Wolfgang; Frugiuele, Claudia; Harnik, Roni

    2014-12-31

    We present a framework to generate the hierarchical flavor structure of Standard Model quarks and leptons from loops of superpartners. The simplest model consists of the minimal supersymmetric standard model with tree level Yukawa couplings for the third generation only and anarchic squark and slepton mass matrices. Agreement with constraints from low energy flavor observables, in particular Kaon mixing, is obtained for supersymmetric particles with masses at the PeV scale or above. In our framework both the second and the first generation fermion masses are generated at 1-loop. Despite this, a novel mechanism generates a hierarchy among the first and second generations without imposing a symmetry or small parameters. A second-to-first generation mass ratio of order 100 is typical. The minimal supersymmetric standard model thus includes all the necessary ingredients to realize a fermion spectrum that is qualitatively similar to observation, with hierarchical masses and mixing. The minimal framework produces only a few quantitative discrepancies with observation, most notably that the muon mass is too low. Furthermore, we discuss simple modifications which resolve this and also investigate the compatibility of our model with gauge and Yukawa coupling unification.

  13. Using Adaptive Turnaround Documents to Electronically Acquire Structured Data in Clinical Settings

    PubMed Central

    Biondich, Paul G.; Anand, Vibha; Downs, Stephen M.; McDonald, Clement J.

    2003-01-01

    We developed adaptive turnaround documents (ATDs) to address longstanding challenges inherent in acquiring structured data at the point of care. These computer-generated paper forms both request and receive patient tailored information specifically for electronic storage. In our pilot, we evaluated the usability, accuracy, and user acceptance of an ATD designed to enrich a pediatric preventative care decision support system. The system had an overall digit recognition rate of 98.6% (95% CI: 98.3 to 98.9) and a marksense accuracy of 99.2% (95% CI: 99.1 to 99.3). More importantly, the system reliably extracted all data from 56.6% (95% CI: 53.3 to 59.9) of our pilot forms without the need for a verification step. These results translate to a minimal workflow burden to end users. This suggests that ATDs can serve as an inexpensive, workflow-sensitive means of structured data acquisition in the clinical setting. PMID:14728139

  14. A Temporal Pattern Mining Approach for Classifying Electronic Health Record Data

    PubMed Central

    Batal, Iyad; Valizadegan, Hamed; Cooper, Gregory F.; Hauskrecht, Milos

    2013-01-01

    We study the problem of learning classification models from complex multivariate temporal data encountered in electronic health record systems. The challenge is to define a good set of features that are able to represent well the temporal aspect of the data. Our method relies on temporal abstractions and temporal pattern mining to extract the classification features. Temporal pattern mining usually returns a large number of temporal patterns, most of which may be irrelevant to the classification task. To address this problem, we present the Minimal Predictive Temporal Patterns framework to generate a small set of predictive and non-spurious patterns. We apply our approach to the real-world clinical task of predicting patients who are at risk of developing heparin induced thrombocytopenia. The results demonstrate the benefit of our approach in efficiently learning accurate classifiers, which is a key step for developing intelligent clinical monitoring systems. PMID:25309815

  15. Parametric embedding for class visualization.

    PubMed

    Iwata, Tomoharu; Saito, Kazumi; Ueda, Naonori; Stromsten, Sean; Griffiths, Thomas L; Tenenbaum, Joshua B

    2007-09-01

    We propose a new method, parametric embedding (PE), that embeds objects with the class structure into a low-dimensional visualization space. PE takes as input a set of class conditional probabilities for given data points and tries to preserve the structure in an embedding space by minimizing a sum of Kullback-Leibler divergences, under the assumption that samples are generated by a gaussian mixture with equal covariances in the embedding space. PE has many potential uses depending on the source of the input data, providing insight into the classifier's behavior in supervised, semisupervised, and unsupervised settings. The PE algorithm has a computational advantage over conventional embedding methods based on pairwise object relations since its complexity scales with the product of the number of objects and the number of classes. We demonstrate PE by visualizing supervised categorization of Web pages, semisupervised categorization of digits, and the relations of words and latent topics found by an unsupervised algorithm, latent Dirichlet allocation.
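
    A toy numerical sketch of the objective just described: given class-conditional probabilities p(c|x_i), place objects and class centers in a 2-D space so that Gaussian posteriors q(c|x_i) with unit covariance match p(c|x_i) in the sum-of-Kullback-Leibler sense. The random inputs and the generic L-BFGS optimizer are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # toy input: conditional class probabilities p(c|x_i) for n objects and k classes
    rng = np.random.default_rng(1)
    n, k, d = 30, 3, 2                      # objects, classes, embedding dimension
    p = rng.dirichlet(np.ones(k), size=n)   # stand-in for a classifier's outputs

    def unpack(v):
        xs = v[:n * d].reshape(n, d)        # embedded objects
        cs = v[n * d:].reshape(k, d)        # embedded class centers
        return xs, cs

    def objective(v):
        xs, cs = unpack(v)
        # q(c|x_i) from unit-covariance Gaussians around each class center
        sq = ((xs[:, None, :] - cs[None, :, :]) ** 2).sum(-1)
        logq = -0.5 * sq
        logq -= logq.max(axis=1, keepdims=True)
        q = np.exp(logq)
        q /= q.sum(axis=1, keepdims=True)
        # sum of KL(p || q) over objects
        return np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)))

    v0 = rng.normal(size=(n + k) * d)
    res = minimize(objective, v0, method="L-BFGS-B")
    xs, cs = unpack(res.x)
    print("final KL objective:", res.fun)
    ```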

  16. Method and apparatus for generating motor current spectra to enhance motor system fault detection

    DOEpatents

    Linehan, Daniel J.; Bunch, Stanley L.; Lyster, Carl T.

    1995-01-01

    A method and circuitry for sampling periodic amplitude modulations in a nonstationary periodic carrier wave to determine frequencies in the amplitude modulations. The method and circuit are described in terms of an improved motor current signature analysis. The method ensures that the sampled data set contains an exact whole number of carrier wave cycles by defining the rate at which samples of motor current data are collected. The circuitry ensures that a sampled data set containing stationary carrier waves is recreated from the analog motor current signal containing nonstationary carrier waves by conditioning the actual sampling rate to adjust with the frequency variations in the carrier wave. After the sampled data is transformed to the frequency domain via the Discrete Fourier Transform, the frequency distribution in the discrete spectra of those components due to the carrier wave and its harmonics will be minimized so that signals of interest are more easily analyzed.
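
    The sketch below illustrates the central idea in NumPy: choose the sampling rate so that the acquisition window contains an exact whole number of carrier cycles, which places the carrier exactly on a DFT bin and keeps its spectral leakage away from the modulation sidebands of interest. The line frequency, modulation frequency and modulation depth are made-up stand-ins, not values from the patent.

    ```python
    import numpy as np

    # choose a sampling rate so the acquisition window holds a whole number of carrier cycles
    f_carrier = 60.0          # carrier (line) frequency; tracked continuously in the real system
    n_samples = 4096          # power of two for the FFT
    cycles = 120              # exact number of carrier cycles in the window
    window = cycles / f_carrier            # acquisition time (s)
    fs = n_samples / window                # conditioned sampling rate (2048 Hz here)

    t = np.arange(n_samples) / fs
    # carrier amplitude-modulated at 7 Hz (a stand-in fault signature)
    i_motor = (1.0 + 0.02 * np.cos(2 * np.pi * 7.0 * t)) * np.cos(2 * np.pi * f_carrier * t)

    spec = np.abs(np.fft.rfft(i_motor)) / n_samples
    freqs = np.fft.rfftfreq(n_samples, d=1 / fs)
    # with an integer number of cycles the carrier (and, with 0.5 Hz bins, the
    # 60 +/- 7 Hz sidebands) fall exactly on DFT bins, so leakage is minimized
    for f in (f_carrier - 7, f_carrier, f_carrier + 7):
        k_bin = np.argmin(np.abs(freqs - f))
        print(f"{freqs[k_bin]:6.1f} Hz : {spec[k_bin]:.4f}")
    ```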

  17. Decision support for operations and maintenance (DSOM) system

    DOEpatents

    Jarrell, Donald B [Kennewick, WA; Meador, Richard J [Richland, WA; Sisk, Daniel R [Richland, WA; Hatley, Darrel D [Kennewick, WA; Brown, Daryl R [Richland, WA; Keibel, Gary R [Richland, WA; Gowri, Krishnan [Richland, WA; Reyes-Spindola, Jorge F [Richland, WA; Adams, Kevin J [San Bruno, CA; Yates, Kenneth R [Lake Oswego, OR; Eschbach, Elizabeth J [Fort Collins, CO; Stratton, Rex C [Richland, WA

    2006-03-21

    A method for minimizing the life cycle cost of processes such as heating a building. The method utilizes sensors to monitor various pieces of equipment used in the process, for example, boilers, turbines, and the like. The method then performs the steps of identifying a set of optimal operating conditions for the process, identifying and measuring parameters necessary to characterize the actual operating condition of the process, validating data generated by measuring those parameters, characterizing the actual condition of the process, identifying an optimal condition corresponding to the actual condition, comparing said optimal condition with the actual condition and identifying variances between the two, and drawing from a set of pre-defined algorithms created using best engineering practices, an explanation of at least one likely source and at least one recommended remedial action for selected variances, and providing said explanation as an output to at least one user.

  18. Catalytic reactor

    DOEpatents

    Aaron, Timothy Mark [East Amherst, NY; Shah, Minish Mahendra [East Amherst, NY; Jibb, Richard John [Amherst, NY

    2009-03-10

    A catalytic reactor is provided with one or more reaction zones each formed of set(s) of reaction tubes containing a catalyst to promote chemical reaction within a feed stream. The reaction tubes are of helical configuration and are arranged in a substantially coaxial relationship to form a coil-like structure. Heat exchangers and steam generators can be formed by similar tube arrangements. In such manner, the reaction zone(s) and hence, the reactor is compact and the pressure drop through components is minimized. The resultant compact form has improved heat transfer characteristics and is far easier to thermally insulate than prior art compact reactor designs. Various chemical reactions are contemplated within such coil-like structures, such as steam methane reforming followed by water-gas shift. The coil-like structures can be housed within annular chambers of a cylindrical housing that also provide flow paths for various heat exchange fluids to heat and cool components.

  19. Bayesian learning of visual chunks by human observers

    PubMed Central

    Orbán, Gergő; Fiser, József; Aslin, Richard N.; Lengyel, Máté

    2008-01-01

    Efficient and versatile processing of any hierarchically structured information requires a learning mechanism that combines lower-level features into higher-level chunks. We investigated this chunking mechanism in humans with a visual pattern-learning paradigm. We developed an ideal learner based on Bayesian model comparison that extracts and stores only those chunks of information that are minimally sufficient to encode a set of visual scenes. Our ideal Bayesian chunk learner not only reproduced the results of a large set of previous empirical findings in the domain of human pattern learning but also made a key prediction that we confirmed experimentally. In accordance with Bayesian learning but contrary to associative learning, human performance was well above chance when pair-wise statistics in the exemplars contained no relevant information. Thus, humans extract chunks from complex visual patterns by generating accurate yet economical representations and not by encoding the full correlational structure of the input. PMID:18268353
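
    As a hedged, toy-scale illustration of the kind of Bayesian model comparison described above, the sketch below scores two hypotheses about a pair of shapes observed across scenes: that they appear independently, or that they form a single chunk appearing as a unit. The Beta-Bernoulli priors, the scene statistics and the restriction to two shapes are simplifications for illustration, not the authors' ideal-learner model.

    ```python
    import numpy as np
    from scipy.special import betaln

    def log_marginal_bernoulli(k, n, a=1.0, b=1.0):
        # log of the integral of theta^k (1-theta)^(n-k) under a Beta(a, b) prior
        return betaln(a + k, b + n - k) - betaln(a, b)

    # toy scenes: presence/absence of two shapes across n observed scenes
    rng = np.random.default_rng(5)
    n = 40
    together = rng.random(n) < 0.5          # here shapes A and B always appear as a pair
    A, B = together.copy(), together.copy()

    # model "independent": separate appearance probabilities for A and B
    log_m_indep = (log_marginal_bernoulli(A.sum(), n) +
                   log_marginal_bernoulli(B.sum(), n))
    # model "chunk": A and B are one unit; scenes with only one of them have zero likelihood
    consistent = np.all(A == B)
    log_m_chunk = log_marginal_bernoulli(A.sum(), n) if consistent else -np.inf

    print("log Bayes factor (chunk vs independent):", log_m_chunk - log_m_indep)
    ```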

  20. Structural and conformational determinants of macrocycle cell permeability.

    PubMed

    Over, Björn; Matsson, Pär; Tyrchan, Christian; Artursson, Per; Doak, Bradley C; Foley, Michael A; Hilgendorf, Constanze; Johnston, Stephen E; Lee, Maurice D; Lewis, Richard J; McCarren, Patrick; Muncipinto, Giovanni; Norinder, Ulf; Perry, Matthew W D; Duvall, Jeremy R; Kihlberg, Jan

    2016-12-01

    Macrocycles are of increasing interest as chemical probes and drugs for intractable targets like protein-protein interactions, but the determinants of their cell permeability and oral absorption are poorly understood. To enable rational design of cell-permeable macrocycles, we generated an extensive data set under consistent experimental conditions for more than 200 non-peptidic, de novo-designed macrocycles from the Broad Institute's diversity-oriented screening collection. This revealed how specific functional groups, substituents and molecular properties impact cell permeability. Analysis of energy-minimized structures for stereo- and regioisomeric sets provided fundamental insight into how dynamic, intramolecular interactions in the 3D conformations of macrocycles may be linked to physicochemical properties and permeability. Combined use of quantitative structure-permeability modeling and the procedure for conformational analysis now, for the first time, provides chemists with a rational approach to design cell-permeable non-peptidic macrocycles with potential for oral absorption.

  1. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF PRINTED LABELS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at sel...

  2. Waste minimization/pollution prevention study of high-priority waste streams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogle, R.B.

    1994-03-01

    Although waste minimization has been practiced by the Metals and Ceramics (M&C) Division in the past, the effort has not been uniform or formalized. To establish the groundwork for continuous improvement, the Division Director initiated a more formalized waste minimization and pollution prevention program. Formalization of the division's pollution prevention efforts in fiscal year (FY) 1993 was initiated by a more concerted effort to determine the status of waste generation from division activities. The goal for this effort was to reduce or minimize the wastes identified as having the greatest impact on human health, the environment, and costs. Two broad categories of division wastes were identified as solid/liquid wastes and those relating to energy use (primarily electricity and steam). This report presents information on the nonradioactive solid and liquid wastes generated by division activities. More specifically, the information presented was generated by teams of M&C staff members empowered by the Division Director to study specific waste streams.

  3. Subsurface and terrain controls on runoff generation in deep soil landscapes

    NASA Astrophysics Data System (ADS)

    Mallard, John; McGlynn, Brian; Richter, Daniel

    2017-04-01

    Our understanding of runoff generation in regions characterized by deep, highly weathered soils is incomplete despite the prevalence of this setting worldwide. To address this, we instrumented a first-order watershed in the Piedmont of South Carolina, USA. The Piedmont region of the United States extends east of the Appalachians from Maryland to Alabama, and is home to some of the most rapid population growth in the country. Regional and local relief is modest, although the landscape is highly dissected and local slope can be quite variable. The region's soils are ancient, deeply weathered, and characterized by sharp changes in hydrologic properties due to concentration of clay in the Bt horizon. Despite a mild climate and consistent precipitation, seasonally variable energy availability and deciduous tree cover create a strong evapotranspiration mediated seasonal hydrologic dynamic: while moist soils and extended stream networks are typical of the late fall through spring, relatively dry soils and contracting stream networks emerge in the summer and early fall. To elucidate the control of the complex vertical and planform structure of this region, as well as the strongly seasonal subsurface hydrology, on runoff generation, we installed a network of nested, shallow groundwater wells across an ephemeral to first-order watershed to continuously measure internal water levels. We also recorded local precipitation and discharge at the outlet of this watershed, a similar adjacent watershed, and in the second to third order downstream watershed. Subsurface water dynamics varied spatially, vertically, and seasonally. Shallow depths and landscape positions with minimal contributing area exhibited flashier dynamics comparable to the stream hydrographs while positions with more contributing area exhibited relatively muted dynamics. Most well positions showed minimal response to precipitation throughout the summer, and even occasionally observed response rarely co-occurred with streamflow generation. Our initial findings suggest that characterizing the terrain of a watershed must be coupled with the subsurface soil hydrology in order to understand spatiotemporal patterns of streamflow generation in regions possessing both complex vertical structure and terrain.

  4. Comprehensive simulation-enhanced training curriculum for an advanced minimally invasive procedure: a randomized controlled trial.

    PubMed

    Zevin, Boris; Dedy, Nicolas J; Bonrath, Esther M; Grantcharov, Teodor P

    2017-05-01

    There is no comprehensive simulation-enhanced training curriculum to address cognitive, psychomotor, and nontechnical skills for an advanced minimally invasive procedure. 1) To develop and provide evidence of validity for a comprehensive simulation-enhanced training (SET) curriculum for an advanced minimally invasive procedure; (2) to demonstrate transfer of acquired psychomotor skills from a simulation laboratory to live porcine model; and (3) to compare training outcomes of SET curriculum group and chief resident group. University. This prospective single-blinded, randomized, controlled trial allocated 20 intermediate-level surgery residents to receive either conventional training (control) or SET curriculum training (intervention). The SET curriculum consisted of cognitive, psychomotor, and nontechnical training modules. Psychomotor skills in a live anesthetized porcine model in the OR was the primary outcome. Knowledge of advanced minimally invasive and bariatric surgery and nontechnical skills in a simulated OR crisis scenario were the secondary outcomes. Residents in the SET curriculum group went on to perform a laparoscopic jejunojejunostomy in the OR. Cognitive, psychomotor, and nontechnical skills of SET curriculum group were also compared to a group of 12 chief surgery residents. SET curriculum group demonstrated superior psychomotor skills in a live porcine model (56 [47-62] versus 44 [38-53], P<.05) and superior nontechnical skills (41 [38-45] versus 31 [24-40], P<.01) compared with conventional training group. SET curriculum group and conventional training group demonstrated equivalent knowledge (14 [12-15] versus 13 [11-15], P = 0.47). SET curriculum group demonstrated equivalent psychomotor skills in the live porcine model and in the OR in a human patient (56 [47-62] versus 63 [61-68]; P = .21). SET curriculum group demonstrated inferior knowledge (13 [11-15] versus 16 [14-16]; P<.05), equivalent psychomotor skill (63 [61-68] versus 68 [62-74]; P = .50), and superior nontechnical skills (41 [38-45] versus 34 [27-35], P<.01) compared with chief resident group. Completion of the SET curriculum resulted in superior training outcomes, compared with conventional surgery training. Implementation of the SET curriculum can standardize training for an advanced minimally invasive procedure and can ensure that comprehensive proficiency milestones are met before exposure to patient care. Copyright © 2017 American Society for Bariatric Surgery. Published by Elsevier Inc. All rights reserved.

  5. Funtools: Fits Users Need Tools for Quick, Quantitative Analysis

    NASA Technical Reports Server (NTRS)

    Mandel, Eric; Brederkamp, Joe (Technical Monitor)

    2001-01-01

    The Funtools project arose out of conversations with astronomers about the decline in their software development efforts over the past decade. A stated reason for this decline is that it takes too much effort to master one of the existing FITS libraries simply in order to write a few analysis programs. This problem is exacerbated by the fact that astronomers typically develop new programs only occasionally, and the long interval between coding efforts often necessitates re-learning the FITS interfaces. We therefore set ourselves the goal of developing a minimal buy-in FITS library for researchers who are occasional (but serious) coders. In this case, "minimal buy-in" meant "easy to learn, easy to use, and easy to re-learn next month". Based on conversations with astronomers interested in writing code, we concluded that this goal could be achieved by emphasizing two essential capabilities. The first was the ability to write FITS programs without knowing much about FITS, i.e., without having to deal with the arcane rules for generating a properly formatted FITS file. The second was to support the use of already-familiar C/Unix facilities, especially C structs and Unix stdio. Taken together, these two capabilities would allow researchers to leverage their existing programming expertise while minimizing the need to learn new and complex coding rules.

  6. Scaling of phloem structure and optimality of photoassimilate transport in conifer needles.

    PubMed

    Ronellenfitsch, Henrik; Liesche, Johannes; Jensen, Kaare H; Holbrook, N Michele; Schulz, Alexander; Katifori, Eleni

    2015-02-22

    The phloem vascular system facilitates transport of energy-rich sugar and signalling molecules in plants, thus permitting long-range communication within the organism and growth of non-photosynthesizing organs such as roots and fruits. The flow is driven by osmotic pressure, generated by differences in sugar concentration between distal parts of the plant. The phloem is an intricate distribution system, and many questions about its regulation and structural diversity remain unanswered. Here, we investigate the phloem structure in the simplest possible geometry: a linear leaf, found, for example, in the needles of conifer trees. We measure the phloem structure in four tree species representing a diverse set of habitats and needle sizes, from 1 (Picea omorika) to 35 cm (Pinus palustris). We show that the phloem shares common traits across these four species and find that the size of its conductive elements obeys a power law. We present a minimal model that accounts for these common traits and takes into account the transport strategy and natural constraints. This minimal model predicts a power law phloem distribution consistent with transport energy minimization, suggesting that energetics are more important than translocation speed at the leaf level. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  7. Industrial wastewater minimization using water pinch analysis: a case study on an old textile plant.

    PubMed

    Ujang, Z; Wong, C L; Manan, Z A

    2002-01-01

    Industrial wastewater minimization can be conducted using four main strategies: (i) reuse; (ii) regeneration-reuse; (iii) regeneration-recycling; and (iv) process changes. This study is concerned with (i) and (ii) to investigate the most suitable approach to wastewater minimization for an old textile industry plant. A systematic water network design using water pinch analysis (WPA) was developed to minimize the water usage and wastewater generation for the textile plant. COD was chosen as the main parameter. An integrated design method has been applied, which uses the engineering insight of WPA to determine the minimum required water flowrate and thereby minimize both water consumption and wastewater generation. The overall result of this study shows that WPA has been effectively applied using both reuse and regeneration-reuse strategies for the old textile industry plant, and reduced the operating cost by 16% and 50% respectively.

  8. Constraints on B and Higgs physics in minimal low energy supersymmetric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carena, Marcela; /Fermilab; Menon, A.

    2006-03-01

    We study the implications of minimal flavor violating low energy supersymmetry scenarios for the search of new physics in the B and Higgs sectors at the Tevatron collider and the LHC. We show that the already stringent Tevatron bound on the decay rate B_s → μ⁺μ⁻ sets strong constraints on the possibility of generating large corrections to the mass difference ΔM_s of the B_s eigenstates. We also show that the B_s → μ⁺μ⁻ bound together with the constraint on the branching ratio of the rare decay b → sγ has strong implications for the search of light, non-standard Higgs bosons at hadron colliders. In doing this, we demonstrate that the former expressions derived for the analysis of the double penguin contributions in the Kaon sector need to be corrected by additional terms for a realistic analysis of these effects. We also study a specific non-minimal flavor violating scenario, where there are flavor changing gluino-squark-quark interactions, governed by the CKM matrix elements, and show that the B and Higgs physics constraints are similar to the ones in the minimal flavor violating case. Finally we show that, in scenarios like electroweak baryogenesis which have light stops and charginos, there may be enhanced effects on the B and K mixing parameters, without any significant effect on the rate of B_s → μ⁺μ⁻.

  9. Waste Minimization Assessment for Multilayered Printed Circuit Board Manufacturing

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at s...

  10. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A PAINT MANUFACTURING PLANT

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at se...

  11. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF REFURBISHED RAILCAR ASSEMBLIES

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at selected ...

  12. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF PROTOTYPE PRINTED CIRCUIT BOARDS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at sel...

  13. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF CAN-MANUFACTURING EQUIPMENT

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at ...

  14. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF SPEED REDUCTION EQUIPMENT

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at sel...

  15. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF CUSTOM MOLDED PLASTIC PRODUCTS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at selected ...

  16. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A BUMPER REFINISHING PLANT

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at se...

  17. Making the Most of Minimalism in Music.

    ERIC Educational Resources Information Center

    Geiersbach, Frederick J.

    1998-01-01

    Describes the minimalist movement in music. Discusses generations of minimalist musicians and, in general, the minimalist approach. Considers various ways that minimalist strategies can be integrated into the music classroom focusing on (1) minimalism and (2) student-centered composition and principles of minimalism for use with elementary band…

  18. Automation of a Nile red staining assay enables high throughput quantification of microalgal lipid production.

    PubMed

    Morschett, Holger; Wiechert, Wolfgang; Oldiges, Marco

    2016-02-09

    Within the context of microalgal lipid production for biofuels and bulk chemical applications, specialized higher throughput devices for small scale parallelized cultivation are expected to boost the time efficiency of phototrophic bioprocess development. However, the increasing number of possible experiments is directly coupled to the demand for lipid quantification protocols that enable reliably measuring large sets of samples within short time and that can deal with the reduced sample volume typically generated at screening scale. To meet these demands, a dye based assay was established using a liquid handling robot to provide reproducible high throughput quantification of lipids with minimized hands-on-time. Lipid production was monitored using the fluorescent dye Nile red with dimethyl sulfoxide as solvent facilitating dye permeation. The staining kinetics of cells at different concentrations and physiological states were investigated to successfully down-scale the assay to 96 well microtiter plates. Gravimetric calibration against a well-established extractive protocol enabled absolute quantification of intracellular lipids improving precision from ±8 to ±2 % on average. Implementation into an automated liquid handling platform allows for measuring up to 48 samples within 6.5 h, reducing hands-on-time to a third compared to manual operation. Moreover, it was shown that automation enhances accuracy and precision compared to manual preparation. It was revealed that established protocols relying on optical density or cell number for biomass adjustment prior to staining may suffer from errors due to significant changes of the cells' optical and physiological properties during cultivation. Alternatively, the biovolume was used as a measure for biomass concentration so that errors from morphological changes can be excluded. The newly established assay proved to be applicable for absolute quantification of algal lipids avoiding limitations of currently established protocols, namely biomass adjustment and limited throughput. Automation was shown to improve data reliability, as well as experimental throughput simultaneously minimizing the needed hands-on-time to a third. Thereby, the presented protocol meets the demands for the analysis of samples generated by the upcoming generation of devices for higher throughput phototrophic cultivation and thereby contributes to boosting the time efficiency for setting up algae lipid production processes.

  19. Are the major risk/need factors predictive of both female and male reoffending?: a test with the eight domains of the level of service/case management inventory.

    PubMed

    Andrews, Donald A; Guzzo, Lina; Raynor, Peter; Rowe, Robert C; Rettinger, L Jill; Brews, Albert; Wormith, J Stephen

    2012-02-01

    The Level of Service/Case Management Inventory (LS/CMI) and the Youth version (YLS/CMI) generate an assessment of risk/need across eight domains that are considered to be relevant for girls and boys and for women and men. Aggregated across five data sets, the predictive validity of each of the eight domains was gender-neutral. The composite total score (LS/CMI total risk/need) was strongly associated with the recidivism of males (mean r = .39, mean AUC = .746) and very strongly associated with the recidivism of females (mean r = .53, mean AUC = .827). The enhanced validity of LS total risk/need with females was traced to the exceptional validity of Substance Abuse with females. The intra-data set conclusions survived the introduction of two very large samples composed of female offenders exclusively. Finally, the mean incremental contributions of gender and the gender-by-risk level interactions in the prediction of criminal recidivism were minimal compared to the relatively strong validity of the LS/CMI risk level. Although the variance explained by gender was minimal and although high-risk cases were high-risk cases regardless of gender, the recidivism rates of lower risk females were lower than the recidivism rates of lower risk males, suggesting possible implications for test interpretation and policy.

  20. The Application of Infrared Thermographic Inspection Techniques to the Space Shuttle Thermal Protection System

    NASA Technical Reports Server (NTRS)

    Cramer, K. E.; Winfree, W. P.

    2005-01-01

    The Nondestructive Evaluation Sciences Branch at NASA's Langley Research Center has been actively involved in the development of thermographic inspection techniques for more than 15 years. Since the Space Shuttle Columbia accident, NASA has focused on the improvement of advanced NDE techniques for the Reinforced Carbon-Carbon (RCC) panels that comprise the orbiter's wing leading edge. Various nondestructive inspection techniques have been used in the examination of the RCC, but thermography has emerged as an effective inspection alternative to more traditional methods. Thermography is a non-contact inspection method as compared to ultrasonic techniques which typically require the use of a coupling medium between the transducer and material. Like radiographic techniques, thermography can be used to inspect large areas, but has the advantage of minimal safety concerns and the ability for single-sided measurements. Principal Component Analysis (PCA) has been shown effective for reducing thermographic NDE data. A typical implementation of PCA is when the eigenvectors are generated from the data set being analyzed. Although it is a powerful tool for enhancing the visibility of defects in thermal data, PCA can be computationally intense and time consuming when applied to the large data sets typical in thermography. Additionally, PCA can experience problems when very large defects are present (defects that dominate the field-of-view), since the calculation of the eigenvectors is now governed by the presence of the defect, not the "good" material. To increase the processing speed and to minimize the negative effects of large defects, an alternative method of PCA is being pursued where a fixed set of eigenvectors, generated from an analytic model of the thermal response of the material under examination, is used to process the thermal data from the RCC materials. Details of a one-dimensional analytic model and a two-dimensional finite-element model will be presented. An overview of the PCA process as well as a quantitative signal-to-noise comparison of the results of performing both embodiments of PCA on thermographic data from various RCC specimens will be shown. Finally, a number of different applications of this technology to various RCC components will be presented.
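
    A small NumPy sketch of the fixed-eigenvector variant of PCA sketched in this abstract: a temporal eigenvector basis is computed once from model-generated cooling curves, and every measured pixel history is then projected onto that basis, so even very large defects cannot bias the eigenvectors. The cooling-curve model, defect geometry and noise level used here are invented stand-ins, not the analytic model from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_frames, ny, nx = 200, 64, 64
    t = np.linspace(0.01, 2.0, n_frames)

    # model-generated reference responses (toy 1-D cooling curves for a range of
    # assumed thicknesses) used to build a FIXED temporal eigenvector basis
    thicknesses = np.linspace(0.5, 2.0, 16)
    model = np.array([1.0 / np.sqrt(t) * (1 + 0.3 * np.exp(-d / t)) for d in thicknesses])
    model -= model.mean(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(model, full_matrices=False)
    basis = vt[:2]                                  # first two temporal eigenvectors

    # fake measurement: uniform cooling plus a small defect region and noise
    data = 1.0 / np.sqrt(t)[:, None, None] * np.ones((n_frames, ny, nx))
    data[:, 20:30, 20:30] += 0.2 * np.exp(-1.0 / t)[:, None, None]
    data += 0.01 * rng.normal(size=data.shape)

    # project every pixel's time history onto the fixed basis (no data-driven SVD)
    flat = data.reshape(n_frames, -1)
    flat -= flat.mean(axis=0, keepdims=True)
    scores = basis @ flat                           # 2 x n_pixels component images
    component_2 = scores[1].reshape(ny, nx)
    print("defect-region contrast in 2nd component image:",
          component_2[20:30, 20:30].mean() - component_2[40:50, 40:50].mean())
    ```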

  1. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF OUTDOOR ILLUMINATED SIGNS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at se...

  2. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF SHEET METAL COMPONENTS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. In an effort to assist these manufacturers, Waste Minimization Assessment Cente...

  3. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF BRAZED ALUMINUM OIL COOLERS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at sel...

  4. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF ALUMINUM CANS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at se...

  5. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF PRINTED CIRCUIT BOARDS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at se...

  6. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF IRON CASTINGS AND FABRICATED SHEET METAL PARTS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at selected u...

  7. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION FOR A MANUFACTURER OF ALUMINUM AND STEEL PARTS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. In an effort to assist these manufacturers, Waste Minimization Assessment Cent...

  8. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF ALUMINUM AND STEEL PARTS

    EPA Science Inventory

    The U.S.Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-sized manufacturers who want to minimize their generation of waste but who lack the expertise to do so. In an effort to assist these manufacturers, Waste Minimization Assessment Ce...

  9. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF PENNY BLANKS AND ZINC PRODUCTS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at selected u...

  10. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A METAL PARTS COATING PLANT

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at sel...

  11. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF CUTTING AND WELDING EQUIPMENT

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot program to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. In an effort to assist these manufacturers, Waste Minimization Assessment Cent...

  12. WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF SILICON-CONTROLLED RECTIFIERS AND SCHOTTKY RECTIFIERS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. In an effort to assist these manufacturers Waste Minimization Assessment Ce...

  13. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF REBUILT RAILWAY CARS AND COMPONENTS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at se...

  14. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF PRINTED PLASTIC BAGS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established ...

  15. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION FOR A MANUFACTURER OF COMPRESSED AIR EQUIPMENT COMPONENTS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at sel...

  16. Modified Shuffled Frog Leaping Optimization Algorithm Based Distributed Generation Rescheduling for Loss Minimization

    NASA Astrophysics Data System (ADS)

    Arya, L. D.; Koshti, Atul

    2018-05-01

    This paper investigates optimization of Distributed Generation (DG) capacity at locations selected using an incremental voltage sensitivity criterion for a sub-transmission network. The Modified Shuffled Frog Leaping Optimization Algorithm (MSFLA) has been used to optimize the DG capacity. An induction generator model of DG (wind-based generating units) has been considered for the study. The standard IEEE 30-bus test system has been considered for the above study. The obtained results are also validated against the shuffled frog leaping algorithm and a modified version of bare-bones particle swarm optimization (BBExp). MSFLA has been found to be more efficient than the other two algorithms for the real power loss minimization problem.
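
    For readers unfamiliar with the underlying metaheuristic, the sketch below implements a plain shuffled frog leaping loop (sorted population, interleaved memeplexes, worst frog jumping toward the memeplex best, then the global best, then a random reset) on a simple stand-in objective. The DG rescheduling objective, the network model and the specific modifications that define MSFLA in the paper are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def objective(x):
        # stand-in objective; the paper minimizes real power loss of the network instead
        return float(np.sum(x ** 2))

    dim, n_frogs, n_memeplexes, iters = 5, 30, 5, 200
    lo, hi = -5.0, 5.0
    frogs = rng.uniform(lo, hi, size=(n_frogs, dim))

    for _ in range(iters):
        fitness = np.array([objective(f) for f in frogs])
        frogs = frogs[np.argsort(fitness)]              # sort best first
        g_best = frogs[0].copy()
        for m in range(n_memeplexes):
            idx = np.arange(m, n_frogs, n_memeplexes)   # interleaved memeplex
            worst_i = idx[-1]
            worst, local_best = frogs[worst_i], frogs[idx[0]]
            # worst frog jumps toward the memeplex best ...
            cand = np.clip(worst + rng.uniform() * (local_best - worst), lo, hi)
            if objective(cand) >= objective(worst):
                # ... then toward the global best ...
                cand = np.clip(worst + rng.uniform() * (g_best - worst), lo, hi)
            if objective(cand) >= objective(worst):
                cand = rng.uniform(lo, hi, dim)         # ... otherwise a random reset
            frogs[worst_i] = cand

    print("best objective found:", min(objective(f) for f in frogs))
    ```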

  17. Recovery Act: Johnston Rhode Island Combined Cycle Electric Generating Plant Fueled by Waste Landfill Gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galowitz, Stephen

    The primary objective of the Project was to maximize the productive use of the substantial quantities of waste landfill gas generated and collected at the Central Landfill in Johnston, Rhode Island. An extensive analysis was conducted and it was determined that utilization of the waste gas for power generation in a combustion turbine combined cycle facility was the highest and best use. The resulting project reflected a cost effective balance of the following specific sub-objectives. 1) Meet environmental and regulatory requirements, particularly the compliance obligations imposed on the landfill to collect, process and destroy landfill gas. 2) Utilize proven and reliable technology and equipment. 3) Maximize electrical efficiency. 4) Maximize electric generating capacity, consistent with the anticipated quantities of landfill gas generated and collected at the Central Landfill. 5) Maximize equipment uptime. 6) Minimize water consumption. 7) Minimize post-combustion emissions. To achieve the Project Objective the project consisted of several components. 1) The landfill gas collection system was modified and upgraded. 2) A state-of-the-art gas clean up and compression facility was constructed. 3) A high pressure pipeline was constructed to convey cleaned landfill gas from the clean-up and compression facility to the power plant. 4) A combined cycle electric generating facility was constructed consisting of combustion turbine generator sets, heat recovery steam generators and a steam turbine. 5) The voltage of the electricity produced was increased at a newly constructed transformer/substation and the electricity was delivered to the local transmission system. The Project produced a myriad of beneficial impacts. 1) The Project created 453 FTE construction and manufacturing jobs and 25 FTE permanent jobs associated with the operation and maintenance of the plant and equipment. 2) By combining state-of-the-art gas clean up systems with post combustion emissions control systems, the Project established new national standards for best available control technology (BACT). 3) The Project will annually produce 365,292 MWh of clean energy. 4) By destroying the methane in the landfill gas, the Project will generate CO₂-equivalent reductions of 164,938 tons annually. The completed facility produces 28.3 MW (net) and operates 24 hours a day, seven days a week.

  18. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    PubMed Central

    2014-01-01

    Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set. PMID:25295295
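
    The iterated greedy scheme mentioned above follows a destroy-and-rebuild pattern; the sketch below shows that skeleton on a toy single-berth sequencing objective (sum of completion times). The cost callback, destruction size d and toy handling times are illustrative assumptions, not the paper's DBAP formulation.

    ```python
    import random

    def iterated_greedy(cost, items, d=2, iters=200, seed=0):
        """Generic iterated greedy skeleton: destroy d elements of the incumbent
        sequence, greedily reinsert each at its best position, keep improvements."""
        rng = random.Random(seed)
        best = list(items)
        best_cost = cost(best)
        cur = list(best)
        for _ in range(iters):
            # destruction: remove d random elements
            removed = [cur.pop(rng.randrange(len(cur))) for _ in range(d)]
            # construction: greedy best-position reinsertion
            for it in removed:
                pos = min(range(len(cur) + 1), key=lambda p: cost(cur[:p] + [it] + cur[p:]))
                cur.insert(pos, it)
            c = cost(cur)
            if c < best_cost:
                best, best_cost = list(cur), c
            else:
                cur = list(best)         # fall back to the incumbent before the next pass
        return best, best_cost

    # toy usage: order ships by made-up handling times to minimize total completion time
    handling = {"s1": 3, "s2": 1, "s3": 4, "s4": 2}
    total_service = lambda seq: sum((len(seq) - i) * handling[s] for i, s in enumerate(seq))
    print(iterated_greedy(total_service, list(handling)))
    ```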

  19. Site-directed protein recombination as a shortest-path problem.

    PubMed

    Endelman, Jeffrey B; Silberg, Jonathan J; Wang, Zhen-Gang; Arnold, Frances H

    2004-07-01

    Protein function can be tuned using laboratory evolution, in which one rapidly searches through a library of proteins for the properties of interest. In site-directed recombination, n crossovers are chosen in an alignment of p parents to define a set of p(n + 1) peptide fragments. These fragments are then assembled combinatorially to create a library of p^(n+1) proteins. We have developed a computational algorithm to enrich these libraries in folded proteins while maintaining an appropriate level of diversity for evolution. For a given set of parents, our algorithm selects crossovers that minimize the average energy of the library, subject to constraints on the length of each fragment. This problem is equivalent to finding the shortest path between nodes in a network, for which the global minimum can be found efficiently. Our algorithm has a running time of O(N^3 p^2 + N^2 n) for a protein of length N. Adjusting the constraints on fragment length generates a set of optimized libraries with varying degrees of diversity. By comparing these optima for different sets of parents, we rapidly determine which parents yield the lowest energy libraries.
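
    To make the shortest-path equivalence concrete, here is a hedged dynamic-programming sketch that picks n crossover points minimizing a sum of per-fragment costs subject to fragment-length bounds; frag_cost and the toy cost are placeholders, not the authors' library-energy function.

    ```python
    def min_energy_crossovers(frag_cost, N, n, lmin, lmax):
        """Sketch of crossover selection as a shortest-path/DP problem: choose n
        crossover points in a sequence of length N so that the sum of per-fragment
        costs frag_cost(i, j) (fragment spanning residues i..j-1) is minimal, with
        every fragment length in [lmin, lmax]."""
        INF = float("inf")
        # best[k][j] = min cost of covering residues 0..j-1 with k fragments
        best = [[INF] * (N + 1) for _ in range(n + 2)]
        back = [[None] * (N + 1) for _ in range(n + 2)]
        best[0][0] = 0.0
        for k in range(1, n + 2):                 # n crossovers -> n+1 fragments
            for j in range(1, N + 1):
                for i in range(max(0, j - lmax), j - lmin + 1):
                    if best[k - 1][i] + frag_cost(i, j) < best[k][j]:
                        best[k][j] = best[k - 1][i] + frag_cost(i, j)
                        back[k][j] = i
        # recover the crossover positions by walking the predecessors back
        cuts, j = [], N
        for k in range(n + 1, 0, -1):
            j = back[k][j]
            cuts.append(j)
        return best[n + 1][N], sorted(cuts)[1:]    # drop the leading 0

    # toy usage: a cost that mildly favors fragments of length 4
    cost, cuts = min_energy_crossovers(lambda i, j: abs((j - i) - 4), N=12, n=2, lmin=2, lmax=8)
    print(cost, cuts)
    ```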

  20. Space-ecology set covering problem for modeling Daiyun Mountain Reserve, China

    NASA Astrophysics Data System (ADS)

    Lin, Chih-Wei; Liu, Jinfu; Huang, Jiahang; Zhang, Huiguang; Lan, Siren; Hong, Wei; Li, Wenzhou

    2018-02-01

    Site selection is an important issue in designing nature reserves and has been studied for years. However, striking a well-balanced relationship between preservation of biodiversity and site selection remains challenging. Unlike existing methods, we consider three critical components, spatial continuity, spatial compactness and ecological information, to address the reserve design problem. In this paper, we propose a new mathematical model of the set covering problem, called the Space-ecology Set Covering Problem (SeSCP), for designing a reserve network. First, we generate the ecological information by forest resource investigation. Then, we split the landscape into elementary cells and calculate the ecological score of each cell. Next, we associate the ecological information with the spatial properties to select a set of cells that forms a nature reserve with improved ability to protect biodiversity. Two spatial constraints, continuity and compactness, are imposed in SeSCP. Continuity ensures that any selected site is connected to adjacent selected sites, and compactness minimizes the perimeter of the selected sites. In computational experiments, we take Daiyun Mountain as a study area to demonstrate the feasibility and effectiveness of the proposed model.

  1. Model-based setting of inspiratory pressure and respiratory rate in pressure-controlled ventilation.

    PubMed

    Schranz, C; Becher, T; Schädler, D; Weiler, N; Möller, K

    2014-03-01

    Mechanical ventilation carries the risk of ventilator-induced-lung-injury (VILI). To minimize the risk of VILI, ventilator settings should be adapted to the individual patient properties. Mathematical models of respiratory mechanics are able to capture the individual physiological condition and can be used to derive personalized ventilator settings. This paper presents model-based calculations of inspiration pressure (pI), inspiration and expiration time (tI, tE) in pressure-controlled ventilation (PCV) and a retrospective evaluation of its results in a group of mechanically ventilated patients. Incorporating the identified first order model of respiratory mechanics in the basic equation of alveolar ventilation yielded a nonlinear relation between ventilation parameters during PCV. Given this patient-specific relation, optimized settings in terms of minimal pI and adequate tE can be obtained. We then retrospectively analyzed data from 16 ICU patients with mixed pathologies, whose ventilation had been previously optimized by ICU physicians with the goal of minimization of inspiration pressure, and compared the algorithm's 'optimized' settings to the settings that had been chosen by the physicians. The presented algorithm visualizes the patient-specific relations between inspiration pressure and inspiration time. The algorithm's calculated results highly correlate to the physician's ventilation settings with r = 0.975 for the inspiration pressure, and r = 0.902 for the inspiration time. The nonlinear patient-specific relations of ventilation parameters become transparent and support the determination of individualized ventilator settings according to therapeutic goals. Thus, the algorithm is feasible for a variety of ventilated ICU patients and has the potential of improving lung-protective ventilation by minimizing inspiratory pressures and by helping to avoid the build-up of clinically significant intrinsic positive end-expiratory pressure.
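
    For readers unfamiliar with why the relation between pI, tI and tE is nonlinear, the following is a minimal sketch assuming a single-compartment (first-order resistance-compliance) model and a series dead space; the notation is illustrative and not taken from the paper.

    ```latex
    % Minimal sketch, assuming a one-compartment model with resistance R and
    % compliance C (time constant \tau = RC), PEEP-referenced pressures, and a
    % series dead space V_D; symbols are illustrative, not the paper's.
    \begin{align*}
      V_T &= p_I\, C \left(1 - e^{-t_I/\tau}\right)
            &&\text{tidal volume delivered during } t_I,\\
      \dot{V}_A &= f \,(V_T - V_D) = \frac{V_T - V_D}{t_I + t_E}
            &&\text{alveolar ventilation at rate } f = 1/(t_I + t_E).
    \end{align*}
    % Fixing the required \dot{V}_A and solving for p_I gives a nonlinear,
    % patient-specific relation between p_I, t_I and t_E:
    \[
      p_I(t_I, t_E) \;=\;
      \frac{\dot{V}_A\,(t_I + t_E) + V_D}{C\left(1 - e^{-t_I/\tau}\right)} .
    \]
    ```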

  2. n-D shape/texture optimal synthetic description and modeling by GEOGINE

    NASA Astrophysics Data System (ADS)

    Fiorini, Rodolfo A.; Dacquino, Gianfranco F.

    2004-12-01

    GEOGINE (GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D Tensor Invariants for multidimensional shape/texture optimal synthetic description and learning, is presented. Robust characterization of elementary geometric shapes subjected to geometric transformations, on a rigorous mathematical level, is a key problem in many computer applications across different areas of interest. The past four decades have seen solutions mostly based on the use of n-Dimensional Moment and Fourier descriptor invariants. The present paper introduces a new approach for automatic model generation based on n-Dimensional Tensor Invariants as a formal dictionary. An ontological model is the kernel used for specifying ontologies, so how closely an ontology can match the real world depends on the possibilities offered by the ontological model. With this approach, even chromatic information content can be easily and reliably decoupled from target geometric information and computed into robust colour shape parameter attributes. The main GEOGINE operational advantages over previous approaches are: 1) automated model generation, 2) an invariant minimal complete set for computational efficiency, and 3) arbitrary model precision for robust object description.

  3. Supersonic liquid jets: Their generation and shock wave characteristics

    NASA Astrophysics Data System (ADS)

    Pianthong, K.; Zakrzewski, S.; Behnia, M.; Milton, B. E.

    The generation of high-speed liquid (water and diesel fuel) jets in the supersonic range using a vertical single-stage powder gun is described. The effect of projectile velocity and mass on the jet velocity is investigated experimentally. Jet exit velocities for a set of nozzle inner profiles (e.g. straight cone with different cone angles, exponential, hyperbolic etc.) are compared. The optimum condition to achieve the maximum jet velocity and hence better atomization and mixing is then determined. The visual images of supersonic diesel fuel jets (velocity about 2000 m/s) were obtained by the shadowgraph method. This provides better understanding of each stage of the generation of the jets and makes the study of their characteristics and the potential for auto-ignition possible. In the experiments, a pressure relief section has been used to minimize the compressed air wave ahead of the projectile. To clarify the processes inside the section, additional experiments have been performed with the use of the shadowgraph method, showing the projectile travelling inside and leaving the pressure relief section at a velocity of about 1100 m/s.

  4. MS Data Miner: a web-based software tool to analyze, compare, and share mass spectrometry protein identifications.

    PubMed

    Dyrlund, Thomas F; Poulsen, Ebbe T; Scavenius, Carsten; Sanggaard, Kristian W; Enghild, Jan J

    2012-09-01

    Data processing and analysis of proteomics data are challenging and time consuming. In this paper, we present MS Data Miner (MDM) (http://sourceforge.net/p/msdataminer), a freely available web-based software solution aimed at minimizing the time required for the analysis, validation, data comparison, and presentation of data files generated in MS software, including Mascot (Matrix Science), Mascot Distiller (Matrix Science), and ProteinPilot (AB Sciex). The program was developed to significantly decrease the time required to process large proteomic data sets for publication. This open sourced system includes a spectra validation system and an automatic screenshot generation tool for Mascot-assigned spectra. In addition, a Gene Ontology term analysis function and a tool for generating comparative Excel data reports are included. We illustrate the benefits of MDM during a proteomics study comprised of more than 200 LC-MS/MS analyses recorded on an AB Sciex TripleTOF 5600, identifying more than 3000 unique proteins and 3.5 million peptides. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling.

    PubMed

    Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief; Lilley, John; Sebag-Montefiore, David

    2018-07-01

    To present a fully automatic method to generate multiparameter normal tissue complication probability (NTCP) models and compare its results with those of a published model, using the same patient cohort. Data were analyzed from 345 rectal cancer patients treated with external radiation therapy to predict the risk of patients developing grade 1 or ≥2 cystitis. In total, 23 clinical factors were included in the analysis as candidate predictors of cystitis. Principal component analysis was used to decompose the bladder dose-volume histogram into 8 principal components, explaining more than 95% of the variance. The data set of clinical factors and principal components was divided into training (70%) and test (30%) data sets, with the training data set used by the algorithm to compute an NTCP model. The first step of the algorithm was to obtain a bootstrap sample, followed by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded on each iteration. The most frequent model was selected as the final "automatically generated model" (AGM). The published model and AGM were fitted on the training data sets, and the risk of cystitis was calculated. The 2 models had no significant differences in predictive performance, both for the training and test data sets (P value > .05) and found similar clinical and dosimetric factors as predictors. Both models exhibited good explanatory performance on the training data set (P values > .44), which was reduced on the test data sets (P values < .05). The predictive value of the AGM is equivalent to that of the expert-derived published model. It demonstrates potential in saving time, tackling problems with a large number of parameters, and standardizing variable selection in NTCP modeling. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.
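
    One step of the described pipeline, multicollinearity reduction via the variance inflation factor, can be sketched as below; the threshold, data and function names are illustrative assumptions, and the bootstrap, genetic-algorithm and BIC stages of the published algorithm are not shown.

    ```python
    import numpy as np

    def prune_by_vif(X, names, threshold=5.0):
        """Iteratively drop the predictor with the largest variance inflation
        factor until all VIFs fall below the threshold (one ingredient of the
        multicollinearity reduction described in the abstract)."""
        X = np.asarray(X, dtype=float)
        names = list(names)
        while X.shape[1] > 1:
            vifs = []
            for j in range(X.shape[1]):
                y = X[:, j]
                Z = np.delete(X, j, axis=1)
                Z1 = np.column_stack([np.ones(len(Z)), Z])     # add intercept
                beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)  # regress x_j on the rest
                resid = y - Z1 @ beta
                r2 = 1.0 - resid.var() / y.var()
                vifs.append(1.0 / max(1.0 - r2, 1e-12))        # VIF_j = 1 / (1 - R_j^2)
            worst = int(np.argmax(vifs))
            if vifs[worst] < threshold:
                break
            X = np.delete(X, worst, axis=1)
            del names[worst]
        return names

    # toy usage: x2 is almost a copy of x0, so one of the two should be dropped
    rng = np.random.default_rng(0)
    x0, x1 = rng.normal(size=100), rng.normal(size=100)
    x2 = x0 + 0.01 * rng.normal(size=100)
    print(prune_by_vif(np.column_stack([x0, x1, x2]), ["x0", "x1", "x2"]))
    ```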

  6. Multiobjective genetic algorithm conjunctive use optimization for production, cost, and energy with dynamic return flow

    NASA Astrophysics Data System (ADS)

    Peralta, Richard C.; Forghani, Ali; Fayad, Hala

    2014-04-01

    Many real water resources optimization problems involve conflicting objectives for which the main goal is to find a set of optimal solutions on, or near to the Pareto front. E-constraint and weighting multiobjective optimization techniques have shortcomings, especially as the number of objectives increases. Multiobjective Genetic Algorithms (MGA) have been previously proposed to overcome these difficulties. Here, an MGA derives a set of optimal solutions for multiobjective multiuser conjunctive use of reservoir, stream, and (un)confined groundwater resources. The proposed methodology is applied to a hydraulically and economically nonlinear system in which all significant flows, including stream-aquifer-reservoir-diversion-return flow interactions, are simulated and optimized simultaneously for multiple periods. Neural networks represent constrained state variables. The addressed objectives that can be optimized simultaneously in the coupled simulation-optimization model are: (1) maximizing water provided from sources, (2) maximizing hydropower production, and (3) minimizing operation costs of transporting water from sources to destinations. Results show the efficiency of multiobjective genetic algorithms for generating Pareto optimal sets for complex nonlinear multiobjective optimization problems.

  7. A FASTQ compressor based on integer-mapped k-mer indexing for biologist.

    PubMed

    Zhang, Yeting; Patel, Khyati; Endrawis, Tony; Bowers, Autumn; Sun, Yazhou

    2016-03-15

    Next generation sequencing (NGS) technologies have gained considerable popularity among biologists. For example, RNA-seq, which provides both genomic and functional information, has been widely used in recent functional and evolutionary studies, especially in non-model organisms. However, storing and transmitting these large data sets (primarily in FASTQ format) have become genuine challenges, especially for biologists with little informatics experience. Data compression is thus a necessity. KIC, a FASTQ compressor based on a new integer-mapped k-mer indexing method, was developed (available at http://www.ysunlab.org/kic.jsp). It offers a high compression ratio on sequence data, outstanding user-friendliness with graphical user interfaces, and proven reliability. Evaluated on multiple large RNA-seq data sets from both human and plants, the compression ratio of KIC exceeded that of all major generic compressors and was comparable to those of the latest dedicated compressors. KIC enables researchers with minimal informatics training to take advantage of the latest sequence compression technologies, easily manage large FASTQ data sets, and reduce storage and transmission costs. Copyright © 2015 Elsevier B.V. All rights reserved.
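
    The core idea of integer-mapped k-mer indexing can be illustrated by packing each base into two bits so that every k-mer becomes a single integer; this sketch is a generic illustration, not KIC's actual encoding or file format.

    ```python
    # Hedged illustration of integer-mapped k-mers: pack each base into 2 bits so a
    # k-mer becomes a single integer index (KIC's real on-disk format is not shown).
    ENC = {"A": 0, "C": 1, "G": 2, "T": 3}
    DEC = "ACGT"

    def kmer_to_int(kmer):
        code = 0
        for base in kmer:
            code = (code << 2) | ENC[base]
        return code

    def int_to_kmer(code, k):
        bases = []
        for _ in range(k):
            bases.append(DEC[code & 3])
            code >>= 2
        return "".join(reversed(bases))

    def index_read(seq, k):
        """Slide a window over a read and emit the integer index of every k-mer."""
        return [kmer_to_int(seq[i:i + k]) for i in range(len(seq) - k + 1)]

    seq = "ACGTAC"
    codes = index_read(seq, 4)
    print(codes)                                   # [27, 108, 177]
    assert [int_to_kmer(c, 4) for c in codes] == ["ACGT", "CGTA", "GTAC"]
    ```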

  8. Optimizing Decision Support for Tailored Health Behavior Change Applications.

    PubMed

    Kukafka, Rita; Jeong, In cheol; Finkelstein, Joseph

    2015-01-01

    The Tailored Lifestyle Change Decision Aid (TLC DA) system was designed to provide support for a person to make an informed choice about which behavior change to work on when multiple unhealthy behaviors are present. TLC DA can be delivered via web, smartphones and tablets. The system collects a significant amount of information that is used to generate tailored messages that persuade consumers toward certain healthy lifestyles. One limitation is the necessity to collect vast amounts of information that users must enter manually. By identifying an optimal set of self-reported parameters, we can minimize the data entry burden on app users. The aim of this study was to identify the primary determinants of health behavior choices made by patients after using the system. Using discriminant analysis, an optimal set of predictors was identified. The resulting set included smoking status, smoking cessation success estimate, self-efficacy, body mass index and diet status. Predicting the smoking cessation choice was the most accurate, followed by weight management. Physical activity and diet choices were better identified in a combined cluster.

  9. Simulated annealing with restart strategy for the blood pickup routing problem

    NASA Astrophysics Data System (ADS)

    Yu, V. F.; Iswari, T.; Normasari, N. M. E.; Asih, A. M. S.; Ting, H.

    2018-04-01

    This study develops a simulated annealing heuristic with restart strategy (SA_RS) for solving the blood pickup routing problem (BPRP). BPRP minimizes the total length of the routes for blood bag collection between a blood bank and a set of donation sites, each associated with a time window constraint that must be observed. The proposed SA_RS is implemented in C++ and tested on benchmark instances of the vehicle routing problem with time windows to verify its performance. The algorithm is then tested on some newly generated BPRP instances and the results are compared with those obtained by CPLEX. Experimental results show that the proposed SA_RS heuristic effectively solves BPRP.
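
    A simulated annealing loop with a restart strategy generally looks like the following sketch, where the search restarts from the incumbent after a fixed number of non-improving temperature levels; the cooling schedule, restart rule and toy 2-swap neighborhood are assumptions, not the exact SA_RS of the paper.

    ```python
    import math, random

    def sa_with_restart(cost, neighbor, init, t0=100.0, alpha=0.98,
                        inner=50, no_improve_limit=10, max_outer=200, seed=0):
        """Simulated annealing skeleton with a restart strategy: when the best
        solution stalls for `no_improve_limit` temperature levels, restart the
        search from the best solution found so far."""
        rng = random.Random(seed)
        cur, cur_c = init, cost(init)
        best, best_c = cur, cur_c
        t, stall = t0, 0
        for _ in range(max_outer):
            for _ in range(inner):
                cand = neighbor(cur, rng)
                d = cost(cand) - cur_c
                if d <= 0 or rng.random() < math.exp(-d / t):
                    cur, cur_c = cand, cur_c + d
            if cur_c < best_c - 1e-12:
                best, best_c, stall = cur, cur_c, 0
            else:
                stall += 1
            if stall >= no_improve_limit:          # restart from the incumbent
                cur, cur_c, t, stall = best, best_c, t0, 0
            else:
                t *= alpha                          # geometric cooling
        return best, best_c

    # toy usage: route-length minimization over a random tour with 2-swap moves
    pts = [(random.random(), random.random()) for _ in range(15)]
    length = lambda tour: sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
                              for i in range(len(tour)))
    def swap(tour, rng):
        i, j = rng.sample(range(len(tour)), 2)
        t = list(tour); t[i], t[j] = t[j], t[i]
        return t
    print(sa_with_restart(length, swap, list(range(15))))
    ```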

  10. A template-based approach for parallel hexahedral two-refinement

    DOE PAGES

    Owen, Steven J.; Shih, Ryan M.; Ernst, Corey D.

    2016-10-17

    Here, we provide a template-based approach for generating locally refined all-hex meshes. We focus specifically on refinement of initially structured grids utilizing a 2-refinement approach where uniformly refined hexes are subdivided into eight child elements. The refinement algorithm consists of identifying marked nodes that are used as the basis for a set of four simple refinement templates. The target application for 2-refinement is a parallel grid-based all-hex meshing tool for high performance computing in a distributed environment. The result is a parallel consistent locally refined mesh requiring minimal communication and where minimum mesh quality is greater than a scaled Jacobian of 0.3 prior to smoothing.

  11. A template-based approach for parallel hexahedral two-refinement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owen, Steven J.; Shih, Ryan M.; Ernst, Corey D.

    Here, we provide a template-based approach for generating locally refined all-hex meshes. We focus specifically on refinement of initially structured grids utilizing a 2-refinement approach where uniformly refined hexes are subdivided into eight child elements. The refinement algorithm consists of identifying marked nodes that are used as the basis for a set of four simple refinement templates. The target application for 2-refinement is a parallel grid-based all-hex meshing tool for high performance computing in a distributed environment. The result is a parallel consistent locally refined mesh requiring minimal communication and where minimum mesh quality is greater than a scaled Jacobian of 0.3 prior to smoothing.

  12. Decrease in Ground-Run Distance of Small Airplanes by Applying Electrically-Driven Wheels

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hiroshi; Nishizawa, Akira

    A new takeoff method for small airplanes was proposed. Ground-roll performance of an airplane driven by electrically-powered wheels was experimentally and computationally studied. The experiments verified that the ground-run distance was decreased by half with a combination of the powered driven wheels and propeller without increase of energy consumption during the ground-roll. The computational analysis showed the ground-run distance of the wheel-driven aircraft was independent of the motor power when the motor capability exceeded the friction between tires and ground. Furthermore, the distance was minimized when the angle of attack was set to the value so that the wing generated negative lift.

  13. Generation of double giant pulses in actively Q-switched lasers

    NASA Astrophysics Data System (ADS)

    Korobeynikova, A. P.; Shaikin, I. A.; Shaykin, A. A.; Koryukin, I. V.; Khazanov, E. A.

    2018-04-01

    Generation of a second giant pulse in a longitudinal mode neighbouring the mode with minimal losses is studied theoretically and experimentally in actively Q-switched lasers. A mathematical model is suggested for explaining giant pulse generation in a laser with multiple longitudinal modes. The model makes allowance not only for a standing wave but also for a running wave for each cavity mode. Results of numerical simulation and data of experiments with a Nd : YLF laser explain the effect of second giant pulse generation in a neighbouring longitudinal mode. After a giant pulse in the mode with minimal losses is generated, the threshold for the neighbouring longitudinal mode is still exceeded due to hole burning in the spatial distribution of the population inversion.

  14. The Design-To-Cost Manifold

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1990-01-01

    Design-to-cost is a popular technique for controlling costs. Although qualitative techniques exist for implementing design-to-cost, quantitative methods are sparse. In the launch vehicle and spacecraft engineering process, whether to minimize mass is usually an issue, and the lack of quantification leads to arguments on both sides. This paper presents a mathematical technique which quantifies both the design-to-cost process and the mass/complexity issue. Parametric cost analysis generates and applies mathematical formulas called cost estimating relationships. In their most common forms, they are continuous and differentiable. This property permits the application of the mathematics of differentiable manifolds. Although the terminology sounds formidable, applying the techniques requires only a knowledge of linear algebra and ordinary differential equations, common subjects in undergraduate scientific and engineering curricula. When the cost c is expressed as a differentiable function of n system metrics, setting the cost c to a constant generates an (n-1)-dimensional subspace of the space of system metrics such that any set of metric values in that space satisfies the constant design-to-cost criterion. This space is a differentiable manifold to which all mathematical properties of differentiable manifolds apply. One important property is that an easily implemented system of ordinary differential equations exists which permits optimization of any function of the system metrics, mass for example, over the design-to-cost manifold. A dual set of equations defines the directions of maximum and minimum cost change. A simplified approximation of the PRICE H(TM) production cost model is used to generate this set of differential equations over [mass, complexity] space. The equations are solved in closed form to obtain the one-dimensional design-to-cost trade and design-for-cost spaces. Preliminary results indicate that cost is relatively insensitive to changes in mass and that the reduction of complexity, both of the manufacturing process and of the spacecraft, is dominant in reducing cost.
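
    One way to write the constrained optimization described here is as a projected-gradient flow on the constant-cost level set; the symbols below are illustrative and the equations are a sketch of the general construction rather than the paper's specific PRICE-based system.

    ```latex
    % Minimal sketch with illustrative symbols: x \in \mathbb{R}^n are the system
    % metrics, c(x) the cost estimating relationship, m(x) the metric to optimize
    % (e.g. mass). Setting c(x) = c_0 defines the (n-1)-dimensional design-to-cost
    % manifold; optimizing m over it can be written as a flow following the
    % gradient of m projected onto the tangent space of the level set:
    \[
      \frac{dx}{dt} \;=\; -\Bigl(I - \frac{\nabla c\,\nabla c^{\mathsf T}}
                                         {\lVert \nabla c\rVert^{2}}\Bigr)\nabla m(x),
      \qquad c(x(t)) \equiv c_0 ,
    \]
    % while the dual directions of maximum and minimum cost change follow the
    % unprojected gradient of c itself:
    \[
      \frac{dx}{ds} \;=\; \pm\,\frac{\nabla c(x)}{\lVert \nabla c(x)\rVert}.
    \]
    ```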

  15. Minimization and management of wastes from biomedical research.

    PubMed Central

    Rau, E H; Alaimo, R J; Ashbrook, P C; Austin, S M; Borenstein, N; Evans, M R; French, H M; Gilpin, R W; Hughes, J; Hummel, S J; Jacobsohn, A P; Lee, C Y; Merkle, S; Radzinski, T; Sloane, R; Wagner, K D; Weaner, L E

    2000-01-01

    Several committees were established by the National Association of Physicians for the Environment to investigate and report on various topics at the National Leadership Conference on Biomedical Research and the Environment held 1-2 November 1999 at the National Institutes of Health in Bethesda, Maryland. This is the report of the Committee on Minimization and Management of Wastes from Biomedical Research. Biomedical research facilities contribute a small fraction of the total amount of wastes generated in the United States, and the rate of generation appears to be decreasing. Significant reductions in generation of hazardous, radioactive, and mixed wastes have recently been reported, even at facilities with rapidly expanding research programs. Changes in the focus of research, improvements in laboratory techniques, and greater emphasis on waste minimization (volume and toxicity reduction) explain the declining trend in generation. The potential for uncontrolled releases of wastes from biomedical research facilities and adverse impacts on the general environment from these wastes appears to be low. Wastes are subject to numerous regulatory requirements and are contained and managed in a manner protective of the environment. Most biohazardous agents, chemicals, and radionuclides that find significant use in research are not likely to be persistent, bioaccumulative, or toxic if they are released. Today, the primary motivations for the ongoing efforts by facilities to improve minimization and management of wastes are regulatory compliance and avoidance of the high disposal costs and liabilities associated with generation of regulated wastes. The committee concluded that there was no evidence suggesting that the anticipated increases in biomedical research will significantly increase generation of hazardous wastes or have adverse impacts on the general environment. This conclusion assumes the positive, countervailing trends of enhanced pollution prevention efforts by facilities and reductions in waste generation resulting from improvements in research methods will continue. PMID:11121362

  16. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF HEATING, VENTILATING, AND AIR CONDITIONING EQUIPMENT

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small- and medium-size manufacturers who want to minimize their generation of hazardous waste but lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at sel...

  17. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF BASEBALL BATS AND GOLF CLUBS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at selected un...

  18. ENVIRONMENTAL RESEARCH BRIEF: WASTE MINIMIZATION ASSESSMENT FOR A MANUFACTURER OF IRON CASTINGS AND FABRICATED SHEET METAL PARTS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at selected univ...

  19. Strategic Minimization of High Level Waste from Pyroprocessing of Spent Nuclear Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpson, Michael F.; Benedict, Robert W.

    The pyroprocessing of spent nuclear fuel results in two high-level waste streams--ceramic and metal waste. Ceramic waste contains active metal fission product-loaded salt from the electrorefining, while the metal waste contains cladding hulls and undissolved noble metals. While pyroprocessing was successfully demonstrated for treatment of spent fuel from Experimental Breeder Reactor-II in 1999, it was done so without a specific objective to minimize high-level waste generation. The ceramic waste process uses “throw-away” technology that is not optimized with respect to volume of waste generated. In looking past treatment of EBR-II fuel, it is critical to minimize waste generation for technology developed under the Global Nuclear Energy Partnership (GNEP). While the metal waste cannot be readily reduced, there are viable routes towards minimizing the ceramic waste. Fission products that generate high amounts of heat, such as Cs and Sr, can be separated from other active metal fission products and placed into short-term, shallow disposal. The remaining active metal fission products can be concentrated into the ceramic waste form using an ion exchange process. It has been estimated that ion exchange can reduce ceramic high-level waste quantities by as much as a factor of 3 relative to throw-away technology.

  20. Approaching the basis set limit for DFT calculations using an environment-adapted minimal basis with perturbation theory: Formulation, proof of concept, and a pilot implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mao, Yuezhi; Horn, Paul R.; Mardirossian, Narbe

    2016-07-28

    Recently developed density functionals have good accuracy for both thermochemistry (TC) and non-covalent interactions (NC) if very large atomic orbital basis sets are used. To approach the basis set limit with potentially lower computational cost, a new self-consistent field (SCF) scheme is presented that employs minimal adaptive basis (MAB) functions. The MAB functions are optimized on each atomic site by minimizing a surrogate function. High accuracy is obtained by applying a perturbative correction (PC) to the MAB calculation, similar to dual basis approaches. Compared to exact SCF results, using this MAB-SCF (PC) approach with the same large target basis set produces <0.15 kcal/mol root-mean-square deviations for most of the tested TC datasets, and <0.1 kcal/mol for most of the NC datasets. The performance of density functionals near the basis set limit can be even better reproduced. With further improvement to its implementation, MAB-SCF (PC) is a promising lower-cost substitute for conventional large-basis calculations as a method to approach the basis set limit of modern density functionals.

  1. Viewpoints of working sandwich generation women and occupational therapists on role balance strategies.

    PubMed

    Evans, Kiah L; Girdler, Sonya J; Falkmer, Torbjorn; Richmond, Janet E; Wagman, Petra; Millsteed, Jeannine; Falkmer, Marita

    2017-09-01

    Occupational therapists need to be cognizant of evidence-based role balance advice and strategies that women with multigenerational caring responsibilities can implement independently or with minimal assistance, as role balance may not be the primary goal during many encounters with this population. Hence, this study aimed to identify the viewpoints on the most helpful role balance strategies for working sandwich generation women, both from their own perspectives and from the perspective of occupational therapists. This was achieved through a Q methodology study, where 54 statements were based on findings from interviews, sandwich generation literature and occupational therapy literature. In total, 31 working sandwich generation women and 42 occupational therapists completed the Q sort through either online or paper administration. The data were analysed using factor analysis with varimax rotation and were interpreted through collaboration with experts in the field. The findings revealed similarities between working sandwich generation women and occupational therapists, particularly in terms of advocating strategies related to sleep, rest and seeking practical assistance from support networks. Differences were also present: the women's viewpoints tended to emphasize strategies for coping with a busy lifestyle while attending to multiple responsibilities, whereas occupational therapy viewpoints prioritized strategies related to the occupational therapy process, such as goal setting, activity-focused interventions, monitoring progress and facilitating sustainable outcomes.

  2. Influence maximization in complex networks through optimal percolation

    NASA Astrophysics Data System (ADS)

    Morone, Flaviano; Makse, Hernan; CUNY Collaboration

    The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. Reference: F. Morone, H. A. Makse, Nature 524,65-68 (2015)
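
    The optimal-percolation approach is usually operationalized through a Collective Influence score; the sketch below computes a CI-style score and greedily removes top-ranked nodes using networkx. The ball radius, toy graph and adaptive-removal loop are illustrative assumptions rather than the exact algorithm of the cited paper.

    ```python
    import networkx as nx

    def collective_influence(G, node, ell=2):
        """CI_ell(i) = (k_i - 1) * sum of (k_j - 1) over nodes j on the boundary
        of the ball of radius ell around i (a sketch of the collective influence
        score used in the optimal-percolation approach)."""
        dist = nx.single_source_shortest_path_length(G, node, cutoff=ell)
        frontier = [j for j, d in dist.items() if d == ell]
        return (G.degree(node) - 1) * sum(G.degree(j) - 1 for j in frontier)

    def top_influencers(G, k=3, ell=2):
        """Greedy adaptive variant: repeatedly remove the highest-CI node."""
        H = G.copy()
        chosen = []
        for _ in range(k):
            best = max(H.nodes, key=lambda n: collective_influence(H, n, ell))
            chosen.append(best)
            H.remove_node(best)
        return chosen

    # toy usage on a scale-free-like graph
    G = nx.barabasi_albert_graph(200, 2, seed=1)
    print(top_influencers(G, k=5))
    ```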

  3. A Hamiltonian approach for the Thermodynamics of AdS black holes

    NASA Astrophysics Data System (ADS)

    Baldiotti, M. C.; Fresneda, R.; Molina, C.

    2017-07-01

    In this work we study the Thermodynamics of D-dimensional Schwarzschild-anti de Sitter (SAdS) black holes. The minimal Thermodynamics of the SAdS spacetime is briefly discussed, highlighting some of its strong points and shortcomings. The minimal SAdS Thermodynamics is extended within a Hamiltonian approach, by means of the introduction of an additional degree of freedom. We demonstrate that the cosmological constant can be introduced in the thermodynamic description of the SAdS black hole with a canonical transformation of the Schwarzschild problem, closely related to the introduction of an anti-de Sitter thermodynamic volume. The treatment presented is consistent, in the sense that it is compatible with the introduction of new thermodynamic potentials, and respects the laws of black hole Thermodynamics. By demanding homogeneity of the thermodynamic variables, we are able to construct a new equation of state that completely characterizes the Thermodynamics of SAdS black holes. The treatment naturally generates phenomenological constants that can be associated with different boundary conditions in underlying microscopic theories. A whole new set of phenomena can be expected from the proposed generalization of SAdS Thermodynamics.

  4. Minimal gravity and Frobenius manifolds: bulk correlation on sphere and disk

    NASA Astrophysics Data System (ADS)

    Aleshkin, Konstantin; Belavin, Vladimir; Rim, Chaiho

    2017-11-01

    There are two alternative approaches to minimal gravity: the direct Liouville approach and matrix models. Recently there has been certain progress in the matrix model approach, growing out of the presence of a Frobenius manifold (FM) structure embedded in the theory. Previous studies were mainly focused on the spherical topology. Essentially, it was shown that the action principle of the Douglas equation allows one to define the free energy and to compute the correlation numbers if the resonance transformations are properly incorporated. The FM structure allows one to find the explicit form of the resonance transformation as well as a closed expression for the partition function. In this paper we elaborate on the case of the gravitating disk. We focus on the bulk correlators and show that, in a similar way as in the closed topology, the generating function can be formulated using the set of flat coordinates on the corresponding FM. Moreover, the resonance transformations, which follow from the spherical topology consideration, are exactly those needed to reproduce the FZZ result of the Liouville gravity approach.

  5. Coarse-graining errors and numerical optimization using a relative entropy framework

    NASA Astrophysics Data System (ADS)

    Chaimovich, Aviel; Shell, M. Scott

    2011-03-01

    The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, Srel, that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework.
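
    For reference, the relative entropy functional minimized in this kind of framework is commonly written as follows; the symbols (atomistic distribution p_AA, coarse-grained distribution p_CG, mapping M, mapping entropy S_map) are illustrative notation, not necessarily the paper's.

    ```latex
    % Sketch of the relative entropy minimized in this framework, written for a
    % configuration-space mapping M from atomistic configurations r_i to
    % coarse-grained ones (symbols are illustrative):
    \[
      S_{\mathrm{rel}}
      \;=\; \sum_{i} p_{\mathrm{AA}}(r_i)\,
            \ln\!\frac{p_{\mathrm{AA}}(r_i)}{p_{\mathrm{CG}}\bigl(M(r_i)\bigr)}
      \;+\; \langle S_{\mathrm{map}} \rangle_{\mathrm{AA}} \;\ge\; 0,
    \]
    % where p_AA and p_CG are the atomistic and coarse-grained configurational
    % probabilities and S_map accounts for the degeneracy of the mapping; the
    % coarse-grained parameters are obtained by minimizing S_rel over them.
    ```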

  6. Named Entity Recognition in Chinese Clinical Text Using Deep Neural Network.

    PubMed

    Wu, Yonghui; Jiang, Min; Lei, Jianbo; Xu, Hua

    2015-01-01

    Rapid growth in electronic health record (EHR) use has led to an unprecedented expansion of available clinical data in electronic formats. However, much of the important healthcare information is locked in narrative documents. Therefore, Natural Language Processing (NLP) technologies, e.g., Named Entity Recognition (NER), which identifies the boundaries and types of entities, have been extensively studied to unlock important clinical information in free text. In this study, we investigated a novel deep learning method to recognize clinical entities in Chinese clinical documents using a minimal feature engineering approach. We developed a deep neural network (DNN) to generate word embeddings from a large unlabeled corpus through unsupervised learning and another DNN for the NER task. The experiment results showed that the DNN with word embeddings trained from the large unlabeled corpus outperformed the state-of-the-art CRF model in the minimal feature engineering setting, achieving the highest F1-score of 0.9280. Further analysis showed that word embeddings derived through unsupervised learning from the large unlabeled corpus remarkably improved the DNN with randomized embeddings, denoting the usefulness of unsupervised feature learning.

  7. Simultaneous Scheduling of Jobs, AGVs and Tools Considering Tool Transfer Times in Multi Machine FMS By SOS Algorithm

    NASA Astrophysics Data System (ADS)

    Sivarami Reddy, N.; Ramamurthy, D. V., Dr.; Prahlada Rao, K., Dr.

    2017-08-01

    This article addresses the simultaneous scheduling of machines, AGVs and tools, where machines are allowed to share tools, considering transfer times of jobs and tools between machines, to generate optimal sequences that minimize makespan in a multi-machine Flexible Manufacturing System (FMS). FMS performance is expected to improve through effective utilization of its resources, by proper integration and synchronization of their scheduling. The Symbiotic Organisms Search (SOS) algorithm is a potent tool and a strong alternative for solving optimization problems like scheduling, and it has proven itself. The proposed SOS algorithm is tested on 22 job sets with makespan as the objective for scheduling of machines and tools where machines are allowed to share tools without considering transfer times of jobs and tools, and the results are compared with those of existing methods. The results show that SOS outperforms them. The same SOS algorithm is then used for simultaneous scheduling of machines, AGVs and tools, where machines are allowed to share tools, considering transfer times of jobs and tools, to determine the optimal sequences that minimize makespan.
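
    A generic symbiotic organisms search update (mutualism and commensalism phases) is sketched below on a toy continuous objective standing in for makespan; the population size, benefit factors and clipping are illustrative assumptions, and the paper's scheduling-specific encoding is not reproduced.

    ```python
    import random

    def sos_minimize(fitness, dim, bounds, pop_size=20, iters=200, seed=0):
        """Minimal symbiotic organisms search sketch (mutualism and commensalism
        phases only); the parasitism phase and the scheduling encoding are omitted."""
        rng = random.Random(seed)
        lo, hi = bounds
        clip = lambda x: [min(hi, max(lo, v)) for v in x]
        pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
        for _ in range(iters):
            best = min(pop, key=fitness)
            for i in range(pop_size):
                j = rng.randrange(pop_size)
                xi, xj = pop[i], pop[j]
                # mutualism: both organisms move toward the best via their mutual vector
                mutual = [(a + b) / 2 for a, b in zip(xi, xj)]
                bf1, bf2 = rng.choice([1, 2]), rng.choice([1, 2])
                cand_i = clip([a + rng.random() * (b - m * bf1)
                               for a, b, m in zip(xi, best, mutual)])
                cand_j = clip([a + rng.random() * (b - m * bf2)
                               for a, b, m in zip(xj, best, mutual)])
                if fitness(cand_i) < fitness(xi): pop[i] = cand_i
                if fitness(cand_j) < fitness(xj): pop[j] = cand_j
                # commensalism: organism i benefits from interaction with a random partner
                k = rng.randrange(pop_size)
                cand = clip([a + rng.uniform(-1, 1) * (b - c)
                             for a, b, c in zip(pop[i], best, pop[k])])
                if fitness(cand) < fitness(pop[i]): pop[i] = cand
        return min(pop, key=fitness)

    # toy usage: minimize a sphere function as a stand-in for makespan
    print(sos_minimize(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5)))
    ```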

  8. Nonlinear radiative heat flux and heat source/sink on entropy generation minimization rate

    NASA Astrophysics Data System (ADS)

    Hayat, T.; Khan, M. Waleed Ahmed; Khan, M. Ijaz; Alsaedi, A.

    2018-06-01

    Entropy generation minimization in nonlinear radiative mixed convective flow towards a surface of variable thickness is addressed. Entropy generation for momentum and temperature is carried out. The flow is driven by the stretching velocity of the sheet. Transformations are used to reduce the system of partial differential equations into ordinary ones. The total entropy generation rate is determined. Series solutions for the zeroth and mth order deformation systems are computed. The domain of convergence for the obtained solutions is identified. Velocity, temperature and concentration fields are plotted and interpreted. The entropy equation is studied through nonlinear mixed convection and radiative heat flux. Velocity and temperature gradients are discussed through graphs. Meaningful results are concluded in the final remarks.

  9. Waste reduction plan for The Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schultz, R.M.

    1990-04-01

    The Oak Ridge National Laboratory (ORNL) is a multipurpose Research and Development (R&D) facility. These R&D activities generate numerous small waste streams. Waste minimization is defined as any action that minimizes the volume or toxicity of waste by avoiding its generation or by recycling. This is accomplished by material substitution, changes to processes, or recycling wastes for reuse. Waste reduction is defined as waste minimization plus treatment which results in volume or toxicity reduction. The ORNL Waste Reduction Program will include both waste minimization and waste reduction efforts. Federal regulations, DOE policies and guidelines, increased costs and liabilities associated with the management of wastes, limited disposal options and facility capacities, and public consciousness have been motivating factors for implementing comprehensive waste reduction programs. DOE Order 5820.2A, Section 3.c.2.4 requires DOE facilities to establish an auditable waste reduction program for all LLW generators. In addition, it further states that any new facilities, or changes to existing facilities, incorporate waste minimization into design considerations. A more recent DOE Order, 3400.1, Section 4.b, requires the preparation of a waste reduction program plan which must be reviewed annually and updated every three years. Implementation of a waste minimization program for hazardous and radioactive mixed wastes is cited in DOE Order 5400.3, Section 7.d.5. This document has been prepared to address these requirements. 6 refs., 1 fig., 2 tabs.

  10. A Method for Integrating Thrust-Vectoring and Actuated Forebody Strakes with Conventional Aerodynamic Controls on a High-Performance Fighter Airplane

    NASA Technical Reports Server (NTRS)

    Lallman, Frederick J.; Davidson, John B.; Murphy, Patrick C.

    1998-01-01

    A method, called pseudo controls, of integrating several airplane controls to achieve cooperative operation is presented. The method eliminates conflicting control motions, minimizes the number of feedback control gains, and reduces the complication of feedback gain schedules. The method is applied to the lateral/directional controls of a modified high-performance airplane. The airplane has a conventional set of aerodynamic controls, an experimental set of thrust-vectoring controls, and an experimental set of actuated forebody strakes. The experimental controls give the airplane additional control power for enhanced stability and maneuvering capabilities while flying over an expanded envelope, especially at high angles of attack. The flight controls are scheduled to generate independent body-axis control moments. These control moments are coordinated to produce stability-axis angular accelerations. Inertial coupling moments are compensated. Thrust-vectoring controls are engaged according to their effectiveness relative to that of the aerodynamic controls. Vane-relief logic removes steady and slowly varying commands from the thrust-vectoring controls to alleviate heating of the thrust turning devices. The actuated forebody strakes are engaged at high angles of attack. This report presents the forward-loop elements of a flight control system that positions the flight controls according to the desired stability-axis accelerations. This report does not include the generation of the required angular acceleration commands by means of pilot controls or the feedback of sensed airplane motions.

  11. A practical model for the train-set utilization: The case of Beijing-Tianjin passenger dedicated line in China

    PubMed Central

    Li, Xiaomeng; Yang, Zhuo

    2017-01-01

    As a sustainable transportation mode, high-speed railway (HSR) has become an efficient way to meet huge travel demand. However, due to the high acquisition and maintenance costs, it is impossible to build enough infrastructure and purchase enough train-sets. Great efforts are required to improve the transport capability of HSR. The utilization efficiency of train-sets (the carrying tools of HSR) is one of the most important factors in the transport capacity of HSR. In order to enhance the utilization efficiency of train-sets, this paper proposes a train-set circulation optimization model to minimize the total connection time. An innovative two-stage approach, consisting of segment generation and segment combination, was designed to solve this model. In order to verify the feasibility of the proposed approach, an experiment was carried out on the Beijing-Tianjin passenger dedicated line to fulfill a 174-trip train diagram. The model results showed that, compared with the traditional Ant Colony Algorithm (ACA), the utilization efficiency of train-sets can be increased from 43.4% (ACA) to 46.9% (two-stage), and one train-set can be saved while fulfilling the same transportation tasks. The approach proposed in this study is faster and more stable than traditional ones; using it, HSR staff can draw up the train-set circulation plan more quickly, and the utilization efficiency of the HSR system is also improved. PMID:28489933

  12. USER'S GUIDE: Strategic Waste Minimization Initiative (SWAMI) Version 2.0 - A Software Tool to Aid in Process Analysis for Pollution Prevention

    EPA Science Inventory

    The Strategic WAste Minimization Initiative (SWAMI) Software, Version 2.0 is a tool for using process analysis for identifying waste minimization opportunities within an industrial setting. The software requires user-supplied information for process definition, as well as materia...

  13. Determination of the Core of a Minimal Bacterial Gene Set†

    PubMed Central

    Gil, Rosario; Silva, Francisco J.; Peretó, Juli; Moya, Andrés

    2004-01-01

    The availability of a large number of complete genome sequences raises the question of how many genes are essential for cellular life. Trying to reconstruct the core of the protein-coding gene set for a hypothetical minimal bacterial cell, we have performed a computational comparative analysis of eight bacterial genomes. Six of the analyzed genomes are very small due to a dramatic genome size reduction process, while the other two, corresponding to free-living relatives, are larger. The available data from several systematic experimental approaches to define all the essential genes in some completely sequenced bacterial genomes were also considered, and a reconstruction of a minimal metabolic machinery necessary to sustain life was carried out. The proposed minimal genome contains 206 protein-coding genes with all the genetic information necessary for self-maintenance and reproduction in the presence of a full complement of essential nutrients and in the absence of environmental stress. The main features of such a minimal gene set, as well as the metabolic functions that must be present in the hypothetical minimal cell, are discussed. PMID:15353568

  14. Computational exploration of the chemical structure space of possible reverse tricarboxylic acid cycle constituents.

    PubMed

    Meringer, Markus; Cleaves, H James

    2017-12-13

    The reverse tricarboxylic acid (rTCA) cycle has been explored from various standpoints as an idealized primordial metabolic cycle. Its simplicity and apparent ubiquity in diverse organisms across the tree of life have been used to argue for its antiquity and its optimality. In 2000 it was proposed that chemoinformatics approaches support some of these views. Specifically, defined queries of the Beilstein database showed that the molecules of the rTCA cycle are heavily represented in such compound databases. We explore here the chemical structure "space," i.e. the set of organic compounds which possess some minimal set of defining characteristics, of the rTCA cycle's intermediates using an exhaustive structure generation method. The rTCA's chemical space as defined by the original criteria and explored by our method is some six to seven times larger than originally considered. Acknowledging that each assumption about what makes the rTCA cycle special limits the possible generative outcomes, there remain many unrealized compounds which fulfill these criteria. That these compounds are unrealized could be due to evolutionary frozen accidents or optimization, though this optimization may also be for systems-level reasons, e.g., the way the pathway and its elements interface with other aspects of metabolism.

  15. Rapid parameterization of small molecules using the Force Field Toolkit.

    PubMed

    Mayne, Christopher G; Saam, Jan; Schulten, Klaus; Tajkhorshid, Emad; Gumbart, James C

    2013-12-15

    The inability to rapidly generate accurate and robust parameters for novel chemical matter continues to severely limit the application of molecular dynamics simulations to many biological systems of interest, especially in fields such as drug discovery. Although the release of generalized versions of common classical force fields, for example, General Amber Force Field and CHARMM General Force Field, have posited guidelines for parameterization of small molecules, many technical challenges remain that have hampered their wide-scale extension. The Force Field Toolkit (ffTK), described herein, minimizes common barriers to ligand parameterization through algorithm and method development, automation of tedious and error-prone tasks, and graphical user interface design. Distributed as a VMD plugin, ffTK facilitates the traversal of a clear and organized workflow resulting in a complete set of CHARMM-compatible parameters. A variety of tools are provided to generate quantum mechanical target data, setup multidimensional optimization routines, and analyze parameter performance. Parameters developed for a small test set of molecules using ffTK were comparable to existing CGenFF parameters in their ability to reproduce experimentally measured values for pure-solvent properties (<15% error from experiment) and free energy of solvation (±0.5 kcal/mol from experiment). Copyright © 2013 Wiley Periodicals, Inc.

  16. Temperature changes accompanying near infrared diode laser endodontic treatment of wet canals.

    PubMed

    Hmud, Raghad; Kahler, William A; Walsh, Laurence J

    2010-05-01

    Diode laser endodontic treatments such as disinfection or the generation of cavitations should not cause deleterious thermal changes in radicular dentin. This study assessed thermal changes in the root canal and on the root surface when using 940 and 980 nm lasers at settings of 4 W/10 Hz and 2.5 W/25 Hz, respectively, delivered into 2000-mum fibers to generate cavitations in water. The root surface temperature in the apical third was recorded, as was the water temperature in coronal, middle, and apical third regions, by using thermocouples placed inside the canal. Lasing was undertaken with either rest periods or rinsing between 5-second laser exposures. Both diode lasers induced only modest temperature changes on the external root surface at the settings used. Even though the temperature of the water within the canal increased during lasing by as much as 30 degrees C, the external root surface temperature increased by only a maximum of 4 degrees C. Irrigation between laser exposures was highly effective in minimizing thermal changes within the root canal and on the root surface. Diode laser parameters that induce cavitation do not result in adverse thermal changes in radicular dentin. Copyright (c) 2010 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  17. Free time minimizers for the three-body problem

    NASA Astrophysics Data System (ADS)

    Moeckel, Richard; Montgomery, Richard; Sánchez Morgado, Héctor

    2018-03-01

    Free time minimizers of the action (called "semi-static" solutions by Mañe in International congress on dynamical systems in Montevideo (a tribute to Ricardo Mañé), vol 362, pp 120-131, 1996) play a central role in the theory of weak KAM solutions to the Hamilton-Jacobi equation (Fathi in Weak KAM Theorem in Lagrangian Dynamics Preliminary Version Number 10, 2017). We prove that any solution to Newton's three-body problem which is asymptotic to Lagrange's parabolic homothetic solution is eventually a free time minimizer. Conversely, we prove that every free time minimizer tends to Lagrange's solution, provided the mass ratios lie in a certain large open set of mass ratios. We were inspired by the work of Da Luz and Maderna (Math Proc Camb Philos Soc 156:209-227, 1980) which showed that every free time minimizer for the N-body problem is parabolic and therefore must be asymptotic to the set of central configurations. We exclude being asymptotic to Euler's central configurations by a second variation argument. Central configurations correspond to rest points for the McGehee blown-up dynamics. The large open set of mass ratios are those for which the linearized dynamics at each Euler rest point has a complex eigenvalue.

  18. An algorithm for designing minimal microbial communities with desired metabolic capacities

    PubMed Central

    Eng, Alexander; Borenstein, Elhanan

    2016-01-01

    Motivation: Recent efforts to manipulate various microbial communities, such as fecal microbiota transplant and bioreactor systems’ optimization, suggest a promising route for microbial community engineering with numerous medical, environmental and industrial applications. However, such applications are currently restricted in scale and often rely on mimicking or enhancing natural communities, calling for the development of tools for designing synthetic communities with specific, tailored, desired metabolic capacities. Results: Here, we present a first step toward this goal, introducing a novel algorithm for identifying minimal sets of microbial species that collectively provide the enzymatic capacity required to synthesize a set of desired target product metabolites from a predefined set of available substrates. Our method integrates a graph theoretic representation of network flow with the set cover problem in an integer linear programming (ILP) framework to simultaneously identify possible metabolic paths from substrates to products while minimizing the number of species required to catalyze these metabolic reactions. We apply our algorithm to successfully identify minimal communities both in a set of simple toy problems and in more complex, realistic settings, and to investigate metabolic capacities in the gut microbiome. Our framework adds to the growing toolset for supporting informed microbial community engineering and for ultimately realizing the full potential of such engineering efforts. Availability and implementation: The algorithm source code, compilation, usage instructions and examples are available under a non-commercial research use only license at https://github.com/borenstein-lab/CoMiDA. Contact: elbo@uw.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153571
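
    The set-cover side of the formulation can be sketched with a small integer linear program; the sketch below uses the PuLP package on toy species-to-enzyme data and omits the graph-flow constraints that the full CoMiDA model adds, so it is an illustration of the idea rather than the published algorithm.

    ```python
    # Hedged, simplified sketch: pick the minimum number of species whose combined
    # enzyme sets cover the reactions needed to reach the target products. The full
    # CoMiDA model also encodes metabolic flow from substrates to products, which
    # is omitted here. Requires the PuLP package.
    import pulp

    species_enzymes = {                      # illustrative toy data
        "sp1": {"r1", "r2"},
        "sp2": {"r2", "r3"},
        "sp3": {"r3", "r4"},
        "sp4": {"r1", "r4"},
    }
    required = {"r1", "r2", "r3", "r4"}      # reactions on some substrate->product path

    prob = pulp.LpProblem("minimal_community", pulp.LpMinimize)
    use = {s: pulp.LpVariable(f"use_{s}", cat="Binary") for s in species_enzymes}

    prob += pulp.lpSum(use.values())                         # minimize number of species
    for r in required:                                       # every reaction must be covered
        prob += pulp.lpSum(use[s] for s, enz in species_enzymes.items() if r in enz) >= 1

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print([s for s, v in use.items() if v.value() == 1])     # e.g. ['sp1', 'sp3']
    ```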

  19. Waste minimization charges up recycling of spent lead-acid batteries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Queneau, P.B.; Troutman, A.L.

    Substantial strides are being made to minimize waste generated from spent lead-acid battery recycling. The Center for Hazardous Materials Research (Pittsburgh) recently investigated the potential for secondary lead smelters to recover lead from battery cases and other materials found at hazardous waste sites. Primary and secondary lead smelters in the U.S. and Canada are processing substantial tonnages of lead wastes while meeting regulatory safeguards. Typical lead wastes include contaminated soil, dross and dust by-products from industrial lead consumers, tetraethyl lead residues, chemical manufacturing by-products, leaded glass, china clay waste, munitions residues and pigments. The secondary lead industry also is developing and installing systems to convert process inputs to products with minimum generation of liquid, solid and gaseous wastes. The industry recently has made substantial accomplishments that minimize waste generation during lead production from its bread-and-butter feedstock--spent lead-acid batteries.

  20. Improving the performance of minimizers and winnowing schemes

    PubMed Central

    Marçais, Guillaume; Pellow, David; Bork, Daniel; Orenstein, Yaron; Shamir, Ron; Kingsford, Carl

    2017-01-01

    Motivation: The minimizers scheme is a method for selecting k-mers from sequences. It is used in many bioinformatics software tools to bin comparable sequences or to sample a sequence in a deterministic fashion at approximately regular intervals, in order to reduce memory consumption and processing time. Although very useful, the minimizers selection procedure has undesirable behaviors (e.g. too many k-mers are selected when processing certain sequences). Some of these problems were already known to the authors of the minimizers technique, and the natural lexicographic ordering of k-mers used by minimizers was recognized as their origin. Many software tools using minimizers employ ad hoc variations of the lexicographic order to alleviate those issues. Results: We provide an in-depth analysis of the effect of k-mer ordering on the performance of the minimizers technique. By using small universal hitting sets (a recently defined concept), we show how to significantly improve the performance of minimizers and avoid some of its worst behaviors. Based on these results, we encourage bioinformatics software developers to use an ordering based on a universal hitting set or, if not possible, a randomized ordering, rather than the lexicographic order. This analysis also settles negatively a conjecture (by Schleimer et al.) on the expected density of minimizers in a random sequence. Availability and Implementation: The software used for this analysis is available on GitHub: https://github.com/gmarcais/minimizers.git. Contact: gmarcais@cs.cmu.edu or carlk@cs.cmu.edu PMID:28881970
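
    A minimal sketch of the underlying minimizers scheme itself (not the paper's universal-hitting-set orderings) shows how swapping the lexicographic order for a randomized one changes which k-mers are selected; the hash-based ordering below is an illustrative assumption:

        # Window minimizers: from each window of w consecutive k-mers, keep the
        # smallest k-mer under a chosen ordering (ties broken by leftmost position).
        import hashlib

        def kmers(seq, k):
            return [seq[i:i + k] for i in range(len(seq) - k + 1)]

        def minimizers(seq, k, w, order):
            ks = kmers(seq, k)
            selected = set()
            for i in range(len(ks) - w + 1):
                window = ks[i:i + w]
                j = min(range(w), key=lambda t: (order(window[t]), t))
                selected.add(i + j)          # position of the selected k-mer
            return selected

        lex = lambda km: km                                        # lexicographic order
        rnd = lambda km: hashlib.md5(km.encode()).hexdigest()      # pseudo-random order

        seq = "ACGTACGTTTTTTTTACGGCATCAGT"
        print(len(minimizers(seq, k=5, w=4, order=lex)))   # density under lexicographic order
        print(len(minimizers(seq, k=5, w=4, order=rnd)))   # density under randomized order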

  1. Utility of antioxidants during assisted reproductive techniques: an evidence based review.

    PubMed

    Agarwal, Ashok; Durairajanayagam, Damayanthi; du Plessis, Stefan S

    2014-11-24

    Assisted reproductive technology (ART) is a common treatment of choice for many couples facing infertility issues, be it due to male or female factor, or idiopathic. Employment of ART techniques, however, comes with its own challenges as the in vitro environment is not nearly as ideal as the in vivo environment, where reactive oxygen species (ROS) build-up leading to oxidative stress is kept in check by the endogenous antioxidant system. While physiological amounts of ROS are necessary for normal reproductive function in vivo, in vitro manipulation of gametes and embryos exposes these cells to excessive ROS production either by endogenous or exogenous environmental factors. In this review, we discuss the sources of ROS in an in vitro clinical setting and the influence of oxidative stress on gamete/embryo quality and the outcome of IVF/ICSI. Sources of ROS and different strategies of overcoming the excessive generation of ROS in vitro are also highlighted. Endogenously, the gametes and the developing embryo become sources of ROS. Multiple exogenous factors act as potential sources of ROS, including exposure to visible light, composition of culture media, pH and temperature, oxygen concentration, centrifugation during spermatozoa preparation, ART technique involving handling of gamete/embryo and cryopreservation technique (freeze/thawing process). Finally, the use of antioxidants as agents to minimize ROS generation in the in vitro environment and as oral therapy is highlighted. Both enzymatic and non-enzymatic antioxidants are discussed and the outcome of studies using these antioxidants as oral therapy in the male or female or their use in vitro in media is presented. While results of studies using certain antioxidant agents are promising, the current body of evidence as a whole suggests the need for further well-designed and larger scale randomized controlled studies, as well as research to minimize oxidative stress conditions in the clinical ART setting.

  2. A phylogenomic approach to bacterial subspecies classification: proof of concept in Mycobacterium abscessus.

    PubMed

    Tan, Joon Liang; Khang, Tsung Fei; Ngeow, Yun Fong; Choo, Siew Woh

    2013-12-13

    Mycobacterium abscessus is a rapidly growing mycobacterium that is often associated with human infections. The taxonomy of this species has undergone several revisions and is still being debated. In this study, we sequenced the genomes of 12 M. abscessus strains and used phylogenomic analysis to perform subspecies classification. A data mining approach was used to rank and select informative genes based on the relative entropy metric for the construction of a phylogenetic tree. The resulting tree topology was similar to that generated using the concatenation of five classical housekeeping genes: rpoB, hsp65, secA, recA and sodA. Additional support for the reliability of the subspecies classification came from the analysis of erm41 and ITS gene sequences, single nucleotide polymorphism (SNP)-based classification and strain clustering demonstrated by a variable number tandem repeat (VNTR) assay and a multilocus sequence analysis (MLSA). We subsequently found that the concatenation of a minimal set of three median-ranked genes: DNA polymerase III subunit alpha (polC), 4-hydroxy-2-ketovalerate aldolase (Hoa) and cell division protein FtsZ (ftsZ), is sufficient to recover the same tree topology. PCR assays designed specifically for these genes showed that all three genes could be amplified in the reference strain of M. abscessus ATCC 19977T. This study provides proof of concept that a whole-genome sequence-based data mining approach can provide confirmatory evidence of the phylogenetic informativeness of existing markers, as well as lead to the discovery of a more economical and informative set of markers that produces similar subspecies classification in M. abscessus. The systematic procedure used in this study to choose the informative minimal set of gene markers can potentially be applied to species or subspecies classification of other bacteria.

  3. Computing global minimizers to a constrained B-spline image registration problem from optimal l1 perturbations to block match data

    PubMed Central

    Castillo, Edward; Castillo, Richard; Fuentes, David; Guerrero, Thomas

    2014-01-01

    Purpose: Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problem associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that incorporates the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. Methods: The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. Results: The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve a high spatial accuracy on a significantly complex image set. Conclusions: The proposed methodology is demonstrated to achieve a high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least squares fit to block match generated estimates. Thus, the framework allows for a wide range of image similarity block match metrics and physical modeling combinations. PMID:24694135
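
    The basis pursuit reformulation can be sketched in a simplified form, with a generic linear fitting matrix standing in for the B-spline system and cvxpy assumed; only the structure of the l1 problem is illustrated, not the authors' implementation:

        # Minimal l1-perturbation to block-match displacements d so that a linear
        # fit B @ c reproduces d + p within a tolerance (toy stand-in data).
        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(0)
        n, m = 200, 20                     # block matches, spline coefficients
        B = rng.standard_normal((n, m))    # stand-in for the B-spline basis matrix
        d = B @ rng.standard_normal(m)
        d[:10] += 5.0                      # a few grossly wrong block matches (outliers)

        p = cp.Variable(n)                 # perturbation applied to the block matches
        c = cp.Variable(m)                 # fitting coefficients
        tol = 1e-3

        prob = cp.Problem(cp.Minimize(cp.norm1(p)),
                          [cp.norm(B @ c - (d + p), 2) <= tol])
        prob.solve()

        # Nonzero entries of p flag the block matches that had to be perturbed;
        # the remaining matches define the trusted point cloud for the spline fit.
        outliers = np.flatnonzero(np.abs(p.value) > 1e-4)
        print(len(outliers), "matches flagged as inconsistent")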

  4. Approximate error conjugation gradient minimization methods

    DOEpatents

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
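
    A generic illustration of the idea (not the patented implementation): each iteration of a least-squares reconstruction evaluates the residual, and hence the approximate error and search direction, on a random subset of rays only:

        # Sketch: conjugate-gradient-style iterations where error and gradient are
        # approximated from a random subset of rays instead of the full system A x = b.
        import numpy as np

        rng = np.random.default_rng(1)
        n_rays, n_vox = 2000, 400
        A = rng.random((n_rays, n_vox))          # ray/voxel intersection weights
        x_true = rng.random(n_vox)
        b = A @ x_true                            # measured ray sums

        x = np.zeros(n_vox)
        d = np.zeros(n_vox)
        g_prev = None
        for it in range(200):
            rays = rng.choice(n_rays, size=200, replace=False)   # subset of rays
            r = A[rays] @ x - b[rays]                            # approximate residual/error
            g = A[rays].T @ r                                    # approximate gradient
            beta = 0.0 if g_prev is None else (g @ g) / (g_prev @ g_prev)
            d = -g + beta * d                                    # conjugate gradient direction
            Ad = A[rays] @ d
            alpha = -(r @ Ad) / (Ad @ Ad + 1e-12)                # minimum along d (approx. error)
            x += alpha * d
            g_prev = g

        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))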

  5. Numerical design and optimization of hydraulic resistance and wall shear stress inside pressure-driven microfluidic networks.

    PubMed

    Damiri, Hazem Salim; Bardaweel, Hamzeh Khalid

    2015-11-07

    Microfluidic networks represent the milestone of microfluidic devices. Recent advancements in microfluidic technologies mandate complex designs where both hydraulic resistance and pressure drop across the microfluidic network are minimized, while wall shear stress is precisely mapped throughout the network. In this work, a combination of theoretical and modeling techniques is used to construct a microfluidic network that operates under minimum hydraulic resistance and minimum pressure drop while constraining wall shear stress throughout the network. The results show that in order to minimize the hydraulic resistance and pressure drop throughout the network while maintaining constant wall shear stress throughout the network, geometric and shape conditions related to the compactness and aspect ratio of the parent and daughter branches must be followed. Also, results suggest that while a "local" minimum hydraulic resistance can be achieved for a geometry with an arbitrary aspect ratio, a "global" minimum hydraulic resistance occurs only when the aspect ratio of that geometry is set to unity. Thus, it is concluded that square and equilateral triangular cross-sectional area microfluidic networks have the least resistance compared to all rectangular and isosceles triangular cross-sectional microfluidic networks, respectively. Precise control over wall shear stress through the bifurcations of the microfluidic network is demonstrated in this work. Three multi-generation microfluidic network designs are considered. In these three designs, wall shear stress in the microfluidic network is successfully kept constant, increased in the daughter-branch direction, or decreased in the daughter-branch direction, respectively. For the multi-generation microfluidic network with constant wall shear stress, the design guidelines presented in this work result in identical profiles of wall shear stresses not only within a single generation but also through all the generations of the microfluidic network under investigation. The results obtained in this work are consistent with previously reported data and suitable for a wide range of lab-on-chip applications.

  6. Radiofrequency energy antenna coupling to common laparoscopic instruments: practical implications.

    PubMed

    Jones, Edward L; Robinson, Thomas N; McHenry, Jennifer R; Dunn, Christina L; Montero, Paul N; Govekar, Henry R; Stiegmann, Greg V

    2012-11-01

    Electromagnetic coupling can occur between the monopolar "Bovie" instrument and other laparoscopic instruments without direct contact by a phenomenon termed antenna coupling. The purpose of this study was to determine if, and to what extent, radiofrequency energy couples to other common laparoscopic instruments and to describe practical steps that can minimize the magnitude of antenna coupling. In a laparoscopic simulator, monopolar radiofrequency energy was delivered to an L-hook. The tips of standard, nonelectrical laparoscopic instruments (either an unlit 10 mm telescope or a 5 mm grasper) were placed adjacent to bovine liver tissue and were never in contact with the active electrode. Thermal imaging quantified the change in tissue temperature nearest the tip of the telescope or grasper at the end of a 5 s activation of the active electrode. A 5 s activation (30 watts, coagulation mode, 4 cm separation between instruments) increased tissue temperature compared with baseline adjacent to the grasper tip (2.2 ± 2.2 °C; p = 0.013) and telescope tip (38.2 ± 8.0 °C; p < 0.001). The laparoscopic telescope tip increased tissue temperature more than the laparoscopic grasper tip (p < 0.001). Lowering the generator power from 30 to 15 Watts decreased the heat generated at the telescope tip (38.2 ± 8.0 vs. 13.5 ± 7.5 °C; p < 0.001). Complete separation of the camera/light cords and the active electrode cord decreased the heat generated near the telescope tip compared with parallel bundling of the cords (38.2 ± 8.0 vs. 15.7 ± 11.6 °C; p < 0.001). Commonly used laparoscopic instruments couple monopolar radiofrequency energy without direct contact with the active electrode, a phenomenon that results in heat transfer from a nonelectrically active instrument tip to adjacent tissue. Practical steps to minimize heat transfer resulting from antenna coupling include reducing the monopolar generator power setting and avoiding of parallel bundling of the telescope and active electrode cords.

  7. Arctic Digital Elevation Models (DEMs) generated by Surface Extraction from TIN-Based Searchspace Minimization (SETSM) algorithm from RPCs-based Imagery

    NASA Astrophysics Data System (ADS)

    Noh, M. J.; Howat, I. M.; Porter, C. C.; Willis, M. J.; Morin, P. J.

    2016-12-01

    The Arctic is undergoing rapid change associated with climate warming. Digital Elevation Models (DEMs) provide critical information for change measurement and infrastructure planning in this vulnerable region, yet the existing quality and coverage of DEMs in the Arctic is poor. Low contrast and repeatedly-textured surfaces, such as snow and glacial ice and mountain shadows, all common in the Arctic, challenge existing stereo-photogrammetric techniques. Submeter resolution, stereoscopic satellite imagery with high geometric and radiometric quality, and wide spatial coverage are becoming increasingly accessible to the scientific community. To utilize this imagery for extracting DEMs at a large scale over glaciated and high latitude regions, we developed the Surface Extraction from TIN-based Searchspace Minimization (SETSM) algorithm. SETSM is fully automatic (i.e. no search parameter settings are needed) and uses only the satellite rational polynomial coefficients (RPCs). Using SETSM, we have generated a large number of DEMs (> 100,000 scene pairs) from WorldView, GeoEye and QuickBird stereo images collected by DigitalGlobe Inc. and archived by the Polar Geospatial Center (PGC) at the University of Minnesota through an academic licensing program maintained by the US National Geospatial-Intelligence Agency (NGA). SETSM is the primary DEM generation software for the US National Science Foundation's ArcticDEM program, with the objective of generating high resolution (2-8m) topography for the entire Arctic landmass, including seamless DEM mosaics and repeat DEM strips for change detection. ArcticDEM is a collaboration among multiple US universities, governmental agencies and private companies, as well as international partners assisting with quality control and registration. ArcticDEM is being produced using the petascale Blue Waters supercomputer at the National Center for Supercomputing Applications at the University of Illinois. In this paper, we introduce the SETSM algorithm and the processing system used for the ArcticDEM project, as well as provide notable examples of ArcticDEM products.

  8. a Framework for AN Automatic Seamline Engine

    NASA Astrophysics Data System (ADS)

    Al-Durgham, M.; Downey, M.; Gehrke, S.; Beshah, B. T.

    2016-06-01

    Seamline generation is a crucial last step in the ortho-image mosaicking process. In particular, it is required to convolute residual geometric and radiometric imperfections that stem from various sources. For instance, temporal differences in the acquired data will cause the scene content and illumination conditions to vary. These variations can be modelled successfully. However, one is left with micro-differences that do need to be considered in seamline generation. Another cause of discrepancies originates from the rectification surface as it will not model the actual terrain and especially human-made objects perfectly. Quality of the image orientation will also contribute to the overall differences between adjacent ortho-rectified images. Our approach takes into consideration the aforementioned differences in designing a seamline engine. We have identified the following essential behaviours of the seamline in our engine: 1) Seamlines must pass through the path of least resistance, i.e., overlap areas with low radiometric differences. 2) Seamlines must not intersect with breaklines as that will lead to visible geometric artefacts. And finally, 3), shorter seamlines are generally favourable; they also result in faster operator review and, where necessary, interactive editing cycles. The engine design also permits alteration of the above rules for special cases. Although our preliminary experiments are geared towards line imaging systems (i.e., the Leica ADS family), our seamline engine remains sensor agnostic. Hence, our design is capable of mosaicking images from various sources with minimal effort. The main idea behind this engine is using graph cuts which, in spirit, is based on the max-flow min-cut theorem. The main advantage of using graph-cut theory is that the generated solution is global in the energy minimization sense. In addition, graph cuts allow for a highly scalable design where a set of rules contribute towards a cost function which, in turn, influences the path of minimum resistance for the seamlines. In this paper, the authors present an approach for achieving quality seamlines relatively quickly and with emphasis on generating truly seamless ortho-mosaics.
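
    The graph-cut core can be illustrated on a toy cost grid (networkx assumed; the engine's rule-based cost function and breakline handling are not reproduced here):

        # Seamline as a minimum s-t cut through a grid of per-pixel radiometric
        # difference costs between two overlapping ortho-images (toy example).
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(2)
        H, W = 8, 12
        diff = rng.random((H, W))          # radiometric difference in the overlap area
        diff[:, 5] = 0.01                  # a column of low difference: a cheap path

        G = nx.DiGraph()
        for y in range(H):
            for x in range(W):
                for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                    ny_, nx_ = y + dy, x + dx
                    if 0 <= ny_ < H and 0 <= nx_ < W:
                        # cutting between two pixels costs their mean difference
                        G.add_edge((y, x), (ny_, nx_), capacity=(diff[y, x] + diff[ny_, nx_]) / 2)

        # Pixels on the left edge belong to image A, right edge to image B.
        for y in range(H):
            G.add_edge("A", (y, 0), capacity=float("inf"))
            G.add_edge((y, W - 1), "B", capacity=float("inf"))

        cut_value, (side_a, side_b) = nx.minimum_cut(G, "A", "B")
        print("seam cost:", round(cut_value, 3), "| pixels assigned to image A:", len(side_a) - 1)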

  9. The Costs of Legislated Minimal Competency Requirements. A background paper prepared for the Minimal Competency Workshops sponsored by the Education Commission of the States and the National Institute of Education.

    ERIC Educational Resources Information Center

    Anderson, Barry D.

    Little is known about the costs of setting up and implementing legislated minimal competency testing (MCT). To estimate the financial obstacles which lie between the idea and its implementation, MCT requirements are viewed from two perspectives. The first, government regulation, views legislated minimal competency requirements as an attempt by the…

  10. Use of Binary Partition Tree and energy minimization for object-based classification of urban land cover

    NASA Astrophysics Data System (ADS)

    Li, Mengmeng; Bijker, Wietske; Stein, Alfred

    2015-04-01

    Two main challenges are faced when classifying urban land cover from very high resolution satellite images: obtaining an optimal image segmentation and distinguishing buildings from other man-made objects. For optimal segmentation, this work proposes a hierarchical representation of an image by means of a Binary Partition Tree (BPT) and an unsupervised evaluation of image segmentations by energy minimization. For building extraction, we apply fuzzy sets to create a fuzzy landscape of shadows which in turn involves a two-step procedure. The first step is a preliminary image classification at a fine segmentation level to generate vegetation and shadow information. The second step models the directional relationship between building and shadow objects to extract building information at the optimal segmentation level. We conducted the experiments on two datasets of Pléiades images from Wuhan City, China. To demonstrate its performance, the proposed classification is compared at the optimal segmentation level with Maximum Likelihood Classification and Support Vector Machine classification. The results show that the proposed classification produced the highest overall accuracies and kappa coefficients, and the smallest over-classification and under-classification geometric errors. We conclude first that integrating BPT with energy minimization offers an effective means for image segmentation. Second, we conclude that the directional relationship between building and shadow objects represented by a fuzzy landscape is important for building extraction.

  11. Artificial neural networks as alternative tool for minimizing error predictions in manufacturing ultradeformable nanoliposome formulations.

    PubMed

    León Blanco, José M; González-R, Pedro L; Arroyo García, Carmen Martina; Cózar-Bernal, María José; Calle Suárez, Marcos; Canca Ortiz, David; Rabasco Álvarez, Antonio María; González Rodríguez, María Luisa

    2018-01-01

    This work was aimed at determining the feasibility of artificial neural networks (ANN) by implementing backpropagation algorithms with default settings to generate better predictive models than multiple linear regression (MLR) analysis. The study was hypothesized on timolol-loaded liposomes. As tutorial data for ANN, causal factors were used, which were fed into the computer program. The number of training cycles has been identified in order to optimize the performance of the ANN. The optimization was performed by minimizing the error between the predicted and real response values in the training step. The results showed that training was stopped at 10 000 training cycles with 80% of the pattern values, because at this point the ANN generalizes better. Minimum validation error was achieved at 12 hidden neurons in a single layer. MLR has great prediction ability, with errors between predicted and real values lower than 1% in some of the parameters evaluated. Thus, the performance of this model was compared to that of the MLR using a factorial design. Optimal formulations were identified by minimizing the distance among measured and theoretical parameters, by estimating the prediction errors. Results indicate that the ANN shows much better predictive ability than the MLR model. These findings demonstrate the increased efficiency of the combination of ANN and design of experiments, compared to the conventional MLR modeling techniques.
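
    A rough scikit-learn stand-in for the comparison (the 12-neuron hidden layer and the 10 000 iteration cap mirror the abstract; the data and everything else are assumptions):

        # Compare a small feedforward ANN (12 hidden neurons) against multiple linear
        # regression on synthetic formulation data (stand-in for the liposome factors).
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        X = rng.random((200, 4))                       # causal factors (e.g. lipid ratios)
        y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2]    # nonlinear response (e.g. entrapment)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.8, random_state=0)

        mlr = LinearRegression().fit(X_tr, y_tr)
        ann = MLPRegressor(hidden_layer_sizes=(12,), max_iter=10000,
                           random_state=0).fit(X_tr, y_tr)

        print("MLR R^2:", round(mlr.score(X_te, y_te), 3))
        print("ANN R^2:", round(ann.score(X_te, y_te), 3))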

  12. Of Minima and Maxima: The Social Significance of Minimal Competency Testing and the Search for Educational Excellence.

    ERIC Educational Resources Information Center

    Ericson, David P.

    1984-01-01

    Explores the many meanings of the minimal competency testing movement and the more recent mobilization for educational excellence in the schools. Argues that increasing the value of the diploma by setting performance standards on minimal competency tests and by elevating academic graduation standards may strongly conflict with policies encouraging…

  13. Rifaximin Exerts Beneficial Effects Independent of its Ability to Alter Microbiota Composition.

    PubMed

    Kang, Dae J; Kakiyama, Genta; Betrapally, Naga S; Herzog, Jeremy; Nittono, Hiroshi; Hylemon, Phillip B; Zhou, Huiping; Carroll, Ian; Yang, Jing; Gillevet, Patrick M; Jiao, Chunhua; Takei, Hajime; Pandak, William M; Iida, Takashi; Heuman, Douglas M; Fan, Sili; Fiehn, Oliver; Kurosawa, Takao; Sikaroodi, Masoumeh; Sartor, R B; Bajaj, Jasmohan S

    2016-08-25

    Rifaximin has clinical benefits in minimal hepatic encephalopathy (MHE) but the mechanism of action is unclear. The antibiotic-dependent and -independent effects of rifaximin need to be elucidated in the setting of MHE-associated microbiota. The aim was to assess the action of rifaximin on the intestinal barrier, inflammatory milieu and ammonia generation independent of the microbiota. Four germ-free (GF) mouse groups were used: (1) GF, (2) GF+rifaximin, (3) humanized with stool from an MHE patient, and (4) humanized+rifaximin. Mice were followed for 30 days while rifaximin was administered in chow at 100 mg/kg from days 16-30. We tested for ammonia generation (small-intestinal glutaminase, serum ammonia, and cecal glutamine/amino-acid moieties), systemic inflammation (serum IL-1β, IL-6), intestinal barrier (FITC-dextran, large-/small-intestinal expression of IL-1β, IL-6, MCP-1, e-cadherin and zonulin) along with microbiota composition (colonic and fecal multi-tagged sequencing) and function (endotoxemia, fecal bile acid deconjugation and de-hydroxylation). All mice survived until day 30. In the GF setting, rifaximin decreased intestinal ammonia generation (lower serum ammonia, increased small-intestinal glutaminase, and cecal glutamine content) without changing inflammation or intestinal barrier function. Humanized microbiota increased systemic/intestinal inflammation and endotoxemia without hyperammonemia. Rifaximin therapy significantly ameliorated these inflammatory cytokines. Rifaximin also favorably impacted microbiota function (reduced endotoxin and decreased deconjugation and formation of potentially toxic secondary bile acids), but not microbial composition in humanized mice. Rifaximin beneficially alters intestinal ammonia generation by regulating intestinal glutaminase expression independent of gut microbiota. MHE-associated fecal colonization results in intestinal and systemic inflammation in GF mice, which is also ameliorated with rifaximin.

  14. Maximize, minimize or target - optimization for a fitted response from a designed experiment

    DOE PAGES

    Anderson-Cook, Christine Michaela; Cao, Yongtao; Lu, Lu

    2016-04-01

    One of the common goals of running and analyzing a designed experiment is to find a location in the design space that optimizes the response of interest. Depending on the goal of the experiment, we may seek to maximize or minimize the response, or set the process to hit a particular target value. After the designed experiment, a response model is fitted and the optimal settings of the input factors are obtained based on the estimated response model. Furthermore, the suggested optimal settings of the input factors are then used in the production environment.
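
    The final step, locating optimal factor settings from a fitted response model, can be sketched as follows (hypothetical data, a quadratic response surface, and scipy are assumed):

        # Fit a second-order response surface to (hypothetical) designed-experiment
        # data, then locate the factor settings that minimize the predicted response.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(4)
        X = rng.uniform(-1, 1, size=(30, 2))                         # coded factor settings
        y = 3 + 2 * X[:, 0] - X[:, 1] + 1.5 * X[:, 0] ** 2 \
              + X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 1] + rng.normal(0, 0.1, 30)

        def design_matrix(X):
            x1, x2 = X[:, 0], X[:, 1]
            return np.column_stack([np.ones(len(X)), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

        beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)  # fitted response model

        def predicted(x):
            return float(design_matrix(np.atleast_2d(x)) @ beta)

        res = minimize(predicted, x0=[0.0, 0.0], bounds=[(-1, 1), (-1, 1)])
        print("optimal coded settings:", np.round(res.x, 3), "predicted response:", round(res.fun, 3))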

  15. Entropy Generation Minimization in Dimethyl Ether Synthesis: A Case Study

    NASA Astrophysics Data System (ADS)

    Kingston, Diego; Razzitte, Adrián César

    2018-04-01

    Entropy generation minimization is a method that helps improve the efficiency of real processes and devices. In this article, we study the entropy production (due to chemical reactions, heat exchange and friction) in a conventional reactor that synthesizes dimethyl ether and minimize it by modifying different operating variables of the reactor, such as composition, temperature and pressure, while aiming at a fixed production of dimethyl ether. Our results indicate that it is possible to reduce the entropy production rate by nearly 70 % and that, by changing only the inlet composition, it is possible to cut it by nearly 40 %, though this comes at the expense of greater dissipation due to heat transfer. We also study the alternative of coupling the reactor with another, where dehydrogenation of methylcyclohexane takes place. In that case, entropy generation can be reduced by 54 %, when pressure, temperature and inlet molar flows are varied. These examples show that entropy generation analysis can be a valuable tool in engineering design and applications aiming at process intensification and efficient operation of plant equipment.

  16. Solar electricity supply isolines of generation capacity and storage.

    PubMed

    Grossmann, Wolf; Grossmann, Iris; Steininger, Karl W

    2015-03-24

    The recent sharp drop in the cost of photovoltaic (PV) electricity generation accompanied by globally rapidly increasing investment in PV plants calls for new planning and management tools for large-scale distributed solar networks. Of major importance are methods to overcome intermittency of solar electricity, i.e., to provide dispatchable electricity at minimal costs. We find that pairs of electricity generation capacity G and storage S that give dispatchable electricity and are minimal with respect to S for a given G exhibit a smooth relationship of mutual substitutability between G and S. These isolines between G and S support the solving of several tasks, including the optimal sizing of generation capacity and storage, optimal siting of solar parks, optimal connections of solar parks across time zones for minimizing intermittency, and management of storage in situations of far below average insolation to provide dispatchable electricity. G-S isolines allow determining the cost-optimal pair (G,S) as a function of the cost ratio of G and S. G-S isolines provide a method for evaluating the effect of geographic spread and time zone coverage on costs of solar electricity.
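
    The minimal storage S paired with a given generation capacity G can be estimated by simulating the storage balance over an insolation time series; a toy sketch with an entirely synthetic hourly profile, constant demand, no efficiency losses, and curtailment of excess generation (all assumptions):

        # For a given generation capacity G, find the smallest storage S that keeps a
        # constant demand dispatchable over a synthetic insolation time series.
        import numpy as np

        hours = np.arange(24 * 365)
        insolation = np.clip(np.sin((hours % 24 - 6) / 12 * np.pi), 0, None)   # kW per kW installed
        demand = 1.0                                                            # constant load, kW

        def minimal_storage(G, insolation, demand):
            """Smallest storage capacity (kWh) for which the store never runs dry."""
            net = G * insolation - demand                 # hourly surplus (+) or deficit (-)
            soc = np.cumsum(net)                          # cumulative energy balance
            drawdown = np.maximum.accumulate(soc) - soc   # deepest discharge to be covered
            return float(drawdown.max())

        for G in (4.0, 5.0, 6.0, 8.0):
            print(f"G = {G} kW  ->  minimal S = {minimal_storage(G, insolation, demand):7.1f} kWh")

    Sweeping G and recording the resulting minimal S traces out one such isoline, which is what makes the mutual substitutability between G and S visible.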

  17. Solar electricity supply isolines of generation capacity and storage

    PubMed Central

    Grossmann, Wolf; Grossmann, Iris; Steininger, Karl W.

    2015-01-01

    The recent sharp drop in the cost of photovoltaic (PV) electricity generation accompanied by globally rapidly increasing investment in PV plants calls for new planning and management tools for large-scale distributed solar networks. Of major importance are methods to overcome intermittency of solar electricity, i.e., to provide dispatchable electricity at minimal costs. We find that pairs of electricity generation capacity G and storage S that give dispatchable electricity and are minimal with respect to S for a given G exhibit a smooth relationship of mutual substitutability between G and S. These isolines between G and S support the solving of several tasks, including the optimal sizing of generation capacity and storage, optimal siting of solar parks, optimal connections of solar parks across time zones for minimizing intermittency, and management of storage in situations of far below average insolation to provide dispatchable electricity. G−S isolines allow determining the cost-optimal pair (G,S) as a function of the cost ratio of G and S. G−S isolines provide a method for evaluating the effect of geographic spread and time zone coverage on costs of solar electricity. PMID:25755261

  18. Towards the optimal design of an uncemented acetabular component using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Ghosh, Rajesh; Pratihar, Dilip Kumar; Gupta, Sanjay

    2015-12-01

    Aseptic loosening of the acetabular component (hemispherical socket of the pelvic bone) has been mainly attributed to bone resorption and excessive generation of wear particle debris. The aim of this study was to determine optimal design parameters for the acetabular component that would minimize bone resorption and volumetric wear. Three-dimensional finite element models of intact and implanted pelvises were developed using data from computed tomography scans. A multi-objective optimization problem was formulated and solved using a genetic algorithm. A combination of suitable implant material and corresponding set of optimal thicknesses of the component was obtained from the Pareto-optimal front of solutions. The ultra-high-molecular-weight polyethylene (UHMWPE) component generated considerably greater volumetric wear but lower bone density loss compared to carbon-fibre reinforced polyetheretherketone (CFR-PEEK) and ceramic. CFR-PEEK was located in the range between ceramic and UHMWPE. Although ceramic appeared to be a viable alternative to cobalt-chromium-molybdenum alloy, CFR-PEEK seems to be the most promising alternative material.

  19. Simplified Design Equations for Class-E Neural Prosthesis Transmitters

    PubMed Central

    Troyk, Philip; Hu, Zhe

    2013-01-01

    Extreme miniaturization of implantable electronic devices is recognized as essential for the next generation of neural prostheses, owing to the need for minimizing the damage and disruption of the surrounding neural tissue. Transcutaneous power and data transmission via a magnetic link remains the most effective means of powering and controlling implanted neural prostheses. Reduction in the size of the coil, within the neural prosthesis, demands the generation of a high-intensity radio frequency magnetic field from the extracorporeal transmitter. The Class-E power amplifier circuit topology has been recognized as a highly effective means of producing large radio frequency currents within the transmitter coil. Unfortunately, design of a Class-E circuit is most often complicated by the need to solve a complex set of equations so as to implement both the zero-voltage-switching and zero-voltage-derivative-switching conditions that are required for efficient operation. This paper presents simple explicit design equations for designing the Class-E circuit topology. Numerical design examples are presented to illustrate the design procedure. PMID:23292784

  20. Enhanced FIB-SEM systems for large-volume 3D imaging

    PubMed Central

    Xu, C Shan; Hayworth, Kenneth J; Lu, Zhiyuan; Grob, Patricia; Hassan, Ahmed M; García-Cerdán, José G; Niyogi, Krishna K; Nogales, Eva; Weinberg, Richard J; Hess, Harald F

    2017-01-01

    Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) can automatically generate 3D images with superior z-axis resolution, yielding data that needs minimal image registration and related post-processing. Obstacles blocking wider adoption of FIB-SEM include slow imaging speed and lack of long-term system stability, which caps the maximum possible acquisition volume. Here, we present techniques that accelerate image acquisition while greatly improving FIB-SEM reliability, allowing the system to operate for months and generating continuously imaged volumes > 10^6 µm^3. These volumes are large enough for connectomics, where the excellent z resolution can help in tracing of small neuronal processes and accelerate the tedious and time-consuming human proofreading effort. Even higher resolution can be achieved on smaller volumes. We present example data sets from mammalian neural tissue, Drosophila brain, and Chlamydomonas reinhardtii to illustrate the power of this novel high-resolution technique to address questions in both connectomics and cell biology. DOI: http://dx.doi.org/10.7554/eLife.25916.001 PMID:28500755

  1. A Genetic Algorithm Approach to Motion Sensor Placement in Smart Environments.

    PubMed

    Thomas, Brian L; Crandall, Aaron S; Cook, Diane J

    2016-04-01

    Smart environments and ubiquitous computing technologies hold great promise for a wide range of real world applications. The medical community is particularly interested in high quality measurement of activities of daily living. With accurate computer modeling of older adults, decision support tools may be built to assist care providers. One aspect of effectively deploying these technologies is determining where the sensors should be placed in the home to effectively support these end goals. This work introduces and evaluates a set of approaches for generating sensor layouts in the home. These approaches range from the gold standard of human intuition-based placement to more advanced search algorithms, including Hill Climbing and Genetic Algorithms. The generated layouts are evaluated based on their ability to detect activities while minimizing the number of needed sensors. Sensor-rich environments can provide valuable insights about adults as they go about their lives. These sensors, once in place, provide information on daily behavior that can facilitate an aging-in-place approach to health care.
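
    A toy genetic algorithm for the placement problem (random per-location coverage values stand in for the activity-detection score; population size, mutation rate and penalty weight are all assumptions):

        # Toy GA: choose a subset of candidate sensor locations that maximizes an
        # activity-coverage score while penalizing the number of sensors used.
        import random

        random.seed(0)
        N_LOCATIONS = 30
        coverage = [random.random() for _ in range(N_LOCATIONS)]   # stand-in per-location value

        def fitness(layout):                     # layout: list of 0/1 flags per location
            covered = sum(c for c, bit in zip(coverage, layout) if bit)
            return covered - 0.15 * sum(layout)  # reward coverage, penalize sensor count

        def crossover(a, b):
            cut = random.randrange(1, N_LOCATIONS)
            return a[:cut] + b[cut:]

        def mutate(layout, rate=0.05):
            return [bit ^ (random.random() < rate) for bit in layout]

        pop = [[random.randint(0, 1) for _ in range(N_LOCATIONS)] for _ in range(60)]
        for generation in range(100):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:20]                                   # elitist selection
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(40)]
            pop = parents + children

        best = max(pop, key=fitness)
        print("sensors used:", sum(best), "fitness:", round(fitness(best), 3))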

  2. A Genetic Algorithm Approach to Motion Sensor Placement in Smart Environments

    PubMed Central

    Thomas, Brian L.; Crandall, Aaron S.; Cook, Diane J.

    2016-01-01

    Smart environments and ubiquitous computing technologies hold great promise for a wide range of real world applications. The medical community is particularly interested in high quality measurement of activities of daily living. With accurate computer modeling of older adults, decision support tools may be built to assist care providers. One aspect of effectively deploying these technologies is determining where the sensors should be placed in the home to effectively support these end goals. This work introduces and evaluates a set of approaches for generating sensor layouts in the home. These approaches range from the gold standard of human intuition-based placement to more advanced search algorithms, including Hill Climbing and Genetic Algorithms. The generated layouts are evaluated based on their ability to detect activities while minimizing the number of needed sensors. Sensor-rich environments can provide valuable insights about adults as they go about their lives. These sensors, once in place, provide information on daily behavior that can facilitate an aging-in-place approach to health care. PMID:27453810

  3. Enhanced FIB-SEM systems for large-volume 3D imaging

    DOE PAGES

    Xu, C. Shan; Hayworth, Kenneth J.; Lu, Zhiyuan; ...

    2017-05-13

    Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) can automatically generate 3D images with superior z-axis resolution, yielding data that needs minimal image registration and related post-processing. Obstacles blocking wider adoption of FIB-SEM include slow imaging speed and lack of long-term system stability, which caps the maximum possible acquisition volume. Here, we present techniques that accelerate image acquisition while greatly improving FIB-SEM reliability, allowing the system to operate for months and generating continuously imaged volumes > 10^6 µm^3. These volumes are large enough for connectomics, where the excellent z resolution can help in tracing of small neuronal processes and accelerate the tedious and time-consuming human proofreading effort. Even higher resolution can be achieved on smaller volumes. We present example data sets from mammalian neural tissue, Drosophila brain, and Chlamydomonas reinhardtii to illustrate the power of this novel high-resolution technique to address questions in both connectomics and cell biology.

  4. Leadless cardiac pacemakers: present and the future.

    PubMed

    Chew, Derek S; Kuriachan, Vikas

    2018-01-01

    For many decades, pacing technology has consisted of a generator attached to leads that are usually transvenous. Recently, leadless pacemakers have been studied in clinical settings and are now available for use in many countries. This includes the single-component Nanostim Leadless Cardiac Pacemaker and Micra Transcatheter Pacing System, as well as the multicomponent Wireless Stimulation Endocardial system. Clinical studies of single-component leadless pacing technology have shown that these devices can be successfully implanted with minimal complications. The follow-up studies also seem to confirm the findings from the initial clinical trials. These systems offer some advantages over a traditional pacing system comprising a subcutaneous generator and transvenous leads. In many ways, these leadless systems are disruptive technologies that are changing the traditional pacemaker concept and are preferred for some patients. Ongoing research is needed to better assess their long-term function, safety, and end-of-life strategies. In the future, multichamber leadless pacing is expected to be developed, perhaps obviating the need for transvenous leads and their associated complications.

  5. Stabilization of the μ-opioid receptor by truncated single transmembrane splice variants through a chaperone-like action.

    PubMed

    Xu, Jin; Xu, Ming; Brown, Taylor; Rossi, Grace C; Hurd, Yasmin L; Inturrisi, Charles E; Pasternak, Gavril W; Pan, Ying-Xian

    2013-07-19

    The μ-opioid receptor gene, OPRM1, undergoes extensive alternative pre-mRNA splicing, as illustrated by the identification of an array of splice variants generated by both 5' and 3' alternative splicing. The current study reports the identification of another set of splice variants conserved across species that are generated through exon skipping or insertion that encodes proteins containing only a single transmembrane (TM) domain. Using a Tet-Off system, we demonstrated that the truncated single TM variants can dimerize with the full-length 7-TM μ-opioid receptor (MOR-1) in the endoplasmic reticulum, leading to increased expression of MOR-1 at the protein level by a chaperone-like function that minimizes endoplasmic reticulum-associated degradation. In vivo antisense studies suggested that the single TM variants play an important role in morphine analgesia, presumably through modulation of receptor expression levels. Our studies suggest the functional roles of truncated receptors in other G protein-coupled receptor families.

  6. Integrating Multibody Simulation and CFD: toward Complex Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Pieri, Stefano; Poloni, Carlo; Mühlmeier, Martin

    This paper describes the use of integrated multidisciplinary analysis and optimization of a race car model on a predefined circuit. The objective is the definition of the most efficient geometric configuration that can guarantee the lowest lap time. To carry out this study, it was necessary to interface the design optimization software modeFRONTIER with the following software packages: CATIA v5, a three-dimensional CAD package, used for the definition of the parametric geometry; A.D.A.M.S./Motorsport, a multi-body dynamics simulation package; IcemCFD, a mesh generator, for the automatic generation of the CFD grid; CFX, a Navier-Stokes code, for the prediction of fluid-dynamic forces. The process integration makes it possible to compute, for each geometrical configuration, a set of aerodynamic coefficients that are then used in the multibody simulation for the computation of the lap time. Finally, an automatic optimization procedure is started and the lap time is minimized.

  7. WASTE MINIMIZATION AUDIT REPORT: CASE STUDIES OF MINIMIZATION OF SOLVENT WASTES AND ELECTROPLATING WASTES AT A DOD (DEPARTMENT OF DEFENSE) INSTALLATION

    EPA Science Inventory

    The report presents the results of a waste minimization audit carried out in 1987 at a tank reconditioning facility operated by the DOD. The audit team developed recommendations for reducing the generation of F006 wastewater treatment sludge and F002 and F004 solvent wastes. In addition to det...

  8. New developments in FEYNRULES

    NASA Astrophysics Data System (ADS)

    Alloul, Adam; Christensen, Neil D.; Degrande, Céline; Duhr, Claude; Fuks, Benjamin

    2014-06-01

    The program FEYNRULES is a MATHEMATICA package developed to facilitate the implementation of new physics theories into high-energy physics tools. Starting from a minimal set of information such as the model gauge symmetries, its particle content, parameters and Lagrangian, FEYNRULES provides all necessary routines to extract automatically from the Lagrangian (that can also be computed semi-automatically for supersymmetric theories) the associated Feynman rules. These can be further exported to several Monte Carlo event generators through dedicated interfaces, as well as translated into a PYTHON library, under the so-called UFO model format, agnostic of the model complexity, especially in terms of Lorentz and/or color structures appearing in the vertices or of the number of external legs. In this work, we briefly report on the most recent new features that have been added to FEYNRULES, including full support for spin-3/2 fermions, a new module allowing for the automated diagonalization of the particle spectrum and a new set of routines dedicated to decay width calculations.

  9. Rapid construction of a whole-genome transposon insertion collection for Shewanella oneidensis by Knockout Sudoku.

    PubMed

    Baym, Michael; Shaket, Lev; Anzai, Isao A; Adesina, Oluwakemi; Barstow, Buz

    2016-11-10

    Whole-genome knockout collections are invaluable for connecting gene sequence to function, yet traditionally, their construction has required an extraordinary technical effort. Here we report a method for the construction and purification of a curated whole-genome collection of single-gene transposon disruption mutants termed Knockout Sudoku. Using simple combinatorial pooling, a highly oversampled collection of mutants is condensed into a next-generation sequencing library in a single day, a 30- to 100-fold improvement over prior methods. The identities of the mutants in the collection are then solved by a probabilistic algorithm that uses internal self-consistency within the sequencing data set, followed by rapid algorithmically guided condensation to a minimal representative set of mutants, validation, and curation. Starting from a progenitor collection of 39,918 mutants, we compile a quality-controlled knockout collection of the electroactive microbe Shewanella oneidensis MR-1 containing representatives for 3,667 genes that is functionally validated by high-throughput kinetic measurements of quinone reduction.

  10. Variational Trajectory Optimization Tool Set: Technical description and user's manual

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Queen, Eric M.; Cavanaugh, Michael D.; Wetzel, Todd A.; Moerder, Daniel D.

    1993-01-01

    The algorithms that comprise the Variational Trajectory Optimization Tool Set (VTOTS) package are briefly described. The VTOTS is a software package for solving nonlinear constrained optimal control problems from a wide range of engineering and scientific disciplines. The VTOTS package was specifically designed to minimize the amount of user programming; in fact, for problems that may be expressed in terms of analytical functions, the user needs only to define the problem in terms of symbolic variables. This version of the VTOTS does not support tabular data; thus, problems must be expressed in terms of analytical functions. The VTOTS package consists of two methods for solving nonlinear optimal control problems: a time-domain finite-element algorithm and a multiple shooting algorithm. These two algorithms, under the VTOTS package, may be run independently or jointly. The finite-element algorithm generates approximate solutions, whereas the shooting algorithm provides a more accurate solution to the optimization problem. A user's manual, some examples with results, and a brief description of the individual subroutines are included.

  11. A general algorithm for peak-tracking in multi-dimensional NMR experiments.

    PubMed

    Ravel, P; Kister, G; Malliavin, T E; Delsuc, M A

    2007-04-01

    We present an algorithmic method allowing automatic tracking of NMR peaks in a series of spectra. It consists of a two-phase analysis. The first phase is a local modeling of the peak displacement between two consecutive experiments using distance matrices. Then, from the coefficients of these matrices, a value graph containing the a priori set of possible paths used by these peaks is generated. On this set, constrained minimization of the target function by a heuristic approach provides a solution to the peak-tracking problem. This approach has been named GAPT, standing for General Algorithm for NMR Peak Tracking. It has been validated in numerous simulations resembling those encountered in NMR spectroscopy. We show the robustness and limits of the method in situations with many peak-picking errors and a high local density of peaks. It is then applied to the case of a temperature study of the NMR spectrum of the Lipid Transfer Protein (LTP).
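
    The local matching phase between two consecutive spectra can be illustrated as an assignment problem on a peak-to-peak distance matrix (a simplification of GAPT's graph-based global search; scipy assumed):

        # Match peaks of spectrum t to spectrum t+1 by minimizing total displacement.
        import numpy as np
        from scipy.optimize import linear_sum_assignment
        from scipy.spatial.distance import cdist

        peaks_t  = np.array([[7.80, 120.5], [8.10, 118.2], [8.40, 121.7]])   # (1H, 15N) ppm
        peaks_t1 = np.array([[8.12, 118.4], [7.85, 120.3], [8.43, 121.9]])

        cost = cdist(peaks_t, peaks_t1)             # pairwise distance matrix
        rows, cols = linear_sum_assignment(cost)    # optimal one-to-one matching
        for i, j in zip(rows, cols):
            print(f"peak {i} at t  ->  peak {j} at t+1  (shift {cost[i, j]:.3f})")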

  12. Realistic Data-Driven Traffic Flow Animation Using Texture Synthesis.

    PubMed

    Chao, Qianwen; Deng, Zhigang; Ren, Jiaping; Ye, Qianqian; Jin, Xiaogang

    2018-02-01

    We present a novel data-driven approach to populate virtual road networks with realistic traffic flows. Specifically, given a limited set of vehicle trajectories as the input samples, our approach first synthesizes a large set of vehicle trajectories. By taking the spatio-temporal information of traffic flows as a 2D texture, the generation of new traffic flows can be formulated as a texture synthesis process, which is solved by minimizing a newly developed traffic texture energy. The synthesized output captures the spatio-temporal dynamics of the input traffic flows, and the vehicle interactions in it strictly follow traffic rules. After that, we position the synthesized vehicle trajectory data to virtual road networks using a cage-based registration scheme, where a few traffic-specific constraints are enforced to maintain each vehicle's original spatial location and synchronize its motion in concert with its neighboring vehicles. Our approach is intuitive to control and scalable to the complexity of virtual road networks. We validated our approach through many experiments and paired comparison user studies.

  13. Method and apparatus for generating motor current spectra to enhance motor system fault detection

    DOEpatents

    Linehan, D.J.; Bunch, S.L.; Lyster, C.T.

    1995-10-24

    A method and circuitry are disclosed for sampling periodic amplitude modulations in a nonstationary periodic carrier wave to determine frequencies in the amplitude modulations. The method and circuit are described in terms of an improved motor current signature analysis. The method insures that the sampled data set contains an exact whole number of carrier wave cycles by defining the rate at which samples of motor current data are collected. The circuitry insures that a sampled data set containing stationary carrier waves is recreated from the analog motor current signal containing nonstationary carrier waves by conditioning the actual sampling rate to adjust with the frequency variations in the carrier wave. After the sampled data is transformed to the frequency domain via the Discrete Fourier Transform, the frequency distribution in the discrete spectra of those components due to the carrier wave and its harmonics will be minimized so that signals of interest are more easily analyzed. 29 figs.
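
    The central requirement, that the record span an exact whole number of carrier cycles, is easy to demonstrate with a synthetic signal (a numpy sketch, not the patented circuitry): the modulation sidebands stay sharp and leakage around the carrier stays low only when the cycle count is integral.

        # Spectral leakage around the 60 Hz carrier when the record length is, and is
        # not, an exact whole number of carrier cycles.
        import numpy as np

        f_carrier, f_mod = 60.0, 7.5            # carrier and amplitude-modulation frequencies
        fs = 1920.0                              # sampling rate: 32 samples per carrier cycle

        def spectrum(n_samples):
            t = np.arange(n_samples) / fs
            i = (1 + 0.05 * np.cos(2 * np.pi * f_mod * t)) * np.cos(2 * np.pi * f_carrier * t)
            return np.abs(np.fft.rfft(i)) / n_samples, np.fft.rfftfreq(n_samples, 1 / fs)

        # 120 carrier cycles exactly vs. a record cut off mid-cycle.
        for n in (int(120 * fs / f_carrier), int(120.37 * fs / f_carrier)):
            mag, freq = spectrum(n)
            carrier_bin = np.argmin(np.abs(freq - f_carrier))
            sideband = mag[np.abs(freq - (f_carrier + f_mod)).argmin()]
            leakage = mag[carrier_bin + 3]       # energy a few bins away from the carrier
            print(f"N = {n:5d}  sideband = {sideband:.4f}  leakage near carrier = {leakage:.6f}")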

  14. Analysis of positron lifetime spectra in polymers

    NASA Technical Reports Server (NTRS)

    Singh, Jag J.; Mall, Gerald H.; Sprinkle, Danny R.

    1988-01-01

    A new procedure for analyzing multicomponent positron lifetime spectra in polymers was developed. It requires initial estimates of the lifetimes and the intensities of various components, which are readily obtainable by a standard spectrum stripping process. These initial estimates, after convolution with the timing system resolution function, are then used as the inputs for a nonlinear least squares analysis to compute the estimates that conform to a global error minimization criterion. The convolution integral uses the full experimental resolution function, in contrast to the previous studies where analytical approximations of it were utilized. These concepts were incorporated into a generalized Computer Program for Analyzing Positron Lifetime Spectra (PAPLS) in polymers. Its validity was tested using several artificially generated data sets. These data sets were also analyzed using the widely used POSITRONFIT program. In almost all cases, the PAPLS program gives closer fit to the input values. The new procedure was applied to the analysis of several lifetime spectra measured in metal ion containing Epon-828 samples. The results are described.
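
    The core fitting step, a multi-exponential decay convolved with the measured resolution function and refined by nonlinear least squares, can be sketched with synthetic two-component data (a Gaussian stands in for the experimental resolution function; this is not the PAPLS code):

        # Fit a two-component lifetime spectrum: sum of exponentials convolved with a
        # resolution function, refined by nonlinear least squares.
        import numpy as np
        from scipy.optimize import curve_fit

        dt = 0.025                                   # channel width, ns
        t = np.arange(0, 20, dt)
        resolution = np.exp(-0.5 * ((t - 1.0) / 0.12) ** 2)      # timing resolution function
        resolution /= resolution.sum()

        def model(t, i1, tau1, i2, tau2):
            decay = i1 * np.exp(-t / tau1) + i2 * np.exp(-t / tau2)
            return np.convolve(decay, resolution)[: len(t)]       # convolve with resolution fn

        rng = np.random.default_rng(6)
        true = (8000.0, 0.4, 2000.0, 1.8)                          # intensities and lifetimes (ns)
        counts = rng.poisson(model(t, *true))                      # synthetic measured spectrum

        popt, _ = curve_fit(model, t, counts, p0=(5000, 0.3, 1000, 2.5))
        print("fitted lifetimes (ns):", np.round(popt[[1, 3]], 3))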

  15. A novel hybrid genetic algorithm to solve the make-to-order sequence-dependent flow-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Mirabi, Mohammad; Fatemi Ghomi, S. M. T.; Jolai, F.

    2014-04-01

    The flow-shop scheduling problem (FSP) deals with the scheduling of a set of n jobs that visit a set of m machines in the same order. As the FSP is NP-hard, there is no efficient algorithm to reach the optimal solution of the problem. To minimize the holding, delay and setup costs of large permutation flow-shop scheduling problems with sequence-dependent setup times on each machine, this paper develops a novel hybrid genetic algorithm (HGA) with three genetic operators. The proposed HGA applies a modified approach to generate a pool of initial solutions, and also uses an improved heuristic called the iterated swap procedure to improve the initial solutions. We consider the make-to-order production approach, in which some job sequences are treated as tabu based on a maximum allowable setup cost. In addition, the results are compared to some recently developed heuristics, and computational experiments show that the proposed HGA performs very competitively with respect to accuracy and efficiency of solution.

  16. The structure of tropical forests and sphere packings

    PubMed Central

    Jahn, Markus Wilhelm; Dobner, Hans-Jürgen; Wiegand, Thorsten; Huth, Andreas

    2015-01-01

    The search for simple principles underlying the complex architecture of ecological communities such as forests still challenges ecological theorists. We use tree diameter distributions—fundamental for deriving other forest attributes—to describe the structure of tropical forests. Here we argue that tree diameter distributions of natural tropical forests can be explained by stochastic packing of tree crowns representing a forest crown packing system: a method usually used in physics or chemistry. We demonstrate that tree diameter distributions emerge accurately from a surprisingly simple set of principles that include site-specific tree allometries, random placement of trees, competition for space, and mortality. The simple static model also successfully predicted the canopy structure, revealing that most trees in our two studied forests grow up to 30–50 m in height and that the highest packing density of about 60% is reached between the 25- and 40-m height layer. Our approach is an important step toward identifying a minimal set of processes responsible for generating the spatial structure of tropical forests. PMID:26598678

  17. Optimal strategy analysis based on robust predictive control for inventory system with random demand

    NASA Astrophysics Data System (ADS)

    Saputra, Aditya; Widowati, Sutrisno

    2017-12-01

    In this paper, the optimal strategy for a single-product, single-supplier inventory system with random demand is analyzed by using robust predictive control with an additive random parameter. We formulate the dynamics of this system as a linear state-space model with an additive random parameter. To determine and analyze the optimal strategy for the given inventory system, we use a robust predictive control approach, which gives the optimal strategy, i.e. the optimal product volume that should be purchased from the supplier in each time period so that the expected cost is minimal. A numerical simulation is performed with randomly generated inventory data in MATLAB, where the inventory level must be controlled as closely as possible to a chosen set point. The results show that the robust predictive control model provides the optimal purchase volumes and that the inventory level followed the given set point.

  18. An optimized proportional-derivative controller for the human upper extremity with gravity.

    PubMed

    Jagodnik, Kathleen M; Blana, Dimitra; van den Bogert, Antonie J; Kirsch, Robert F

    2015-10-15

    When Functional Electrical Stimulation (FES) is used to restore movement in subjects with spinal cord injury (SCI), muscle stimulation patterns should be selected to generate accurate and efficient movements. Ideally, the controller for such a neuroprosthesis will have the simplest architecture possible, to facilitate translation into a clinical setting. In this study, we used the simulated annealing algorithm to optimize two proportional-derivative (PD) feedback controller gain sets for a 3-dimensional arm model that includes musculoskeletal dynamics and has 5 degrees of freedom and 22 muscles, performing goal-oriented reaching movements. Controller gains were optimized by minimizing a weighted sum of position errors, orientation errors, and muscle activations. After optimization, the performance of these gain sets, along with that of three benchmark gain sets not optimized for our system, was evaluated in terms of accuracy and efficiency of reaching movements on a large set of dynamic reaching movements for which the controllers had not been optimized, to test their ability to generalize. Robustness in the presence of weakened muscles was also tested. The two optimized gain sets were found to have very similar performance to each other on all metrics, and to exhibit significantly better accuracy, compared with the three standard gain sets. All gain sets investigated used physiologically acceptable amounts of muscular activation. It was concluded that optimization can yield significant improvements in controller performance while still maintaining muscular efficiency, and that optimization should be considered as a strategy for future neuroprosthesis controller design. Published by Elsevier Ltd.
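
    The optimization loop can be sketched in miniature. The snippet below hand-rolls simulated annealing over a PD gain pair for a toy one-degree-of-freedom plant with a constant gravity-like load; the plant, cost weights, and annealing schedule are hypothetical stand-ins, not the 5-degree-of-freedom musculoskeletal arm model of the study.

    import math, random

    def tracking_cost(kp, kd, target=1.0, dt=0.01, horizon=3.0):
        """Cost of a PD controller on a toy 1-DOF plant  m*a = u - b*v - g
        (a crude stand-in for an arm segment under gravity; values hypothetical).
        Cost = integrated squared tracking error plus a small effort penalty."""
        m, b, g = 1.0, 0.5, 2.0
        x, v, cost = 0.0, 0.0, 0.0
        for _ in range(int(horizon / dt)):
            u = kp * (target - x) - kd * v
            a = (u - b * v - g) / m
            v += a * dt
            x += v * dt
            cost += ((target - x) ** 2 + 1e-3 * u ** 2) * dt
        return cost

    def anneal_gains(iters=2000, temp0=1.0, seed=0):
        """Hand-rolled simulated annealing over the two PD gains."""
        random.seed(seed)
        gains = [10.0, 1.0]                       # initial [kp, kd]
        cur = best = tracking_cost(*gains)
        best_gains = list(gains)
        for i in range(iters):
            temp = temp0 * (1.0 - i / iters) + 1e-4
            cand = [max(0.0, g + random.gauss(0.0, 1.0)) for g in gains]
            c = tracking_cost(*cand)
            if c < cur or random.random() < math.exp((cur - c) / temp):
                gains, cur = cand, c
                if c < best:
                    best, best_gains = c, list(cand)
        return best_gains, best

    (kp, kd), cost = anneal_gains()
    print(f"kp={kp:.2f}  kd={kd:.2f}  cost={cost:.4f}")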

  19. Open set recognition of aircraft in aerial imagery using synthetic template models

    NASA Astrophysics Data System (ADS)

    Bapst, Aleksander B.; Tran, Jonathan; Koch, Mark W.; Moya, Mary M.; Swahn, Robert

    2017-05-01

    Fast, accurate and robust automatic target recognition (ATR) in optical aerial imagery can provide game-changing advantages to military commanders and personnel. ATR algorithms must reject non-targets with a high degree of confidence in a world with an infinite number of possible input images. Furthermore, they must learn to recognize new targets without requiring massive data collections. Whereas most machine learning algorithms classify data in a closed set manner by mapping inputs to a fixed set of training classes, open set recognizers incorporate constraints that allow for inputs to be labelled as unknown. We have adapted two template-based open set recognizers to use computer generated synthetic images of military aircraft as training data, to provide a baseline for military-grade ATR: (1) a frequentist approach based on probabilistic fusion of extracted image features, and (2) an open set extension to the one-class support vector machine (SVM). These algorithms both use histograms of oriented gradients (HOG) as features as well as artificial augmentation of both real and synthetic image chips to take advantage of minimal training data. Our results show that open set recognizers trained with synthetic data and tested with real data can successfully discriminate real target inputs from non-targets. However, there is still a requirement for some knowledge of the real target in order to calibrate the relationship between synthetic template and target score distributions. We conclude by proposing algorithm modifications that may improve the ability of synthetic data to represent real data.
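
    As a rough sketch of the one-class-SVM branch, the snippet below pairs scikit-image HOG descriptors with scikit-learn's OneClassSVM. The random arrays stand in for synthetic training chips of a single target class, and the zero decision threshold is a placeholder for the calibration against real-target score distributions that the abstract says is still required; none of this reproduces the authors' pipeline.

    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(42)

    def hog_features(chips):
        """HOG descriptor for each grayscale image chip in a stack."""
        return np.array([
            hog(c, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
            for c in chips
        ])

    # Random 64x64 arrays stand in for synthetic image chips of one target class.
    train_chips = rng.random((50, 64, 64))
    test_chips = rng.random((10, 64, 64))

    X_train = hog_features(train_chips)
    X_test = hog_features(test_chips)

    # One-class SVM trained only on the target class; nu bounds the outlier fraction.
    clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X_train)

    scores = clf.decision_function(X_test)
    threshold = 0.0   # placeholder; calibrate against real-target score distributions
    labels = np.where(scores >= threshold, "target", "unknown")
    print(list(zip(np.round(scores, 3), labels)))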

  20. The EULAR Scleroderma Trials and Research Group (EUSTAR): an international framework for accelerating scleroderma research.

    PubMed

    Tyndall, Alan; Ladner, Ulf M; Matucci-Cerinic, Marco

    2008-11-01

    Systemic sclerosis has a complex pathogenesis and a multifaceted clinical spectrum without a specific treatment. Under the auspices of the European League Against Rheumatism, the European League Against Rheumatism Scleroderma Trials And Research group (EUSTAR) has been founded in Europe to foster the study of systemic sclerosis with the aim of achieving equality of assessment and care of systemic sclerosis patients throughout the world according to evidence-based principles. EUSTAR created the minimal essential data set, a simple two-page form with basic demographics and mostly yes/no answers to clinical and laboratory parameters, to track patients throughout Europe. Currently, over 7000 patients are registered from 150 centres in four continents, and several articles have been published with the data generated by the minimal essential data set. A commitment of EUSTAR is also to teaching and educating, and for this reason there are two teaching courses and a third is planned for early in 2009. These courses have built international networks among young investigators improving the quality of multicentre clinical trials. EUSTAR has organized several rounds of 'teach the teachers' to further standardize the skin scoring. EUSTAR activities have extended beyond European borders, and EUSTAR now includes experts from several nations. The growth of data and biomaterial might ensure many further fruitful multicentre studies, but the financial sustainability of EUSTAR remains an issue that may jeopardize the existence of this group as well as that of other organizations in the world.

  1. Development of a global gridded Argo data set with Barnes successive corrections

    NASA Astrophysics Data System (ADS)

    Li, Hong; Xu, Fanghua; Zhou, Wei; Wang, Dongxiao; Wright, Jonathon S.; Liu, Zenghong; Lin, Yanluan

    2017-02-01

    A new 11 year (2004-2014) monthly 1° gridded Argo temperature and salinity data set with 49 vertical levels from the surface to 1950 m depth (named BOA-Argo) is generated for use in ocean research and modeling studies. The data set is produced with refined Barnes successive corrections, adopting flexible response functions based on a series of error analyses to minimize errors induced by the nonuniform spatial distribution of Argo observations. These response functions allow BOA-Argo to capture a greater portion of mesoscale and large-scale signals while suppressing small-scale and high-frequency noise relative to the most recent version of the World Ocean Atlas (WOA). The BOA-Argo data set is evaluated against other gridded data sets, such as WOA13, Roemmich-Argo, Jamestec-Argo, EN4-Argo, and IPRC-Argo, in terms of climatology, independent observations, mixed-layer depth, and so on. Generally, BOA-Argo compares well with the other Argo gridded data sets. The RMSEs and correlation coefficients of the compared variables from BOA-Argo agree most closely with those from Roemmich-Argo. In particular, more mesoscale features are retained in BOA-Argo than in the others when compared to satellite sea surface heights. These results indicate that the BOA-Argo data set is a useful and promising addition to the current Argo data sets. The proposed refined Barnes method is computationally simple and efficient, so that the BOA-Argo data set can be easily updated to keep pace with the tremendous daily increase in the volume of Argo temperature and salinity data.
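
    A bare-bones illustration of Barnes successive corrections is given below: one broad Gaussian-weighted pass onto the grid, followed by a correction pass on the observation residuals with a reduced length scale. The scattered points, length scales, and convergence factor are invented for the example; the paper's refined scheme additionally adapts its response functions to the actual Argo sampling.

    import numpy as np

    rng = np.random.default_rng(0)

    # Scattered "observations" (e.g., one depth level sampled at profile locations).
    obs_x = rng.uniform(0, 10, 200)
    obs_y = rng.uniform(0, 10, 200)
    obs_v = np.sin(obs_x) + 0.5 * np.cos(obs_y) + 0.1 * rng.standard_normal(200)

    # Analysis grid.
    gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))

    def barnes_pass(xg, yg, xo, yo, vo, length_scale):
        """One Barnes pass: Gaussian-weighted average of point values at (xg, yg)."""
        d2 = (xg[..., None] - xo) ** 2 + (yg[..., None] - yo) ** 2
        w = np.exp(-d2 / (2.0 * length_scale ** 2))
        return (w * vo).sum(axis=-1) / w.sum(axis=-1)

    L, gamma = 2.0, 0.4          # first-pass length scale and convergence factor
    first = barnes_pass(gx, gy, obs_x, obs_y, obs_v, L)

    # Successive correction: analyze the residuals at the observation points
    # with a reduced length scale and add the result to the first-pass field.
    residual = obs_v - barnes_pass(obs_x, obs_y, obs_x, obs_y, obs_v, L)
    analysis = first + barnes_pass(gx, gy, obs_x, obs_y, residual, np.sqrt(gamma) * L)

    print("grid:", analysis.shape, " rms residual:", float(np.sqrt((residual ** 2).mean())))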

  2. Distributed query plan generation using multiobjective genetic algorithm.

    PubMed

    Panicker, Shina; Kumar, T V Vijay

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability.

  3. Distributed Query Plan Generation Using Multiobjective Genetic Algorithm

    PubMed Central

    Panicker, Shina; Vijay Kumar, T. V.

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability. PMID:24963513
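
    The biobjective structure can be illustrated without a full NSGA-II implementation. The sketch below samples random site-assignment plans under hypothetical LPC and CC matrices, evaluates both objectives, and extracts the non-dominated (Pareto) set that NSGA-II would evolve toward; the plan encoding and cost data are illustrative only.

    import numpy as np

    rng = np.random.default_rng(7)

    n_relations, n_sites = 5, 4
    # Hypothetical cost data: per-site local processing cost of each relation and
    # a symmetric site-to-site communication cost matrix.
    lpc = rng.uniform(1, 10, size=(n_relations, n_sites))
    cc = rng.uniform(1, 5, size=(n_sites, n_sites))
    cc = (cc + cc.T) / 2
    np.fill_diagonal(cc, 0.0)

    def objectives(plan):
        """plan[i] = site assigned to relation i. Returns (total LPC, total CC)."""
        total_lpc = lpc[np.arange(n_relations), plan].sum()
        used = np.unique(plan)
        total_cc = sum(cc[a, b] for i, a in enumerate(used) for b in used[i + 1:])
        return float(total_lpc), float(total_cc)

    # Candidate query plans; NSGA-II would evolve these rather than sample them.
    plans = rng.integers(0, n_sites, size=(300, n_relations))
    objs = np.array([objectives(p) for p in plans])

    def pareto_front(points):
        """Indices of non-dominated points when both objectives are minimized."""
        front = []
        for i, p in enumerate(points):
            dominated = np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1))
            if not dominated:
                front.append(i)
        return front

    for i in pareto_front(objs):
        print("plan:", plans[i], " (LPC, CC):", np.round(objs[i], 2))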

  4. Selecting Processes to Minimize Hexavalent Chromium from Stainless Steel Welding

    PubMed Central

    KEANE, M.; SIERT, A.; STONE, S.; CHEN, B.; SLAVEN, J.; CUMPSTON, A.; ANTONINI, J.

    2015-01-01

    Eight welding processes/shielding gas combinations were assessed for generation of hexavalent chromium (Cr6+) in stainless steel welding fumes. The processes examined were gas metal arc welding (GMAW) (axial spray, short circuit, and pulsed spray modes), flux cored arc welding (FCAW), and shielded metal arc welding (SMAW). The Cr6+ fractions were measured in the fumes; fume generation rates, Cr6+ generation rates, and Cr6+ generation rates per unit mass of welding wire were determined. A limited controlled comparison study was done in a welding shop including SMAW, FCAW, and three GMAW methods. The processes studied were compared for costs, including relative labor costs. Results indicate the Cr6+ in the fume varied widely, from a low of 2800 to a high of 34,000 ppm. Generation rates of Cr6+ ranged from 69 to 7800 μg/min, and Cr6+ generation rates per unit of wire ranged from 1 to 270 μg/g. The results of field study were similar to the findings in the laboratory. The Cr6+ (ppm) in the fume did not necessarily correlate with the Cr6+ generation rate. Physical properties were similar for the processes, with mass median aerodynamic diameters ranging from 250 to 336 nm, while the FCAW and SMAW fumes were larger (360 and 670 nm, respectively). Conclusion: The pulsed axial spray method was the best choice of the processes studied based on minimal fume generation, minimal Cr6+ generation, and cost per weld. This method is usable in any position, has a high metal deposition rate, and is relatively simple to learn and use. PMID:26690276

  5. Optimal design and dispatch of a system of diesel generators, photovoltaics and batteries for remote locations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scioletti, Michael S.; Newman, Alexandra M.; Goodman, Johanna K.

    Renewable energy technologies, specifically, solar photovoltaic cells, combined with battery storage and diesel generators, form a hybrid system capable of independently powering remote locations, i.e., those isolated from larger grids. If sized correctly, hybrid systems reduce fuel consumption compared to diesel generator-only alternatives. We present an optimization model for establishing a hybrid power design and dispatch strategy for remote locations, such as a military forward operating base, that models the acquisition of different power technologies as integer variables and their operation using nonlinear expressions. Our cost-minimizing, nonconvex, mixed-integer, nonlinear program contains a detailed battery model. Due to its complexities, we present linearizations, which include exact and convex under-estimation techniques, and a heuristic, which determines an initial feasible solution to serve as a “warm start” for the solver. We determine, in a few hours at most, solutions within 5% of optimality for a candidate set of technologies; these solutions closely resemble those from the nonlinear model. Lastly, our instances contain real data spanning a yearly horizon at hour fidelity and demonstrate that a hybrid system could reduce fuel consumption by as much as 50% compared to a generator-only solution.

  6. Optimal design and dispatch of a system of diesel generators, photovoltaics and batteries for remote locations

    DOE PAGES

    Scioletti, Michael S.; Newman, Alexandra M.; Goodman, Johanna K.; ...

    2017-05-08

    Renewable energy technologies, specifically, solar photovoltaic cells, combined with battery storage and diesel generators, form a hybrid system capable of independently powering remote locations, i.e., those isolated from larger grids. If sized correctly, hybrid systems reduce fuel consumption compared to diesel generator-only alternatives. We present an optimization model for establishing a hybrid power design and dispatch strategy for remote locations, such as a military forward operating base, that models the acquisition of different power technologies as integer variables and their operation using nonlinear expressions. Our cost-minimizing, nonconvex, mixed-integer, nonlinear program contains a detailed battery model. Due to its complexities, we present linearizations, which include exact and convex under-estimation techniques, and a heuristic, which determines an initial feasible solution to serve as a “warm start” for the solver. We determine, in a few hours at most, solutions within 5% of optimality for a candidate set of technologies; these solutions closely resemble those from the nonlinear model. Lastly, our instances contain real data spanning a yearly horizon at hour fidelity and demonstrate that a hybrid system could reduce fuel consumption by as much as 50% compared to a generator-only solution.
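
    To make the dispatch layer concrete, the PuLP sketch below solves a heavily simplified, purely linear, single-day version: one generator, a fixed hypothetical PV profile, and a battery with a crude round-trip efficiency, minimizing fuel cost. It omits the acquisition decisions, the detailed nonconvex battery model, and the warm-start heuristic that the paper actually treats, so it only hints at the flavor of the design-and-dispatch optimization; all numbers are invented.

    import pulp

    T = range(24)                                          # hourly periods
    load = [30 + 10 * (8 <= t <= 20) for t in T]           # kW demand, hypothetical
    pv = [max(0, 25 - 5 * abs(t - 12)) for t in T]         # kW PV output, hypothetical
    gen_cap, batt_cap, batt_rate, eff = 60.0, 80.0, 20.0, 0.9
    fuel_cost = 0.3                                        # $ per kWh of generator output

    prob = pulp.LpProblem("hybrid_dispatch", pulp.LpMinimize)
    g = pulp.LpVariable.dicts("gen", T, 0, gen_cap)        # generator output
    ch = pulp.LpVariable.dicts("charge", T, 0, batt_rate)  # battery charging
    dis = pulp.LpVariable.dicts("discharge", T, 0, batt_rate)
    soc = pulp.LpVariable.dicts("soc", T, 0, batt_cap)     # battery state of charge

    prob += pulp.lpSum(fuel_cost * g[t] for t in T)        # minimize fuel cost

    for t in T:
        # Power balance: generator + PV + discharge covers load + charging.
        prob += g[t] + pv[t] + dis[t] == load[t] + ch[t]
        # Battery energy balance (cyclic over the day) with a crude charge efficiency.
        prev = soc[23] if t == 0 else soc[t - 1]
        prob += soc[t] == prev + eff * ch[t] - dis[t]

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print("status:", pulp.LpStatus[prob.status], " fuel cost:", pulp.value(prob.objective))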

  7. Energy from Biomass for Sustainable Cities

    NASA Astrophysics Data System (ADS)

    Panepinto, D.; Zanetti, M. C.; Gitelman, L.; Kozhevnikov, M.; Magaril, E.; Magaril, R.

    2017-06-01

    One of the major challenges of sustainable urban development is ensuring a sustainable energy supply while minimizing negative environmental impacts. The European Union Directive 2009/28/EC has set a goal of obtaining 20 percent of all energy from renewable sources by 2020. In this context, it is possible to consider the use of residues from forest maintenance, residues from livestock, the use of energy crops, the recovery of food waste, and residuals from agro-industrial activities. At the same time, it is necessary to consider the consequent environmental impact. In this paper, an approach to evaluate the environmental compatibility is presented. National priorities for commissioning biofuel power plants and other distributed generation facilities are also discussed.

  8. Improving the performance of minimizers and winnowing schemes.

    PubMed

    Marçais, Guillaume; Pellow, David; Bork, Daniel; Orenstein, Yaron; Shamir, Ron; Kingsford, Carl

    2017-07-15

    The minimizers scheme is a method for selecting k-mers from sequences. It is used in many bioinformatics software tools to bin comparable sequences or to sample a sequence in a deterministic fashion at approximately regular intervals, in order to reduce memory consumption and processing time. Although very useful, the minimizers selection procedure has undesirable behaviors (e.g. too many k-mers are selected when processing certain sequences). Some of these problems were already known to the authors of the minimizers technique, and the natural lexicographic ordering of k-mers used by minimizers was recognized as their origin. Many software tools using minimizers employ ad hoc variations of the lexicographic order to alleviate those issues. We provide an in-depth analysis of the effect of k-mer ordering on the performance of the minimizers technique. By using small universal hitting sets (a recently defined concept), we show how to significantly improve the performance of minimizers and avoid some of its worse behaviors. Based on these results, we encourage bioinformatics software developers to use an ordering based on a universal hitting set or, if not possible, a randomized ordering, rather than the lexicographic order. This analysis also settles negatively a conjecture (by Schleimer et al.) on the expected density of minimizers in a random sequence. The software used for this analysis is available on GitHub: https://github.com/gmarcais/minimizers.git . gmarcais@cs.cmu.edu or carlk@cs.cmu.edu. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
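
    The density effect of the k-mer ordering can be seen in a few lines. The sketch below implements plain (w,k)-minimizers and compares the fraction of positions selected under the lexicographic order with a pseudo-random order obtained by hashing each k-mer; the random DNA string and the (k, w) values are arbitrary, and this is not the universal-hitting-set ordering studied in the paper.

    import hashlib
    import random

    def minimizer_positions(seq, k, w, key):
        """Start positions selected by the (w,k)-minimizers scheme: in every window
        of w consecutive k-mers, pick the k-mer smallest under `key`
        (ties broken by leftmost position)."""
        kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
        selected = set()
        for start in range(len(kmers) - w + 1):
            window = range(start, start + w)
            selected.add(min(window, key=lambda i: (key(kmers[i]), i)))
        return selected

    def hash_order(kmer):
        """Pseudo-random total order on k-mers via a stable hash."""
        return hashlib.sha1(kmer.encode()).hexdigest()

    random.seed(0)
    seq = "".join(random.choice("ACGT") for _ in range(10000))
    k, w = 15, 10

    lex = minimizer_positions(seq, k, w, key=lambda s: s)       # lexicographic order
    rnd = minimizer_positions(seq, k, w, key=hash_order)        # randomized order
    print("lexicographic density:", round(len(lex) / len(seq), 4))
    print("randomized density:   ", round(len(rnd) / len(seq), 4))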

  9. A Prediction Error-driven Retrieval Procedure for Destabilizing and Rewriting Maladaptive Reward Memories in Hazardous Drinkers

    PubMed Central

    Das, Ravi K.; Gale, Grace; Hennessy, Vanessa; Kamboj, Sunjeev K.

    2018-01-01

    Maladaptive reward memories (MRMs) can become unstable following retrieval under certain conditions, allowing their modification by subsequent new learning. However, robust (well-rehearsed) and chronologically old MRMs, such as those underlying substance use disorders, do not destabilize easily when retrieved. A key determinant of memory destabilization during retrieval is prediction error (PE). We describe a retrieval procedure for alcohol MRMs in hazardous drinkers that specifically aims to maximize the generation of PE and therefore the likelihood of MRM destabilization. The procedure requires explicitly generating the expectancy of alcohol consumption and then violating this expectancy (withholding alcohol) following the presentation of a brief set of prototypical alcohol cue images (retrieval + PE). Control procedures involve presenting the same cue images, but allow alcohol to be consumed, generating minimal PE (retrieval-no PE) or generate PE without retrieval of alcohol MRMs, by presenting orange juice cues (no retrieval + PE). Subsequently, we describe a multisensory disgust-based counterconditioning procedure to probe MRM destabilization by re-writing alcohol cue-reward associations prior to reconsolidation. This procedure pairs alcohol cues with images invoking pathogen disgust and an extremely bitter-tasting solution (denatonium benzoate), generating gustatory disgust. Following retrieval + PE, but not no retrieval + PE or retrieval-no PE, counterconditioning produces evidence of MRM rewriting as indexed by lasting reductions in alcohol cue valuation, attentional capture, and alcohol craving. PMID:29364255

  10. Protocol for the development of a Core Outcome Set (COS) for hemorrhoidal disease: an international Delphi study.

    PubMed

    van Tol, R R; Melenhorst, J; Dirksen, C D; Stassen, L P S; Breukink, S O

    2017-07-01

    Over the last decade, many studies were performed regarding treatment options for hemorrhoidal disease. Randomised controlled trials (RCTs) should have well-defined primary and secondary outcomes. However, the reported outcome measures are numerous and diverse. The heterogeneity of outcome definitions in clinical trials limits transparency and paves the way for bias. The development of a core outcome set (COS) helps to minimize this problem. A COS is an agreed minimum set of outcomes that should be measured and reported in all clinical trials of a specific disease. The aim of this project is to generate a COS for the outcome of treatment of hemorrhoidal disease. A Delphi study will be performed by an international steering group of healthcare professionals and patients with the intention of creating a standard outcome set for future clinical trials on the treatment of hemorrhoidal disease. First, a literature review will be conducted to establish which outcomes are used in clinical trials for hemorrhoidal disease. Secondly, both healthcare professionals and patients will participate in several consecutive rounds of online questionnaires and a face-to-face meeting to refine the content of the COS. Development of a COS for hemorrhoidal disease defines a minimum outcome-reporting standard and will improve the quality of research in the future.

  11. Minimal Risk in Pediatric Research: A Philosophical Review and Reconsideration

    PubMed Central

    Rossi, John; Nelson, Robert M.

    2017-01-01

    Despite more than thirty years of debate, disagreement persists among research ethicists about the most appropriate way to interpret the U.S. regulations on pediatric research, specifically the categories of “minimal risk” and a “minor increase over minimal risk.” Focusing primarily on the definition of “minimal risk,” we argue in this article that the continued debate about the pediatric risk categories is at least partly because their conceptual status is seldom considered directly. Once this is done, it becomes clear that the most popular strategy for interpreting “minimal risk”—defining it as a specific set of risks—is indefensible and, from a pragmatic perspective, unlikely to resolve disagreement. Primarily this is because judgments about minimal risk are both normative and heavily intuitive in nature and thus cannot easily be captured by reductions to a given set of risks. We suggest instead that a more defensible approach to evaluating risk should incorporate room for reflection and deliberation. This dispositional, deliberative framework can nonetheless accommodate a number of intellectual resources for reducing reliance on sheer intuition and improving the quality of risk evaluations. PMID:28777661

  12. First-Generation Students: Identifying Barriers to Academic Persistence

    ERIC Educational Resources Information Center

    Godwin, Angela Felicia

    2012-01-01

    First-generation students are more likely than non-first-generation students to depart from a postsecondary institution before a degree is attained. Factors that could impact academic persistence among first-generation students include low self-efficacy, lack of financial resources and parental support, poor college planning, and minimal school…

  13. Optimal control strategy for electric power production at an isolated site

    NASA Astrophysics Data System (ADS)

    Barris, Nicolas

    Hydro-Quebec manages more than 20 isolated power grids all over the province. The grids are located in small villages where the electricity demand is rather small. Those villages being far away from each other and from the main electricity production facilities, energy is produced locally using diesel generators. Electricity production costs at the isolated power grids are very high due to elevated diesel prices and transportation costs. However, the price of electricity is the same for the entire province, with no regard to the production costs of the electricity consumed. These two factors combined result in yearly operating losses for Hydro-Quebec. For any given village, several diesel generators are required to satisfy the demand. When the load increases, it becomes necessary to increase the capacity either by adding a generator to the production or by switching to a more powerful generator. The same thing happens when the load decreases. Every decision regarding changes in the production is included in the control strategy, which is based on predetermined parameters. These parameters were specified according to empirical studies and the knowledge base of the engineers managing the isolated power grids, but without any optimization approach. The objective of the presented work is to minimize the diesel consumption by optimizing the parameters included in the control strategy. Its impact would be to limit the operating losses generated by the isolated power grids and the CO2 equivalent emissions without adding new equipment or completely changing the nature of the strategy. To satisfy this objective, the isolated power grid simulator OPERA is used along with the optimization library NOMAD and the data of three villages in northern Quebec. The preliminary optimization instance for the first village showed that some modifications to the existing control strategy had to be made to better achieve the minimization objective. The main optimization processes consist of three different optimization approaches: the optimization of one set of parameters for all the villages, the optimization of one set of parameters per village, and the optimization of one set of parameters per diesel generator configuration per village. In the first scenario, the optimization of one set of parameters for all the villages leads to compromises for all three villages without allowing a full potential reduction for any village. Therefore, it is proven that applying one set of parameters to all the villages is not suitable for finding an optimal solution. In the second scenario, the optimization of one set of parameters per village allows an improvement over the previous results. At this point, it is shown that it is crucial to remove from the production the less efficient configurations when they are next to more efficient configurations. In the third scenario, the optimization of one set of parameters per configuration per village requires a very large number of function evaluations but does not result in any satisfying solution. In order to improve the performance of the optimization, it has been decided that the problem structure would be used. Two different approaches are considered: optimizing one set of parameters at a time and optimizing different rules included in the control strategy one at a time. In both cases, results are similar but calculation costs differ, the second method being much more cost efficient.
The optimal values of the ultimate rules parameters can be directly linked to the efficient transition points that favor an efficient operation of the isolated power grids. Indeed, these transition points are defined in such a way that the high efficiency zone of every configuration is used. Therefore, it seems possible to directly identify on the graphs these optimal transition points and define the parameters in the control strategy without even having to run any optimization process. The diesel consumption reduction for all three villages is about 1.9%. Considering elevated diesel costs and the existence of about 20 other isolated power grids, the use of the developed methods together with a calibration of OPERA would allow a substantial reduction of Hydro-Quebec's annual deficit. Also, since one of the developed methods is very cost effective and produces equivalent results, it could be possible to use it during other processes; for example, when buying new equipment for the grid it could be possible to assess its full potential, under an optimized control strategy, and improve the net present value.

  14. The theory of maximally and minimally even sets, the one-dimensional antiferromagnetic Ising model, and the continued fraction compromise of musical scales

    NASA Astrophysics Data System (ADS)

    Douthett, Elwood (Jack) Moser, Jr.

    1999-10-01

    Cyclic configurations of white and black sites, together with convex (concave) functions used to weight path length, are investigated. The weights of the white set and black set are the sums of the weights of the paths connecting the white sites and black sites, respectively, and the weight between sets is the sum of the weights of the paths that connect sites opposite in color. It is shown that when the weights of all configurations of a fixed number of white and a fixed number of black sites are compared, minimum (maximum) weight of a white set, minimum (maximum) weight of a black set, and maximum (minimum) weight between sets occur simultaneously. Such configurations are called maximally even configurations. Similarly, the configurations whose weights are the opposite extremes occur simultaneously and are called minimally even configurations. Algorithms that generate these configurations are constructed and applied to the one-dimensional antiferromagnetic spin-1/2 Ising model. Next the goodness of continued fractions as applied to musical intervals (frequency ratios and their base 2 logarithms) is explored. It is shown that, for the intermediate convergents between two consecutive principal convergents of an irrational number, the first half of the intermediate convergents are poorer approximations than the preceding principal convergent while the second half are better approximations; the goodness of a middle intermediate convergent can only be determined by calculation. These convergents are used to determine which equal-tempered systems have intervals that most closely approximate the musical fifth (pn/qn ≈ log2(3/2)). The goodness of exponentiated convergents (2^(pn/qn) ≈ 3/2) is also investigated. It is shown that, with the exception of a middle convergent, the goodness of the exponential form agrees with that of its logarithmic counterpart. As in the case of the logarithmic form, the goodness of a middle intermediate convergent in the exponential form can only be determined by calculation. A Desirability Function is constructed that simultaneously measures how well multiple intervals fit in a given equal-tempered system. These measurements are made for octave (base 2) and tritave (base 3) systems. Combinatorial properties important to music modulation are considered. These considerations lead to the construction of maximally even scales as partitions of an equal-tempered system.
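
    The continued-fraction part can be reproduced directly. The sketch below computes the simple continued fraction of log2(3/2) and its principal convergents pn/qn; the denominators (1, 2, 5, 12, 41, 53, ...) are the octave divisions whose fifths best approximate the just fifth, and the last column shows the exponentiated form 2^(pn/qn) approaching 3/2. The helper names are mine, and the intermediate convergents and the Desirability Function from the dissertation are not computed.

    from math import log2

    def continued_fraction(x, n_terms=8):
        """Leading terms of the simple continued fraction expansion of x."""
        terms = []
        for _ in range(n_terms):
            a = int(x)
            terms.append(a)
            frac = x - a
            if frac == 0:
                break
            x = 1.0 / frac
        return terms

    def convergents(terms):
        """Principal convergents p_n/q_n built from the continued fraction terms."""
        p_prev, p = 1, terms[0]
        q_prev, q = 0, 1
        yield p, q
        for a in terms[1:]:
            p, p_prev = a * p + p_prev, p
            q, q_prev = a * q + q_prev, q
            yield p, q

    target = log2(3 / 2)                    # the just perfect fifth, in octaves
    terms = continued_fraction(target)
    print("continued fraction terms:", terms)
    for p, q in convergents(terms):
        print(f"p/q = {p}/{q}   value = {p / q:.6f}   2^(p/q) = {2 ** (p / q):.6f}")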

  15. Chain pooling to minimize prediction error in subset regression. [Monte Carlo studies using population models

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1974-01-01

    Monte Carlo studies using population models intended to represent response surface applications are reported. Simulated experiments were generated by adding pseudo random normally distributed errors to population values to generate observations. Model equations were fitted to the observations and the decision procedure was used to delete terms. Comparison of values predicted by the reduced models with the true population values enabled the identification of deletion strategies that are approximately optimal for minimizing prediction errors.
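
    The simulation loop described here is easy to mock up. The toy sketch below generates observations from a known population polynomial model plus pseudo-random normal errors, fits a full linear model, deletes terms with small t-statistics (a crude stand-in for the paper's chain-pooling decision procedure), and compares the prediction errors of the full and reduced models against the true population values; the model form, noise level, and deletion threshold are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(3)

    def design(x):
        """Full model terms: 1, x, x^2 plus two spurious terms (sin(5x), x^3)."""
        return np.column_stack([np.ones_like(x), x, x ** 2, np.sin(5 * x), x ** 3])

    true_beta = np.array([1.0, 2.0, -1.5, 0.0, 0.0])     # population model
    x_train = rng.uniform(-1, 1, 40)
    y_obs = design(x_train) @ true_beta + rng.normal(0, 0.5, size=x_train.size)

    # Fit the full model, then delete terms whose |t| falls below a threshold.
    X = design(x_train)
    beta_full, *_ = np.linalg.lstsq(X, y_obs, rcond=None)
    resid = y_obs - X @ beta_full
    sigma2 = resid @ resid / (X.shape[0] - X.shape[1])
    t_stats = beta_full / np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    keep = np.abs(t_stats) > 2.0

    beta_reduced = np.zeros_like(beta_full)
    beta_reduced[keep] = np.linalg.lstsq(X[:, keep], y_obs, rcond=None)[0]

    # Compare prediction error against the known population values on new points.
    x_new = rng.uniform(-1, 1, 200)
    truth = design(x_new) @ true_beta
    rmse_full = np.sqrt(np.mean((design(x_new) @ beta_full - truth) ** 2))
    rmse_reduced = np.sqrt(np.mean((design(x_new) @ beta_reduced - truth) ** 2))
    print("kept terms:", keep)
    print("prediction RMSE  full: %.4f  reduced: %.4f" % (rmse_full, rmse_reduced))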

  16. Modeling and Optimization of Coordinative Operation of Hydro-wind-photovoltaic Considering Power Generation and Output Fluctuation

    NASA Astrophysics Data System (ADS)

    Wang, Xianxun; Mei, Yadong

    2017-04-01

    Coordinated operation of hydro, wind, and photovoltaic power is a way to mitigate the conflict between power generation and output fluctuation of new energy sources and to overcome a bottleneck in new energy development. Research on the coordination mechanism has been hampered by deficiencies in characterizing output fluctuation, representing grid structure, and handling curtailed power. In this paper, a multi-objective, multi-level model of coordinated hydro-wind-photovoltaic operation is built with the objectives of maximizing power generation and minimizing output fluctuation, subject to constraints on grid topology and balanced allocation of curtailed power. In the case study, separate and coordinated operation are compared in terms of power generation, curtailed power, and output fluctuation in order to study the coordination mechanism. Compared with running separately, coordinated hydro-wind-photovoltaic operation yields compensation benefits. Peak-alternation operation significantly reduces curtailed power and maximizes resource utilization through the compensating regulation provided by hydropower. The Pareto frontier of power generation and output fluctuation is obtained through multi-objective optimization; it clarifies the mutual influence between these two objectives. Under coordinated operation, output fluctuation can be markedly reduced at the cost of a slight decline in power generation, and curtailed power also drops sharply compared with separate operation. Applying the multi-objective optimization method to the coordinated operation yields the Pareto-optimal solution set of power generation and output fluctuation.

  17. Selecting Processes to Minimize Hexavalent Chromium from Stainless Steel Welding: Eight welding processes/shielding gas combinations were assessed for generation of hexavalent chromium in stainless steel welding fumes.

    PubMed

    Keane, M; Siert, A; Stone, S; Chen, B; Slaven, J; Cumpston, A; Antonini, J

    2012-09-01

    Eight welding processes/shielding gas combinations were assessed for generation of hexavalent chromium (Cr6+) in stainless steel welding fumes. The processes examined were gas metal arc welding (GMAW) (axial spray, short circuit, and pulsed spray modes), flux cored arc welding (FCAW), and shielded metal arc welding (SMAW). The Cr6+ fractions were measured in the fumes; fume generation rates, Cr6+ generation rates, and Cr6+ generation rates per unit mass of welding wire were determined. A limited controlled comparison study was done in a welding shop including SMAW, FCAW, and three GMAW methods. The processes studied were compared for costs, including relative labor costs. Results indicate the Cr6+ in the fume varied widely, from a low of 2800 to a high of 34,000 ppm. Generation rates of Cr6+ ranged from 69 to 7800 μg/min, and Cr6+ generation rates per unit of wire ranged from 1 to 270 μg/g. The results of field study were similar to the findings in the laboratory. The Cr6+ (ppm) in the fume did not necessarily correlate with the Cr6+ generation rate. Physical properties were similar for the processes, with mass median aerodynamic diameters ranging from 250 to 336 nm, while the FCAW and SMAW fumes were larger (360 and 670 nm, respectively). The pulsed axial spray method was the best choice of the processes studied based on minimal fume generation, minimal Cr6+ generation, and cost per weld. This method is usable in any position, has a high metal deposition rate, and is relatively simple to learn and use.

  18. Simple and efficient LCAO basis sets for the diffuse states in carbon nanostructures.

    PubMed

    Papior, Nick R; Calogero, Gaetano; Brandbyge, Mads

    2018-06-27

    We present a simple way to describe the lowest unoccupied diffuse states in carbon nanostructures in density functional theory calculations using a minimal LCAO (linear combination of atomic orbitals) basis set. By comparing with plane wave basis calculations, we show how these states can be captured by adding long-range orbitals to the standard LCAO basis sets for the extreme cases of planar sp2 (graphene) and curved carbon (C60). In particular, using Bessel functions with a long range as additional basis functions retains a minimal basis size. This provides a smaller and simpler atom-centered basis set compared to the standard pseudo-atomic orbitals (PAOs) with multiple polarization orbitals or to adding non-atom-centered states to the basis.

  19. Simple and efficient LCAO basis sets for the diffuse states in carbon nanostructures

    NASA Astrophysics Data System (ADS)

    Papior, Nick R.; Calogero, Gaetano; Brandbyge, Mads

    2018-06-01

    We present a simple way to describe the lowest unoccupied diffuse states in carbon nanostructures in density functional theory calculations using a minimal LCAO (linear combination of atomic orbitals) basis set. By comparing with plane wave basis calculations, we show how these states can be captured by adding long-range orbitals to the standard LCAO basis sets for the extreme cases of planar sp2 (graphene) and curved carbon (C60). In particular, using Bessel functions with a long range as additional basis functions retains a minimal basis size. This provides a smaller and simpler atom-centered basis set compared to the standard pseudo-atomic orbitals (PAOs) with multiple polarization orbitals or to adding non-atom-centered states to the basis.

  20. User’s guide for the Delaware River Basin Streamflow Estimator Tool (DRB-SET)

    USGS Publications Warehouse

    Stuckey, Marla H.; Ulrich, James E.

    2016-06-09

    IntroductionThe Delaware River Basin Streamflow Estimator Tool (DRB-SET) is a tool for the simulation of streamflow at a daily time step for an ungaged stream location in the Delaware River Basin. DRB-SET was developed by the U.S. Geological Survey (USGS) and funded through WaterSMART as part of the National Water Census, a USGS research program on national water availability and use that develops new water accounting tools and assesses water availability at the regional and national scales. DRB-SET relates probability exceedances at a gaged location to those at an ungaged stream location. Once the ungaged stream location has been identified by the user, an appropriate streamgage is automatically selected in DRB-SET using streamflow correlation (map correlation method). Alternately, the user can manually select a different streamgage or use the closest streamgage. A report file is generated documenting the reference streamgage and ungaged stream location information, basin characteristics, any warnings, baseline (minimally altered) and altered (affected by regulation, diversion, mining, or other anthropogenic activities) daily mean streamflow, and the mean and median streamflow. The estimated daily flows for the ungaged stream location can be easily exported as a text file that can be used as input into a statistical software package to determine additional streamflow statistics, such as flow duration exceedance or streamflow frequency statistics.

  1. A promising limited angular computed tomography reconstruction via segmentation based regional enhancement and total variation minimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Wenkun; Zhang, Hanming; Li, Lei

    2016-08-15

    X-ray computed tomography (CT) is a powerful and common inspection technique used for the industrial non-destructive testing. However, large-sized and heavily absorbing objects cause the formation of artifacts because of either the lack of specimen penetration in specific directions or the acquisition of data from only a limited angular range of views. Although the sparse optimization-based methods, such as the total variation (TV) minimization method, can suppress artifacts to some extent, reconstructing the images such that they converge to accurate values remains difficult because of the deficiency in continuous angular data and inconsistency in the projections. To address this problem, we use the idea of regional enhancement of the true values and suppression of the illusory artifacts outside the region to develop an efficient iterative algorithm. This algorithm is based on the combination of regional enhancement of the true values and TV minimization for the limited angular reconstruction. In this algorithm, the segmentation approach is introduced to distinguish the regions of different image knowledge and generate the support mask of the image. A new regularization term, which contains the support knowledge to enhance the true values of the image, is incorporated into the objective function. Then, the proposed optimization model is solved by variable splitting and the alternating direction method efficiently. A compensation approach is also designed to extract useful information from the initial projections and thus reduce false segmentation result and correct the segmentation support and the segmented image. The results obtained from comparing both simulation studies and real CT data set reconstructions indicate that the proposed algorithm generates a more accurate image than do the other reconstruction methods. The experimental results show that this algorithm can produce high-quality reconstructed images for the limited angular reconstruction and suppress the illusory artifacts caused by the deficiency in valid data.

  2. A promising limited angular computed tomography reconstruction via segmentation based regional enhancement and total variation minimization

    NASA Astrophysics Data System (ADS)

    Zhang, Wenkun; Zhang, Hanming; Li, Lei; Wang, Linyuan; Cai, Ailong; Li, Zhongguo; Yan, Bin

    2016-08-01

    X-ray computed tomography (CT) is a powerful and common inspection technique used for the industrial non-destructive testing. However, large-sized and heavily absorbing objects cause the formation of artifacts because of either the lack of specimen penetration in specific directions or the acquisition of data from only a limited angular range of views. Although the sparse optimization-based methods, such as the total variation (TV) minimization method, can suppress artifacts to some extent, reconstructing the images such that they converge to accurate values remains difficult because of the deficiency in continuous angular data and inconsistency in the projections. To address this problem, we use the idea of regional enhancement of the true values and suppression of the illusory artifacts outside the region to develop an efficient iterative algorithm. This algorithm is based on the combination of regional enhancement of the true values and TV minimization for the limited angular reconstruction. In this algorithm, the segmentation approach is introduced to distinguish the regions of different image knowledge and generate the support mask of the image. A new regularization term, which contains the support knowledge to enhance the true values of the image, is incorporated into the objective function. Then, the proposed optimization model is solved by variable splitting and the alternating direction method efficiently. A compensation approach is also designed to extract useful information from the initial projections and thus reduce false segmentation result and correct the segmentation support and the segmented image. The results obtained from comparing both simulation studies and real CT data set reconstructions indicate that the proposed algorithm generates a more accurate image than do the other reconstruction methods. The experimental results show that this algorithm can produce high-quality reconstructed images for the limited angular reconstruction and suppress the illusory artifacts caused by the deficiency in valid data.
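
    Of the ingredients above, the TV-minimization step is the easiest to isolate. The sketch below applies plain gradient descent to a smoothed total-variation-plus-data-fidelity objective for denoising a synthetic piecewise-constant image; it is only meant to show what the TV term does, and it contains none of the CT projection model, segmentation-based support mask, regional enhancement term, or variable-splitting solver used in the paper. The image, weights, and step size are arbitrary.

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic piecewise-constant image with additive noise.
    truth = np.zeros((64, 64))
    truth[16:48, 16:48] = 1.0
    noisy = truth + 0.3 * rng.standard_normal(truth.shape)

    def fwd_x(u):
        d = np.zeros_like(u)
        d[:, :-1] = u[:, 1:] - u[:, :-1]
        return d

    def fwd_y(u):
        d = np.zeros_like(u)
        d[:-1, :] = u[1:, :] - u[:-1, :]
        return d

    def adj_x(p):
        v = np.empty_like(p)
        v[:, 0] = -p[:, 0]
        v[:, 1:] = p[:, :-1] - p[:, 1:]
        return v

    def adj_y(p):
        v = np.empty_like(p)
        v[0, :] = -p[0, :]
        v[1:, :] = p[:-1, :] - p[1:, :]
        return v

    def tv_gradient(u, eps=1e-2):
        """Gradient of the smoothed total variation  sum sqrt(|grad u|^2 + eps)."""
        dx, dy = fwd_x(u), fwd_y(u)
        mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
        return adj_x(dx / mag) + adj_y(dy / mag)

    u = noisy.copy()
    lam, step = 0.15, 0.1
    for _ in range(400):
        # Objective: 0.5*||u - noisy||^2 + lam*TV_eps(u), minimized by gradient descent.
        u -= step * ((u - noisy) + lam * tv_gradient(u))

    def rmse(a):
        return float(np.sqrt(((a - truth) ** 2).mean()))

    print("noisy RMSE:", round(rmse(noisy), 4), " TV-minimized RMSE:", round(rmse(u), 4))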

  3. Quantitative Metabolome Analysis Based on Chromatographic Peak Reconstruction in Chemical Isotope Labeling Liquid Chromatography Mass Spectrometry.

    PubMed

    Huan, Tao; Li, Liang

    2015-07-21

    Generating precise and accurate quantitative information on metabolomic changes in comparative samples is important for metabolomics research, where technical variations in the metabolomic data should be minimized in order to reveal biological changes. We report a method and software program, IsoMS-Quant, for extracting quantitative information from a metabolomic data set generated by chemical isotope labeling (CIL) liquid chromatography mass spectrometry (LC-MS). Unlike previous work that relied on the mass spectral peak ratio of the highest-intensity peak pair to measure the relative quantity difference of a differentially labeled metabolite, this new program reconstructs the chromatographic peaks of the light- and heavy-labeled metabolite pair and then calculates the ratio of their peak areas to represent the relative concentration difference in two comparative samples. Using chromatographic peaks to perform relative quantification is shown to be more precise and accurate. IsoMS-Quant is integrated with IsoMS for picking peak pairs and Zero-fill for retrieving missing peak pairs in the initial peak pair table generated by IsoMS to form a complete tool for processing CIL LC-MS data. This program can be freely downloaded from the www.MyCompoundID.org web site for noncommercial use.
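
    The difference between apex-intensity and peak-area quantification can be shown with synthetic traces. The snippet below builds two Gaussian chromatographic peaks standing in for a light/heavy labeled pair, subtracts a flat baseline, integrates each peak numerically, and reports both the area ratio and the single-point apex ratio; the peak shapes, baseline, and helper names are invented and do not reflect IsoMS-Quant's actual peak reconstruction.

    import numpy as np

    # Synthetic chromatographic traces for a light/heavy labeled metabolite pair;
    # the heavy peak is made slightly broader so the apex and area ratios differ.
    rt = np.linspace(0.0, 60.0, 600)                 # retention time, seconds

    def gaussian_peak(t, center, width, height):
        return height * np.exp(-0.5 * ((t - center) / width) ** 2)

    baseline = 200.0
    light = gaussian_peak(rt, 30.0, 2.5, 1.0e5) + baseline
    heavy = gaussian_peak(rt, 30.2, 2.8, 1.6e5) + baseline

    def peak_area(trace, t, bl):
        """Baseline-subtracted Riemann-sum area under the reconstructed peak."""
        dt = t[1] - t[0]
        return float(np.sum(np.clip(trace - bl, 0.0, None)) * dt)

    ratio_area = peak_area(light, rt, baseline) / peak_area(heavy, rt, baseline)
    ratio_apex = light.max() / heavy.max()           # single-point ratio, for contrast
    print(f"peak-area ratio: {ratio_area:.3f}   apex-intensity ratio: {ratio_apex:.3f}")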

  4. Next generation sequencing applications for breast cancer research

    PubMed Central

    PETRIC, ROXANA COJOCNEANU; POP, LAURA-ANCUTA; JURJ, ANCUTA; RADULY, LAJOS; DUMITRASCU, DAN; DRAGOS, NICOLAE; NEAGOE, IOANA BERINDAN

    2015-01-01

    For some time, cancer has not been thought of as a single disease, but as a multifaceted, heterogeneous complex of genotypic and phenotypic manifestations leading to tumorigenesis. Due to recent technological progress, the outcome of cancer patients can be greatly improved by introducing into clinical practice the advantages brought about by the development of next generation sequencing techniques. Biomedical suppliers have come up with various applications which medical researchers can use to characterize a patient’s disease from a molecular and genetic point of view in order to provide caregivers with rapid and relevant information to guide them in choosing the most appropriate course of treatment, with maximum efficiency and minimal side effects. Breast cancer, whose incidence has risen dramatically, is a good candidate for these novel diagnosis and therapeutic approaches, particularly when referring to specific sequencing panels which are designed to detect germline or somatic mutations in genes that are involved in breast cancer tumorigenesis and progression. Benchtop next generation sequencing machines are becoming a more common presence in the clinical setting, empowering physicians to better treat their patients by offering early diagnosis alternatives and targeted remedies, and bringing medicine a step closer to achieving its ultimate goal, personalized therapy. PMID:26609257

  5. Integration of an expert system into a user interface language demonstration

    NASA Technical Reports Server (NTRS)

    Stclair, D. C.

    1986-01-01

    The need for a User Interface Language (UIL) has been recognized by the Space Station Program Office as a necessary tool to aid in minimizing the cost of software generation by multiple users. Previous history in the Space Shuttle Program has shown that many different areas of software generation, such as operations, integration, testing, etc., have each used a different user command language although the types of operations being performed were similar in many respects. Since the Space Station represents a much more complex software task, a common user command language--a user interface language--is required to support the large spectrum of space station software developers and users. To assist in the selection of an appropriate set of definitions for a UIL, a series of demonstration programs was generated with which to test UIL concepts against specific Space Station scenarios using operators for the astronaut and scientific community. Because of the importance of expert system in the space station, it was decided that an expert system should be embedded in the UIL. This would not only provide insight into the UIL components required but would indicate the effectiveness with which an expert system could function in such an environment.

  6. On the convergence of nonconvex minimization methods for image recovery.

    PubMed

    Xiao, Jin; Ng, Michael Kwok-Po; Yang, Yu-Fei

    2015-05-01

    Nonconvex nonsmooth regularization methods have been shown to be effective for restoring images with neat edges. Fast alternating minimization schemes have also been proposed and developed to solve the nonconvex nonsmooth minimization problem. The main contribution of this paper is to show the convergence of these alternating minimization schemes, based on the Kurdyka-Łojasiewicz property. In particular, we show that the iterates generated by the alternating minimization scheme converge to a critical point of this nonconvex nonsmooth objective function. We also extend the analysis to a nonconvex nonsmooth regularization model with box constraints, and obtain similar convergence results for the related minimization algorithm. Numerical examples are given to illustrate our convergence analysis.

  7. The traveling salesman problem in surgery: economy of motion for the FLS Peg Transfer task.

    PubMed

    Falcone, John L; Chen, Xiaotian; Hamad, Giselle G

    2013-05-01

    In the Peg Transfer task in the Fundamentals of Laparoscopic Surgery (FLS) curriculum, six peg objects are sequentially transferred in a bimanual fashion using laparoscopic instruments across a pegboard and back. There are over 268 trillion ways of completing this task. In the setting of many possibilities, the traveling salesman problem is one where the objective is to solve for the shortest distance traveled through a fixed number of points. The goal of this study is to apply the traveling salesman problem to find the shortest two-dimensional path length for this task. A database platform was used with permutation application output to generate all of the single-direction solutions of the FLS Peg Transfer task. A brute-force search was performed using nested Boolean operators and database equations to calculate the overall two-dimensional distances for the efficient and inefficient solutions. The solutions were found by evaluating peg object transfer distances and distances between transfers for the nondominant and dominant hands. For the 518,400 unique single-direction permutations, the mean total two-dimensional peg object travel distance was 33.3 ± 1.4 cm. The range in distances was from 30.3 to 36.5 cm. There were 1,440 (0.28 %) of 518,400 efficient solutions with the minimized peg object travel distance of 30.3 cm. There were 8 (0.0015 %) of 518,400 solutions in the final solution set that minimized the distance of peg object transfer and minimized the distance traveled between peg transfers. Peg objects moved 12.7 cm (17.4 %) less in the efficient solutions compared to the inefficient solutions. The traveling salesman problem can be applied to find efficient solutions for surgical tasks. The eight solutions to the FLS Peg Transfer task are important for any examinee taking the FLS curriculum and for certification by the American Board of Surgery.
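
    The combinatorial core of the problem can be brute-forced in a few lines for a single hand and direction. The sketch below enumerates all 720 orders for transferring six peg objects on a made-up 2-D pegboard layout and reports the shortest total path, counting both the carry distance of each peg and the travel between transfers; the coordinates and distance model are hypothetical and far simpler than the task model behind the 518,400-permutation analysis in the paper.

    from itertools import permutations
    from math import hypot

    # Hypothetical 2-D peg coordinates (cm): six source pegs and six destination
    # pegs on a simplified pegboard; the FLS task geometry is more involved.
    sources = [(2, 2), (2, 6), (2, 10), (6, 2), (6, 6), (6, 10)]
    targets = [(12, 2), (12, 6), (12, 10), (16, 2), (16, 6), (16, 10)]

    def dist(a, b):
        return hypot(a[0] - b[0], a[1] - b[1])

    def tour_length(order):
        """Total travel for transferring pegs in the given order: each peg object
        is carried from its source to the same-index target, and the instrument
        then travels from that target to the next peg's source."""
        total = 0.0
        for idx, peg in enumerate(order):
            total += dist(sources[peg], targets[peg])                  # carry the peg
            if idx + 1 < len(order):
                total += dist(targets[peg], sources[order[idx + 1]])   # reposition
        return total

    best = min(permutations(range(6)), key=tour_length)
    print("best order:", best, " distance: %.1f cm" % tour_length(best))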

  8. Application of Computational Fluid Dynamics to the Study of Vortex Flow Control for the Management of Inlet Distortion

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.; Gibb, James

    1992-01-01

    The present study demonstrates that the Reduced Navier-Stokes code RNS3D can be used very effectively to develop a vortex generator installation for the purpose of minimizing the engine face circumferential distortion by controlling the development of secondary flow. The computing times required are small enough that studies such as this are feasible within an analysis-design environment with all its constraints of time and costs. This research study also established the nature of the performance improvements that can be realized with vortex flow control, and suggests a set of aerodynamic properties (called observations) that can be used to arrive at a successful vortex generator installation design. The ultimate aim of this research is to manage inlet distortion by controlling secondary flow through arrangements of vortex generator configurations tailored to the specific aerodynamic characteristics of the inlet duct. This study also indicated that scaling between flight and typical wind tunnel test conditions is possible only within a very narrow range of generator configurations close to an optimum installation. This paper also suggests a possible law that can be used to scale generator blade height for experimental testing, but further research in this area is needed before it can be effectively applied to practical problems. Lastly, this study indicated that vortex generator installation design for inlet ducts is more complex than simply satisfying the requirement of attached flow; it must also satisfy the requirement of minimum engine face distortion.

  9. An Error-Entropy Minimization Algorithm for Tracking Control of Nonlinear Stochastic Systems with Non-Gaussian Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yunlong; Wang, Aiping; Guo, Lei

    This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error can be minimized. The performance of the error-entropy minimization criterion is compared with mean-square-error minimization in the simulation results.
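
    A small sketch of the quantity being minimized is given below: Renyi's quadratic entropy of the tracking error, estimated with a Parzen window and Gaussian kernel, which reduces to the negative log of the pairwise "information potential". The error samples here are synthetic and the controller itself is not implemented; the kernel width is an arbitrary choice.

    import numpy as np

    def renyi_quadratic_entropy(errors, sigma=0.5):
        """Parzen-window estimate of Renyi's quadratic entropy of the error samples.
        With a Gaussian kernel, H2 = -log( (1/N^2) * sum_ij G(e_i - e_j; 2*sigma^2) ),
        where the double sum is the so-called information potential."""
        e = np.asarray(errors, dtype=float)
        diff = e[:, None] - e[None, :]
        var = 2.0 * sigma ** 2
        kernel = np.exp(-diff ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
        return -np.log(kernel.mean())

    rng = np.random.default_rng(0)
    # Synthetic tracking errors: a wide bimodal (non-Gaussian) error and a tight one.
    loose = np.concatenate([rng.normal(-1.0, 0.3, 250), rng.normal(1.0, 0.3, 250)])
    tight = rng.normal(0.0, 0.3, 500)
    print("entropy, bimodal errors:", round(renyi_quadratic_entropy(loose), 3))
    print("entropy, tight errors:  ", round(renyi_quadratic_entropy(tight), 3))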

  10. MIP models for connected facility location: A theoretical and computational study☆

    PubMed Central

    Gollowitzer, Stefan; Ljubić, Ivana

    2011-01-01

    This article comprises the first theoretical and computational study on mixed integer programming (MIP) models for the connected facility location problem (ConFL). ConFL combines facility location and Steiner trees: given a set of customers, a set of potential facility locations and some inter-connection nodes, ConFL searches for the minimum-cost way of assigning each customer to exactly one open facility, and connecting the open facilities via a Steiner tree. The costs needed for building the Steiner tree, facility opening costs and the assignment costs need to be minimized. We model ConFL using seven compact and three mixed integer programming formulations of exponential size. We also show how to transform ConFL into the Steiner arborescence problem. A full hierarchy between the models is provided. For two exponential size models we develop a branch-and-cut algorithm. An extensive computational study is based on two benchmark sets of randomly generated instances with up to 1300 nodes and 115,000 edges. We empirically compare the presented models with respect to the quality of obtained bounds and the corresponding running time. We report optimal values for all but 16 instances for which the obtained gaps are below 0.6%. PMID:25009366

  11. Capacitated arc routing problem and its extensions in waste collection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fadzli, Mohammad; Najwa, Nurul; Luis, Martino

    2015-05-15

    The capacitated arc routing problem (CARP) is among the more recent graph-theoretic routing problems, focusing on solving edge/arc routing to optimality. For many years, operational research was devoted to the CARP counterpart known as the vehicle routing problem (VRP), which does not fit several real cases such as waste collection and road maintenance. In this paper, we highlight several extensions of the capacitated arc routing problem (CARP) that represent the real-life problem of vehicle operation in waste collection. CARP is designed to find a set of routes for vehicles that satisfies all pre-set constraints such that all vehicles must start and end at a depot and service a set of demands on edges (or arcs) exactly once without exceeding the capacity, so that the total fleet cost is minimized. We also address the differences between CARP and VRP in waste collection. Several issues are discussed, including stochastic demands and time window problems, in order to show the complexity and importance of CARP in the related industry. A mathematical model of CARP and its new version is presented, considering several factors such as delivery cost, lateness penalty, and delivery time.

  12. Generation of oscillating gene regulatory network motifs

    NASA Astrophysics Data System (ADS)

    van Dorp, M.; Lannoo, B.; Carlon, E.

    2013-07-01

    Using an improved version of an evolutionary algorithm originally proposed by François and Hakim [Proc. Natl. Acad. Sci. USA 101, 580 (2004)], we generated small gene regulatory networks in which the concentration of a target protein oscillates in time. These networks may serve as candidates for oscillatory modules to be found in larger regulatory networks and protein interaction networks. The algorithm was run 10^5 times to produce a large set of oscillating modules, which were systematically classified and analyzed. The robustness of the oscillations against variations of the kinetic rates was also determined, to filter out the least robust cases. Furthermore, we show that the set of evolved networks can serve as a database of models whose behavior can be compared to experimentally observed oscillations. The algorithm found three smallest (core) oscillators in which nonlinearities and number of components are minimal. Two of those are two-gene modules: the mixed feedback loop, already discussed in the literature, and an autorepressed gene coupled with a heterodimer. The third one is a single gene module which is competitively regulated by a monomer and a dimer. The evolutionary algorithm also generated larger oscillating networks, which are in part extensions of the three core modules and in part genuinely new modules. The latter includes oscillators which do not rely on feedback induced by transcription factors, but are purely of post-transcriptional type. Analysis of post-transcriptional mechanisms of oscillation may provide useful information for circadian clock research, as recent experiments showed that circadian rhythms are maintained even in the absence of transcription.

  13. Causal Set Approach to a Minimal Invariant Length

    NASA Astrophysics Data System (ADS)

    Raut, Usha

    2007-04-01

    Any attempt to quantize gravity would necessarily introduce a minimal observable length scale of the order of the Planck length. This conclusion is based on several different studies and thought experiments and appears to be an inescapable feature of all quantum gravity theories, irrespective of the method used to quantize gravity. Over the last few years there has been growing concern that such a minimal length might lead to a contradiction with the basic postulates of special relativity, in particular the Lorentz-Fitzgerald contraction. A few years ago, Rovelli et al. attempted to reconcile an invariant minimal length with Special Relativity, using the framework of loop quantum gravity. However, the inherently canonical formalism of the loop quantum approach is plagued by a variety of problems, many brought on by separation of space and time co-ordinates. In this paper we use a completely different approach. Using the framework of the causal set paradigm, along with a statistical measure of closeness between Lorentzian manifolds, we re-examine the issue of introducing a minimal observable length that is not at odds with Special Relativity postulates.

  14. Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.

    PubMed

    Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing

    2016-01-01

    Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive pairwise comparison of transitions, which is O(n²). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
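
    As a point of reference for the partition-based view described above, the sketch below implements classical Moore-style partition refinement on a toy DFA; it is not the authors' backward-depth or hash-table method, and the example automaton is made up.

    ```python
    def minimize_dfa(states, alphabet, delta, accepting):
        """Moore-style partition refinement: split blocks whose states disagree on
        the block reached under some input symbol, until the partition is stable."""
        partition = [set(accepting), set(states) - set(accepting)]
        partition = [block for block in partition if block]
        changed = True
        while changed:
            changed = False
            refined = []
            for block in partition:
                groups = {}
                for s in block:
                    # Signature: which current block each symbol leads to.
                    sig = tuple(next(k for k, b in enumerate(partition) if delta[s, a] in b)
                                for a in alphabet)
                    groups.setdefault(sig, set()).add(s)
                refined.extend(groups.values())
                changed |= len(groups) > 1
            partition = refined
        return partition

    # Toy DFA over {0, 1}: states 'b' and 'c' are equivalent and get merged.
    states, accepting = {"a", "b", "c", "d"}, {"d"}
    delta = {("a", "0"): "b", ("a", "1"): "c",
             ("b", "0"): "d", ("b", "1"): "d",
             ("c", "0"): "d", ("c", "1"): "d",
             ("d", "0"): "d", ("d", "1"): "d"}
    print(minimize_dfa(states, ["0", "1"], delta, accepting))
    ```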

  15. How Should Genes and Taxa be Sampled for Phylogenomic Analyses with Missing Data? An Empirical Study in Iguanian Lizards.

    PubMed

    Streicher, Jeffrey W; Schulte, James A; Wiens, John J

    2016-01-01

    Targeted sequence capture is becoming a widespread tool for generating large phylogenomic data sets to address difficult phylogenetic problems. However, this methodology often generates data sets in which increasing the number of taxa and loci increases amounts of missing data. Thus, a fundamental (but still unresolved) question is whether sampling should be designed to maximize sampling of taxa or genes, or to minimize the inclusion of missing data cells. Here, we explore this question for an ancient, rapid radiation of lizards, the pleurodont iguanians. Pleurodonts include many well-known clades (e.g., anoles, basilisks, iguanas, and spiny lizards) but relationships among families have proven difficult to resolve strongly and consistently using traditional sequencing approaches. We generated up to 4921 ultraconserved elements with sampling strategies including 16, 29, and 44 taxa, from 1179 to approximately 2.4 million characters per matrix and approximately 30% to 60% total missing data. We then compared mean branch support for interfamilial relationships under these 15 different sampling strategies for both concatenated (maximum likelihood) and species tree (NJst) approaches (after showing that mean branch support appears to be related to accuracy). We found that both approaches had the highest support when including loci with up to 50% missing taxa (matrices with ~40-55% missing data overall). Thus, our results show that simply excluding all missing data may be highly problematic as the primary guiding principle for the inclusion or exclusion of taxa and genes. The optimal strategy was somewhat different for each approach, a pattern that has not been shown previously. For concatenated analyses, branch support was maximized when including many taxa (44) but fewer characters (1.1 million). For species-tree analyses, branch support was maximized with minimal taxon sampling (16) but many loci (4789 of 4921). We also show that the choice of these sampling strategies can be critically important for phylogenomic analyses, since some strategies lead to demonstrably incorrect inferences (using the same method) that have strong statistical support. Our preferred estimate provides strong support for most interfamilial relationships in this important but phylogenetically challenging group. © The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  16. Testing of next-generation nonlinear calibration based non-uniformity correction techniques using SWIR devices

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna R.; Wickert, Mark A.

    2017-05-01

    A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current, amplifier mismatch as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear or piecewise-linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise-linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher order polynomial NUC algorithms feasible. This study comprehensively tests higher order polynomial NUC algorithms targeted at short wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating point results show 30% less non-uniformity, in post-corrected data, when using a third order polynomial correction algorithm rather than a second order algorithm. To maximize overall performance, a trade-off analysis on polynomial order and coefficient precision is performed. Comprehensive testing, across multiple data sets, provides next generation model validation and performance benchmarks for higher order polynomial NUC methods.
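
    The per-pixel polynomial correction discussed above can be illustrated with a small numpy sketch: fit, for every pixel, a third-order polynomial mapping raw response to a common target level from a handful of uniform calibration frames, then apply it to new frames. The calibration levels, sensor size, and response model below are made-up toy values, not the paper's SWIR data or its exact algorithm.

    ```python
    import numpy as np

    def fit_polynomial_nuc(calib_frames, target_levels, order=3):
        """Per-pixel polynomial fit mapping raw response to the desired output level.
        calib_frames: (K, H, W) raw frames at K uniform illumination levels.
        target_levels: (K,) desired corrected value for each level."""
        k, h, w = calib_frames.shape
        raw = calib_frames.reshape(k, -1)
        coeffs = np.empty((order + 1, h * w))
        for p in range(h * w):                       # independent fit for every pixel
            coeffs[:, p] = np.polyfit(raw[:, p], target_levels, order)
        return coeffs.reshape(order + 1, h, w)

    def apply_nuc(frame, coeffs):
        """Evaluate the per-pixel polynomial (highest-order coefficient first)."""
        order = coeffs.shape[0] - 1
        corrected = np.zeros_like(frame, dtype=float)
        for i, c in enumerate(coeffs):
            corrected += c * frame ** (order - i)
        return corrected

    # Toy 8x8 sensor with random gain, offset and a mild quadratic nonlinearity.
    rng = np.random.default_rng(1)
    levels = np.array([10.0, 40.0, 80.0, 120.0, 160.0])
    gain = 1 + 0.1 * rng.standard_normal((8, 8))
    offset = 5 * rng.standard_normal((8, 8))
    frames = gain * levels[:, None, None] + offset + 1e-3 * levels[:, None, None] ** 2
    coeffs = fit_polynomial_nuc(frames, levels, order=3)
    print(np.abs(apply_nuc(frames[2], coeffs) - levels[2]).max())   # small residual
    ```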

  17. COMCAN: a computer program for common cause analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burdick, G.R.; Marshall, N.H.; Wilson, J.R.

    1976-05-01

    The computer program, COMCAN, searches the fault tree minimal cut sets for shared susceptibility to various secondary events (common causes) and common links between components. In the case of common causes, a location check may also be performed by COMCAN to determine whether barriers to the common cause exist between components. The program can locate common manufacturers of components having events in the same minimal cut set. A relative ranking scheme for secondary event susceptibility is included in the program.
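
    The kind of search COMCAN performs can be illustrated with a toy sketch: for each minimal cut set, check whether all of its components share a secondary-event susceptibility (a common cause candidate) or a common manufacturer. The component names, susceptibilities, and cut sets below are invented for illustration and are not from the program's input format.

    ```python
    # Toy COMCAN-style search over minimal cut sets (all data are hypothetical).
    susceptibility = {            # component -> secondary events it is vulnerable to
        "pump_A":  {"fire", "flood"},
        "pump_B":  {"fire"},
        "valve_1": {"fire", "vibration"},
        "valve_2": {"flood"},
    }
    manufacturer = {"pump_A": "Acme", "pump_B": "Acme",
                    "valve_1": "Zenith", "valve_2": "Acme"}

    minimal_cut_sets = [{"pump_A", "pump_B"}, {"pump_A", "valve_2"}, {"valve_1", "valve_2"}]

    for cut_set in minimal_cut_sets:
        shared_causes = set.intersection(*(susceptibility[c] for c in cut_set))
        shared_makers = {manufacturer[c] for c in cut_set}
        if shared_causes:
            print(sorted(cut_set), "common cause candidates:", sorted(shared_causes))
        if len(shared_makers) == 1:
            print(sorted(cut_set), "common manufacturer:", shared_makers.pop())
    ```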

  18. Minimizing both dropped formulas and concepts in knowledge fusion

    NASA Astrophysics Data System (ADS)

    Grégoire, Éric

    2006-04-01

    In this paper, a new family of approaches to fuse inconsistent knowledge sources is introduced in a standard logical setting. They combine two preference criteria to arbitrate between conflicting information: the minimization of falsified formulas and the minimization of the number of the different atoms that are involved in those formulas. Although these criteria exhibit a syntactical flavor, the approaches are semantically-defined.

  19. Minimal Reduplication

    ERIC Educational Resources Information Center

    Kirchner, Jesse Saba

    2010-01-01

    This dissertation introduces Minimal Reduplication, a new theory and framework within generative grammar for analyzing reduplication in human language. I argue that reduplication is an emergent property in multiple components of the grammar. In particular, reduplication occurs independently in the phonology and syntax components, and in both cases…

  20. 2016 American College of Rheumatology/European League Against Rheumatism Criteria for Minimal, Moderate, and Major Clinical Response in Adult Dermatomyositis and Polymyositis: An International Myositis Assessment and Clinical Studies Group/Paediatric Rheumatology International Trials Organisation Collaborative Initiative.

    PubMed

    Aggarwal, Rohit; Rider, Lisa G; Ruperto, Nicolino; Bayat, Nastaran; Erman, Brian; Feldman, Brian M; Oddis, Chester V; Amato, Anthony A; Chinoy, Hector; Cooper, Robert G; Dastmalchi, Maryam; Fiorentino, David; Isenberg, David; Katz, James D; Mammen, Andrew; de Visser, Marianne; Ytterberg, Steven R; Lundberg, Ingrid E; Chung, Lorinda; Danko, Katalin; García-De la Torre, Ignacio; Song, Yeong Wook; Villa, Luca; Rinaldi, Mariangela; Rockette, Howard; Lachenbruch, Peter A; Miller, Frederick W; Vencovsky, Jiri

    2017-05-01

    To develop response criteria for adult dermatomyositis (DM) and polymyositis (PM). Expert surveys, logistic regression, and conjoint analysis were used to develop 287 definitions using core set measures. Myositis experts rated greater improvement among multiple pairwise scenarios in conjoint analysis surveys, where different levels of improvement in 2 core set measures were presented. The PAPRIKA (Potentially All Pairwise Rankings of All Possible Alternatives) method determined the relative weights of core set measures and conjoint analysis definitions. The performance characteristics of the definitions were evaluated on patient profiles using expert consensus (gold standard) and were validated using data from a clinical trial. The nominal group technique was used to reach consensus. Consensus was reached for a conjoint analysis-based continuous model using absolute percent change in core set measures (physician, patient, and extramuscular global activity, muscle strength, Health Assessment Questionnaire, and muscle enzyme levels). A total improvement score (range 0-100), determined by summing scores for each core set measure, was based on improvement in and relative weight of each core set measure. Thresholds for minimal, moderate, and major improvement were ≥20, ≥40, and ≥60 points in the total improvement score. The same criteria were chosen for juvenile DM, with different improvement thresholds. Sensitivity and specificity in DM/PM patient cohorts were 85% and 92%, 90% and 96%, and 92% and 98% for minimal, moderate, and major improvement, respectively. Definitions were validated in the clinical trial analysis for differentiating the physician rating of improvement (P < 0.001). The response criteria for adult DM/PM consisted of the conjoint analysis model based on absolute percent change in 6 core set measures, with thresholds for minimal, moderate, and major improvement. © 2017, American College of Rheumatology.
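
    The improvement thresholds quoted above amount to a small decision rule. The sketch below maps a total improvement score to the response category; the weighting of the six core set measures into that score is not reproduced here, so this is only an illustration of the published cut-points.

    ```python
    def response_category(total_improvement_score):
        """Map the 0-100 total improvement score to the 2016 ACR/EULAR categories
        for adult DM/PM (thresholds >=20, >=40, >=60 as stated in the abstract)."""
        if total_improvement_score >= 60:
            return "major improvement"
        if total_improvement_score >= 40:
            return "moderate improvement"
        if total_improvement_score >= 20:
            return "minimal improvement"
        return "no qualifying improvement"

    for score in (15, 25, 45, 72):
        print(score, "->", response_category(score))
    ```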

  1. Implications of Minimizing Trauma During Conventional Cochlear Implantation

    PubMed Central

    Carlson, Matthew L.; Driscoll, Colin L. W.; Gifford, René H.; Service, Geoffrey J.; Tombers, Nicole M.; Hughes-Borst, Becky J.; Neff, Brian A.; Beatty, Charles W.

    2014-01-01

    Objective To describe the relationship between implantation-associated trauma and postoperative speech perception scores among adult and pediatric patients undergoing cochlear implantation using conventional length electrodes and minimally traumatic surgical techniques. Study Design Retrospective chart review (2002–2010). Setting Tertiary academic referral center. Patients All subjects with significant preoperative low-frequency hearing (≤70 dB HL at 250 Hz) who underwent cochlear implantation with a newer generation implant electrode (Nucleus Contour Advance, Advanced Bionics HR90K [1J and Helix], and Med El Sonata standard H array) were reviewed. Intervention(s) Preimplant and postimplant audiometric thresholds and speech recognition scores were recorded using the electronic medical record. Main Outcome Measure(s) Postimplantation pure tone threshold shifts were used as a surrogate measure for extent of intracochlear injury and correlated with postoperative speech perception scores. Results Between 2002 and 2010, 703 cochlear implant (CI) operations were performed. Data from 126 implants were included in the analysis. The mean preoperative low-frequency pure-tone average was 55.4 dB HL. Hearing preservation was observed in 55% of patients. Patients with hearing preservation were found to have significantly higher postoperative speech perception performance in the cochlear implantation-only condition than those who lost all residual hearing. Conclusion Conservation of acoustic hearing after conventional length cochlear implantation is unpredictable but remains a realistic goal. The combination of improved technology and refined surgical technique may allow for conservation of some residual hearing in more than 50% of patients. Germane to the conventional length CI recipient with substantial hearing loss, minimizing trauma allows for improved speech perception in the electric condition. These findings support the use of minimally traumatic techniques in all CI recipients, even those destined for electric-only stimulation. PMID:21659922

  2. Second-generation endometrial ablation technologies: the hot liquid balloons.

    PubMed

    Vilos, George A; Edris, Fawaz

    2007-12-01

    Hysteroscopic endometrial ablation (HEA) was introduced in the 1980s to treat menorrhagia. Its use required additional training, surgical expertise and specialized equipment to minimize emergent complications such as uterine perforations, thermal injuries and excessive fluid absorption. To overcome these difficulties and concerns, thermal balloon endometrial ablation (TBEA) was introduced in the 1990s. Four hot liquid balloons have been introduced into clinical practice. All systems consist of a catheter (4-10 mm diameter), a silicone balloon and a control unit. Liquids used to inflate the balloons include internally heated dextrose in water (ThermaChoice, 87 degrees C), and externally heated glycine (Cavaterm, 78 degrees C), saline (Menotreat, 85 degrees C) and glycerine (Thermablate, 173 degrees C). All balloons require pressurization from 160 to 240 mmHg for treatment cycles of 2 to 10 minutes. Prior to TBEA, preoperative endometrial thinning, including suction curettage, is optional. Several RCTs and cohort studies indicate that the advantages of TBEA include portability, ease of use and short learning curve. In addition, small diameter catheters requiring minimal cervical dilatation (5-7 mm) and short duration of treatment cycles (2-8 min) allow treatment under minimal analgesia/anesthesia requirements in a clinic setting. Following TBEA, serious adverse events, including thermal injuries to viscera, have been experienced. To minimize such injuries some surgeons advocate the use of routine post-dilatation hysteroscopy and/or ultrasonography to confirm correct intrauterine placement of the balloon prior to initiating the treatment cycle. After 10 years of clinical practice, TBEA is thought to be the preferred first-line surgical treatment of menorrhagia in appropriately selected candidates. Economic modeling also suggested that TBEA may be more cost-effective than HEA.

  3. Radial Symmetry of p-Harmonic Minimizers

    NASA Astrophysics Data System (ADS)

    Koski, Aleksis; Onninen, Jani

    2018-03-01

    "It is still not known if the radial cavitating minimizers obtained by uc(Ball) (Philos Trans R Soc Lond A 306:557-611, 1982) (and subsequently by many others) are global minimizers of any physically reasonable nonlinearly elastic energy". This quotation is from uc(Sivaloganathan) and uc(Spector) (Ann Inst Henri Poincaré Anal Non Linéaire 25(1):201-213, 2008) and seems to be still accurate. The model case of the p-harmonic energy is considered here. We prove that the planar radial minimizers are indeed the global minimizers provided we prescribe the admissible deformations on the boundary. In the traction free setting, however, even the identity map need not be a global minimizer.

  4. Minimal two-sphere model of the generation of fluid flow at low Reynolds numbers.

    PubMed

    Leoni, M; Bassetti, B; Kotar, J; Cicuta, P; Cosentino Lagomarsino, M

    2010-03-01

    Locomotion and generation of flow at low Reynolds number are subject to severe limitations due to the irrelevance of inertia: the "scallop theorem" requires that the system have at least two degrees of freedom, which move in non-reciprocal fashion, i.e. breaking time-reversal symmetry. We show here that a minimal model consisting of just two spheres driven by harmonic potentials is capable of generating flow. In this pump system the two degrees of freedom are the mean and relative positions of the two spheres. We have performed and compared analytical predictions, numerical simulation and experiments, showing that a time-reversible drive is sufficient to induce flow.

  5. Generation of a widely spaced optical frequency comb using an amplitude modulator pair

    NASA Astrophysics Data System (ADS)

    Gunning, Fatima C. G.; Ellis, Andrew D.

    2005-06-01

    Multi-wavelength sources are required for wavelength division multiplexed (WDM) optical communication systems, and typically a bank of DFB lasers is used. However, large costs are involved to provide wavelength-selected sources and high precision wavelength lockers. Optical comb generation is an attractive solution, minimizing the component count and improving wavelength stability. In addition, comb generation offers the potential for new WDM architectures, such as coherent WDM, as it preserves the phase relation between the generated channels. Complex comb generation systems have been introduced in the past, using fibre ring lasers [1] or non-linear effects within long fibres [2]. More recently, simpler set-ups were proposed, including hybrid amplitude-phase modulation schemes [3-5]. However, the narrow line spacing of these systems, typically 17 GHz, restricts their use to bit rates up to 10 Gbit/s. In this paper, we propose and demonstrate a simple method of comb generation that is suitable for bit rates up to 42.667 Gbit/s. The comb generator was composed of two Mach-Zehnder modulators (MZM) in series, each being driven with a sinusoidal wave at 42.667 GHz with a well-defined phase relationship. As a result, 7 comb lines separated by 42.667 GHz were generated from a single source when drive amplitudes of up to 2.2 Vp were applied to the modulators, giving a flatness better than 1 dB. By passively multiplexing 8 source lasers with the comb generator and minimising inter-modulator dispersion, it was possible to achieve a multi-wavelength transmitter with 56 channels, with flatness better than 1.2 dB across 20 nm (2.4 THz).

  6. Laser beam generating apparatus

    DOEpatents

    Warner, Bruce E.; Duncan, David B.

    1993-01-01

    Laser beam generating apparatus including a septum segment disposed longitudinally within the tubular structure of the apparatus. The septum provides for radiatively dissipating heat buildup within the tubular structure and for generating relatively uniform laser beam pulses so as to minimize or eliminate radial pulse delays (the chevron effect).

  7. Laser beam generating apparatus

    DOEpatents

    Warner, Bruce E.; Duncan, David B.

    1994-01-01

    Laser beam generating apparatus including a septum segment disposed longitudinally within the tubular structure of the apparatus. The septum provides for radiatively dissipating heat buildup within the tubular structure and for generating relatively uniform laser beam pulses so as to minimize or eliminate radial pulse delays (the chevron effect).

  8. Suction forces generated by passive bile bag drainage on a model of post-subdural hematoma evacuation.

    PubMed

    Tenny, Steven O; Thorell, William E

    2018-05-05

    Passive drainage systems are commonly used after subdural hematoma evacuation but there is a dearth of published data regarding the suction forces created. We set out to quantify the suction forces generated by a passive drainage system. We created a model of passive drainage after subdural hematoma evacuation. We measured the maximum suction force generated with a bile bag drain for both empty drain tubing and fluid-filled drain tube causing a siphoning effect. We took measurements at varying heights of the bile bag to analyze if bile bag height changed suction forces generated. An empty bile bag with no fluid in the drainage tube connected to a rigid, fluid-filled model creates minimal suction force of 0.9 mmHg (95% CI 0.64-1.16 mmHg). When fluid fills the drain tubing, a siphoning effect is created and can generate suction forces ranging from 18.7 to 30.6 mmHg depending on the relative position of the bile bag and filled amount of the bile bag. The suction forces generated are statistically different if the bile bag is 50 cm below, level with or 50 cm above the experimental model. Passive bile bag drainage does not generate significant suction on a fluid-filled rigid model if the drain tubing is empty. If fluid fills the drain tubing then siphoning occurs and can increase the suction force of a passive bile bag drainage system to levels comparable to partially filled Jackson-Pratt bulb drainage.
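
    A rough hydrostatic estimate helps interpret the measured siphon forces: a continuous fluid column of height h exerts a pressure of approximately rho*g*h. The sketch below converts column heights to mmHg assuming a water-like drainage fluid; bag compliance, fill level, and tubing geometry, which the experiment shows also matter, are ignored in this back-of-envelope calculation.

    ```python
    RHO = 1000.0          # kg/m^3, water-like drainage fluid (assumption)
    G = 9.81              # m/s^2
    PA_PER_MMHG = 133.322

    def column_pressure_mmhg(height_m):
        """Hydrostatic pressure of a fluid column of the given height, in mmHg."""
        return RHO * G * height_m / PA_PER_MMHG

    # Pressure head for a 25 cm and a 50 cm drop from the model to the bile bag:
    # roughly 18 and 37 mmHg, bracketing the reported 18.7-30.6 mmHg range.
    for h in (0.25, 0.50):
        print(f"{h * 100:.0f} cm column ~ {column_pressure_mmhg(h):.1f} mmHg")
    ```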

  9. INTELLIGENT DECISION SUPPORT FOR WASTE MINIMIZATION IN ELECTROPLATING PLANTS. (R824732)

    EPA Science Inventory

    Abstract

    Wastewater, spent solvent, spent process solutions, and sludge are the major waste streams generated in large volumes daily in electroplating plants. These waste streams can be significantly minimized through process modification and operational improvement. I...

  10. Development of a new procedure for the determination of captopril in pharmaceutical formulations employing chemiluminescence and a multicommuted flow analysis approach.

    PubMed

    Lima, Manoel J A; Fernandes, Ridvan N; Tanaka, Auro A; Reis, Boaventura F

    2016-02-01

    This paper describes a new technique for the determination of captopril in pharmaceutical formulations, implemented by employing multicommuted flow analysis. The analytical procedure was based on the reaction between hypochlorite and captopril. The remaining hypochlorite oxidized luminol that generated electromagnetic radiation detected using a homemade luminometer. To the best of our knowledge, this is the first time that this reaction has been exploited for the determination of captopril in pharmaceutical products, offering a clean analytical procedure with minimal reagent usage. The effectiveness of the proposed procedure was confirmed by analyzing a set of pharmaceutical formulations. Application of the paired t-test showed that there was no significant difference between the data sets at a 95% confidence level. The useful features of the new analytical procedure included a linear response for captopril concentrations in the range 20.0-150.0 µmol/L (r = 0.997), a limit of detection (3σ) of 2.0 µmol/L, a sample throughput of 164 determinations per hour, reagent consumption of 9 µg luminol and 42 µg hypochlorite per determination and generation of 0.63 mL of waste. A relative standard deviation of 1% (n = 6) for a standard solution containing 80 µmol/L captopril was also obtained. Copyright © 2015 John Wiley & Sons, Ltd.

  11. Robustly Aligning a Shape Model and Its Application to Car Alignment of Unknown Pose.

    PubMed

    Li, Yan; Gu, Leon; Kanade, Takeo

    2011-09-01

    Precisely localizing in an image a set of feature points that form a shape of an object, such as car or face, is called alignment. Previous shape alignment methods attempted to fit a whole shape model to the observed data, based on the assumption of Gaussian observation noise and the associated regularization process. However, such an approach, though able to deal with Gaussian noise in feature detection, turns out not to be robust or precise because it is vulnerable to gross feature detection errors or outliers resulting from partial occlusions or spurious features from the background or neighboring objects. We address this problem by adopting a randomized hypothesis-and-test approach. First, a Bayesian inference algorithm is developed to generate a shape-and-pose hypothesis of the object from a partial shape or a subset of feature points. For alignment, a large number of hypotheses are generated by randomly sampling subsets of feature points, and then evaluated to find the one that minimizes the shape prediction error. This method of randomized subset-based matching can effectively handle outliers and recover the correct object shape. We apply this approach on a challenging data set of over 5,000 different-posed car images, spanning a wide variety of car types, lighting, background scenes, and partial occlusions. Experimental results demonstrate favorable improvements over previous methods on both accuracy and robustness.
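
    The hypothesis-and-test idea can be illustrated with a stripped-down RANSAC-style sketch that estimates only a 2D translation between a model shape and noisy detections containing outliers; the Bayesian shape-and-pose inference of the paper is not reproduced, and all data and tolerances below are toy assumptions.

    ```python
    import numpy as np

    def ransac_translation(model_pts, detected_pts, n_hypotheses=200, subset_size=3,
                           inlier_tol=0.05, seed=None):
        """Randomized hypothesis-and-test: estimate a 2D translation from random
        subsets of correspondences and keep the hypothesis with the most inliers."""
        rng = np.random.default_rng(seed)
        best_t, best_inliers = None, -1
        n = len(model_pts)
        for _ in range(n_hypotheses):
            idx = rng.choice(n, size=subset_size, replace=False)
            t = (detected_pts[idx] - model_pts[idx]).mean(axis=0)      # hypothesis
            residuals = np.linalg.norm(detected_pts - (model_pts + t), axis=1)
            inliers = int((residuals < inlier_tol).sum())              # test
            if inliers > best_inliers:
                best_t, best_inliers = t, inliers
        return best_t, best_inliers

    # Toy shape with 20 points, translated by (0.3, -0.2); 30% of detections are outliers.
    rng = np.random.default_rng(2)
    model = rng.random((20, 2))
    detected = model + np.array([0.3, -0.2]) + 0.01 * rng.standard_normal((20, 2))
    detected[:6] = rng.random((6, 2))                                  # gross outliers
    t, inliers = ransac_translation(model, detected, seed=3)
    print(t, inliers)
    ```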

  12. Progress in the development of paper-based diagnostics for low-resource point-of-care settings

    PubMed Central

    Byrnes, Samantha; Thiessen, Gregory; Fu, Elain

    2014-01-01

    This Review focuses on recent work in the field of paper microfluidics that specifically addresses the goal of translating the multistep processes that are characteristic of gold-standard laboratory tests to low-resource point-of-care settings. A major challenge is to implement multistep processes with the robust fluid control required to achieve the necessary sensitivity and specificity of a given application in a user-friendly package that minimizes equipment. We review key work in the areas of fluidic controls for automation in paper-based devices, readout methods that minimize dedicated equipment, and power and heating methods that are compatible with low-resource point-of-care settings. We also highlight a focused set of recent applications and discuss future challenges. PMID:24256361

  13. EDMUS, a European database for multiple sclerosis.

    PubMed

    Confavreux, C; Compston, D A; Hommes, O R; McDonald, W I; Thompson, A J

    1992-08-01

    EDMUS is a minimal descriptive record developed for research purposes to document clinical and laboratory data in patients with multiple sclerosis (MS). It has been designed by a committee of the European Concerted Action for MS, organised under the auspices of the Commission of the European Communities. The software is user-friendly and fast, with a minimal set of obligatory data. Priority has been given to analytical data and the system is capable of automatically generating data, such as diagnosis classification, using appropriate algorithms. This procedure saves time, ensures a uniform approach to individual cases and allows automatic updating of the classification whenever additional information becomes available. It is also compatible with future developments and requirements since new algorithms can be entered in the programme when necessary. This system is flexible and may be adapted to the user's needs. It is run on Apple and IBM-PC personal microcomputers. Great care has been taken to preserve confidentiality of the data. It is anticipated that this "common" language will enable the collection of appropriate cases for specific purposes, including population-based studies of MS and will be particularly useful in projects where the collaboration of several centres is needed to recruit a critical number of patients.

  14. Effect of the chlorinated washing of minimally processed vegetables on the generation of haloacetic acids.

    PubMed

    Cardador, Maria Jose; Gallego, Mercedes

    2012-07-25

    Chlorine solutions are usually used to sanitize fruit and vegetables in the fresh-cut industry due to their efficacy, low cost, and simple use. However, disinfection byproducts such as haloacetic acids (HAAs) can be formed during this process, which can remain on minimally processed vegetables (MPVs). These compounds are toxic and/or carcinogenic and have been associated with human health risks; therefore, the U.S. Environmental Protection Agency has set a maximum contaminant level for five HAAs at 60 μg/L in drinking water. This paper describes the first method to determine the nine HAAs that can be present in MPV samples, with static headspace coupled with gas chromatography-mass spectrometry where the leaching and derivatization of the HAAs are carried out in a single step. The proposed method is sensitive, with limits of detection between 0.1 and 2.4 μg/kg and an average relative standard deviation of ∼8%. From the samples analyzed, we can conclude that about 23% of them contain at least two HAAs (<0.4-24 μg/kg), which showed that these compounds are formed during washing and then remain on the final product.

  15. BFEE: A User-Friendly Graphical Interface Facilitating Absolute Binding Free-Energy Calculations.

    PubMed

    Fu, Haohao; Gumbart, James C; Chen, Haochuan; Shao, Xueguang; Cai, Wensheng; Chipot, Christophe

    2018-03-26

    Quantifying protein-ligand binding has attracted the attention of both theorists and experimentalists for decades. Many methods for estimating binding free energies in silico have been reported in recent years. Proper use of the proposed strategies requires, however, adequate knowledge of the protein-ligand complex, the mathematical background for deriving the underlying theory, and time for setting up the simulations, bookkeeping, and postprocessing. Here, to minimize human intervention, we propose a toolkit aimed at facilitating the accurate estimation of standard binding free energies using a geometrical route, coined the binding free-energy estimator (BFEE), and introduced it as a plug-in of the popular visualization program VMD. Benefitting from recent developments in new collective variables, BFEE can be used to generate the simulation input files, based solely on the structure of the complex. Once the simulations are completed, BFEE can also be utilized to perform the post-treatment of the free-energy calculations, allowing the absolute binding free energy to be estimated directly from the one-dimensional potentials of mean force in simulation outputs. The minimal amount of human intervention required during the whole process combined with the ergonomic graphical interface makes BFEE a very effective and practical tool for the end-user.

  16. Coarse-graining errors and numerical optimization using a relative entropy framework.

    PubMed

    Chaimovich, Aviel; Shell, M Scott

    2011-03-07

    The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, S(rel), that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework. © 2011 American Institute of Physics.
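
    A one-dimensional toy makes the variational idea concrete: the relative entropy between a reference ensemble (known only through samples) and a parametric coarse model differs from the negative average log-likelihood of the coarse model by a parameter-independent constant, so minimizing one minimizes the other. The sketch below fits a single Gaussian to bimodal reference samples this way; it is a didactic stand-in, not the molecular coarse-graining machinery of the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Reference ("fully atomic") ensemble: a bimodal 1-D distribution, known only
    # through samples.  The coarse model is a Gaussian with parameters (mu, log_sigma).
    rng = np.random.default_rng(4)
    ref = np.concatenate([rng.normal(-1.0, 0.4, 5000), rng.normal(1.2, 0.6, 5000)])

    def relative_entropy_up_to_const(params):
        """S_rel = <ln p_ref - ln p_cg>_ref; the p_ref term does not depend on the
        coarse parameters, so minimizing S_rel is the same as minimizing -<ln p_cg>_ref."""
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        log_p_cg = (-0.5 * ((ref - mu) / sigma) ** 2
                    - np.log(sigma) - 0.5 * np.log(2 * np.pi))
        return -log_p_cg.mean()

    result = minimize(relative_entropy_up_to_const, x0=[0.0, 0.0])
    mu_opt, sigma_opt = result.x[0], np.exp(result.x[1])
    print(mu_opt, sigma_opt)   # close to the mean and spread of the reference samples
    ```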

  17. Multimodal Registration of White Matter Brain Data via Optimal Mass Transport.

    PubMed

    Rehman, Tauseefur; Haber, Eldad; Pohl, Kilian M; Haker, Steven; Halle, Mike; Talos, Florin; Wald, Lawrence L; Kikinis, Ron; Tannenbaum, Allen

    2008-09-01

    The elastic registration of medical scans from different acquisition sequences is becoming an important topic for many research labs that would like to continue the post-processing of medical scans acquired via the new generation of high-field-strength scanners. In this note, we present a parameter-free registration algorithm that is well suited for this scenario as it requires no tuning to specific acquisition sequences. The algorithm encompasses a new numerical scheme for computing elastic registration maps based on the minimizing flow approach to optimal mass transport. The approach utilizes all of the gray-scale data in both images, and the optimal mapping from image A to image B is the inverse of the optimal mapping from B to A . Further, no landmarks need to be specified, and the minimizer of the distance functional involved is unique. We apply the algorithm to register the white matter folds of two different scans and use the results to parcellate the cortex of the target image. To the best of our knowledge, this is the first time that the optimal mass transport function has been applied to register large 3D multimodal data sets.

  18. Multimodal Registration of White Matter Brain Data via Optimal Mass Transport

    PubMed Central

    Rehman, Tauseefur; Haber, Eldad; Pohl, Kilian M.; Haker, Steven; Halle, Mike; Talos, Florin; Wald, Lawrence L.; Kikinis, Ron; Tannenbaum, Allen

    2017-01-01

    The elastic registration of medical scans from different acquisition sequences is becoming an important topic for many research labs that would like to continue the post-processing of medical scans acquired via the new generation of high-field-strength scanners. In this note, we present a parameter-free registration algorithm that is well suited for this scenario as it requires no tuning to specific acquisition sequences. The algorithm encompasses a new numerical scheme for computing elastic registration maps based on the minimizing flow approach to optimal mass transport. The approach utilizes all of the gray-scale data in both images, and the optimal mapping from image A to image B is the inverse of the optimal mapping from B to A. Further, no landmarks need to be specified, and the minimizer of the distance functional involved is unique. We apply the algorithm to register the white matter folds of two different scans and use the results to parcellate the cortex of the target image. To the best of our knowledge, this is the first time that the optimal mass transport function has been applied to register large 3D multimodal data sets. PMID:28626844

  19. Trial Watch

    PubMed Central

    Galluzzi, Lorenzo; Vacchelli, Erika; Eggermont, Alexander; Fridman, Wolf Hervé; Galon, Jerome; Sautès-Fridman, Catherine; Tartour, Eric; Zitvogel, Laurence; Kroemer, Guido

    2012-01-01

    During the last two decades, several approaches for the activation of the immune system against cancer have been developed. These include rather unselective maneuvers such as the systemic administration of immunostimulatory agents (e.g., interleukin-2) as well as targeted interventions, encompassing highly specific monoclonal antibodies, vaccines and cell-based therapies. Among the latter, adoptive cell transfer (ACT) involves the selection of autologous lymphocytes with antitumor activity, their expansion/activation ex vivo, and their reinfusion into the patient, often in the context of lymphodepleting regimens (to minimize endogenous immunosuppression). Such autologous cells can be isolated from tumor-infiltrating lymphocytes or generated by manipulating circulating lymphocytes for the expression of tumor-specific T-cell receptors. In addition, autologous lymphocytes can be genetically engineered to prolong their in vivo persistence, to boost antitumor responses and/or to minimize side effects. ACT has recently been shown to be associated with a consistent rate of durable regressions in melanoma and renal cell carcinoma patients and holds great promises in several other oncological settings. In this Trial Watch, we will briefly review the scientific rationale behind ACT and discuss the progress of recent clinical trials evaluating the safety and effectiveness of adoptive cell transfer as an anticancer therapy. PMID:22737606

  20. Collision-avoidance behaviors of minimally restrained flying locusts to looming stimuli

    PubMed Central

    Chan, R. WM.; Gabbiani, F.

    2013-01-01

    SUMMARY Visually guided collision avoidance is of paramount importance in flight, for instance to allow escape from potential predators. Yet, little is known about the types of collision-avoidance behaviors that may be generated by flying animals in response to an impending visual threat. We studied the behavior of minimally restrained locusts flying in a wind tunnel as they were subjected to looming stimuli presented to the side of the animal, simulating the approach of an object on a collision course. Using high-speed movie recordings, we observed a wide variety of collision-avoidance behaviors including climbs and dives away from – but also towards – the stimulus. In a more restrained setting, we were able to relate kinematic parameters of the flapping wings with yaw changes in the trajectory of the animal. Asymmetric wing flapping was most strongly correlated with changes in yaw, but we also observed a substantial effect of wing deformations. Additionally, the effect of wing deformations on yaw was relatively independent of that of wing asymmetries. Thus, flying locusts exhibit a rich range of collision-avoidance behaviors that depend on several distinct aerodynamic characteristics of wing flapping flight. PMID:23364572

  1. Augmented reality in the surgery of cerebral aneurysms: a technical report.

    PubMed

    Cabrilo, Ivan; Bijlenga, Philippe; Schaller, Karl

    2014-06-01

    Augmented reality is the overlay of computer-generated images on real-world structures. It has previously been used for image guidance during surgical procedures, but it has never been used in the surgery of cerebral aneurysms. To report our experience of cerebral aneurysm surgery aided by augmented reality. Twenty-eight patients with 39 unruptured aneurysms were operated on in a prospective manner with augmented reality. Preoperative 3-dimensional image data sets (angio-magnetic resonance imaging, angio-computed tomography, and 3-dimensional digital subtraction angiography) were used to create virtual segmentations of patients' vessels, aneurysms, aneurysm necks, skulls, and heads. These images were injected intraoperatively into the eyepiece of the operating microscope. An example case of an unruptured posterior communicating artery aneurysm clipping is illustrated in a video. The described operating procedure allowed continuous monitoring of the accuracy of patient registration with neuronavigation data and assisted in the performance of tailored surgical approaches and optimal clipping with minimized exposure. Augmented reality may add to the performance of a minimally invasive approach, although further studies need to be performed to evaluate whether certain groups of aneurysms are more likely to benefit from it. Further technological development is required to improve its user friendliness.

  2. Optimal Inversion Parameters for Full Waveform Inversion using OBS Data Set

    NASA Astrophysics Data System (ADS)

    Kim, S.; Chung, W.; Shin, S.; Kim, D.; Lee, D.

    2017-12-01

    In recent years, Full Waveform Inversion (FWI) has been the most researched technique in seismic data processing. It uses the residuals between observed and modeled data as an objective function; thereafter, the final subsurface velocity model is generated through a series of iterations meant to minimize the residuals. Research on FWI has expanded from acoustic media to elastic media. In acoustic media, the subsurface property is defined by P-velocity; however, in elastic media, properties are defined by multiple parameters, such as P-velocity, S-velocity, and density. Further, the elastic media can also be defined by Lamé constants and density, or by impedances PI and SI; consequently, research is being carried out to ascertain the optimal parameters. With results from advanced exploration equipment and Ocean Bottom Seismic (OBS) surveys, it is now possible to obtain multi-component seismic data. However, to perform FWI on these data and generate an accurate subsurface model, it is important to determine optimal inversion parameters among (Vp, Vs, ρ), (λ, μ, ρ), and (PI, SI) in elastic media. In this study, a staggered-grid finite difference method was applied to simulate the OBS survey. In the inversion, the l2-norm was set as the objective function. Further, the accurate computation of the gradient direction was performed using the back-propagation technique, and its scaling was done using the pseudo-Hessian matrix. In acoustic media, only Vp is used as the inversion parameter. In contrast, various sets of parameters, such as (Vp, Vs, ρ) and (λ, μ, ρ), can be used to define the inversion in elastic media. Therefore, it is important to ascertain the parameter set that gives the most accurate result for inversion with the OBS data set. In this study, we generated Vp and Vs subsurface models by using (λ, μ, ρ) and (Vp, Vs, ρ) as inversion parameters in every iteration, and compared the final two FWI results. This research was supported by the Basic Research Project (17-3312) of the Korea Institute of Geoscience and Mineral Resources (KIGAM) funded by the Ministry of Science, ICT and Future Planning of Korea.
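
    The iteration loop sketched in the abstract (model, synthesize data, form residuals, back-propagate a gradient, update) can be illustrated with a toy in which a linear operator stands in for wave-equation modeling. The sketch below is only that structural illustration; the staggered-grid elastic modeling, pseudo-Hessian scaling, and parameterization comparisons of the study are not represented, and all sizes and step lengths are arbitrary.

    ```python
    import numpy as np

    # Toy stand-in for FWI: the "forward model" d = G m is linear here, whereas real
    # FWI uses wave-equation modeling; the loop structure is what this illustrates.
    rng = np.random.default_rng(5)
    n_params, n_data = 20, 60
    G = rng.standard_normal((n_data, n_params))          # forward operator (assumption)
    m_true = rng.standard_normal(n_params)               # "true" subsurface model
    d_obs = G @ m_true                                   # observed data

    m = np.zeros(n_params)                               # starting model
    step = 1e-3
    for it in range(500):
        d_syn = G @ m                                    # modeled data
        residual = d_syn - d_obs
        misfit = 0.5 * np.sum(residual ** 2)             # l2-norm objective
        grad = G.T @ residual                            # adjoint (back-propagation analogue)
        m -= step * grad                                 # gradient-descent update
    print(misfit, np.linalg.norm(m - m_true))            # both shrink toward zero
    ```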

  3. Constrained minimization of smooth functions using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Moerder, Daniel D.; Pamadi, Bandu N.

    1994-01-01

    The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
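
    A common way to let a genetic algorithm handle constraints, shown in the sketch below, is to fold the constraint into the fitness through a quadratic penalty; this is a generic toy illustration on a two-variable problem, not the necessary-conditions conversion or the aerospace-plane energy-state model of the paper.

    ```python
    import numpy as np

    # Toy constrained problem: minimize f(x, y) = (x - 1)^2 + (y - 2)^2
    # subject to g(x, y) = x + y - 2 = 0, handled by a quadratic penalty.
    def penalized_objective(pop, weight=100.0):
        x, y = pop[:, 0], pop[:, 1]
        f = (x - 1.0) ** 2 + (y - 2.0) ** 2
        g = x + y - 2.0
        return f + weight * g ** 2

    rng = np.random.default_rng(6)
    pop = rng.uniform(-5, 5, size=(200, 2))
    for gen in range(100):
        scores = penalized_objective(pop)
        parents = pop[np.argsort(scores)[:50]]                    # truncation selection
        mates = parents[rng.integers(0, 50, size=(200, 2))]       # random parent pairs
        alpha = rng.random((200, 1))
        children = alpha * mates[:, 0] + (1 - alpha) * mates[:, 1]   # blend crossover
        children += 0.1 * rng.standard_normal(children.shape)        # mutation
        pop = children
    best = pop[np.argmin(penalized_objective(pop))]
    print(best)   # should approach the constrained optimum near (0.5, 1.5)
    ```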

  4. Thermal performance of plate fin heat sink cooled by air slot impinging jet with different cross-sectional area

    NASA Astrophysics Data System (ADS)

    Mesalhy, O. M.; El-Sayed, Mostafa M.

    2015-06-01

    Flow and heat transfer characteristics of a plate-fin heat sink cooled by a rectangular impinging jet with different cross-sectional area were studied experimentally and numerically. The study concentrated on investigating the effect of jet width, fin numbers, and fin heights on thermal performance. The entropy generation minimization method was used to define the optimum design and operating conditions. It is found that the jet width that minimizes entropy generation changes with heat sink height and fin numbers.

  5. Pectoral Fascial (PECS) I and II Blocks as Rescue Analgesia in a Patient Undergoing Minimally Invasive Cardiac Surgery.

    PubMed

    Yalamuri, Suraj; Klinger, Rebecca Y; Bullock, W Michael; Glower, Donald D; Bottiger, Brandi A; Gadsden, Jeffrey C

    Patients undergoing minimally invasive cardiac surgery have the potential for significant pain from the thoracotomy site. We report the successful use of pectoral nerve block types I and II (Pecs I and II) as rescue analgesia in a patient undergoing minimally invasive mitral valve repair. In this case, a 78-year-old man, with no history of chronic pain, underwent mitral valve repair via right anterior thoracotomy for severe mitral regurgitation. After extubation, he complained of 10/10 pain at the incision site that was minimally responsive to intravenous opioids. He required supplemental oxygen because of poor pulmonary mechanics, with shallow breathing and splinting due to pain, and subsequent intensive care unit readmission. Ultrasound-guided Pecs I and II blocks were performed on the right side with 30 mL of 0.2% ropivacaine with 1:400,000 epinephrine. The blocks resulted in near-complete chest wall analgesia and improved pulmonary mechanics for approximately 24 hours. After the single-injection blocks regressed, a second set of blocks was performed with 266 mg of liposomal bupivacaine mixed with bupivacaine. This second set of blocks provided extended analgesia for an additional 48 hours. The patient was weaned rapidly from supplemental oxygen after the blocks because of improved analgesia. Pectoral nerve blocks have been described in the setting of breast surgery to provide chest wall analgesia. We report the first successful use of Pecs blocks to provide effective chest wall analgesia for a patient undergoing minimally invasive cardiac surgery with thoracotomy. We believe that these blocks may provide an important nonopioid option for the management of pain during recovery from minimally invasive cardiac surgery.

  6. Hierarchical Control Using Networks Trained with Higher-Level Forward Models

    PubMed Central

    Wayne, Greg; Abbott, L.F.

    2015-01-01

    We propose and develop a hierarchical approach to network control of complex tasks. In this approach, a low-level controller directs the activity of a “plant,” the system that performs the task. However, the low-level controller may only be able to solve fairly simple problems involving the plant. To accomplish more complex tasks, we introduce a higher-level controller that controls the lower-level controller. We use this system to direct an articulated truck to a specified location through an environment filled with static or moving obstacles. The final system consists of networks that have memorized associations between the sensory data they receive and the commands they issue. These networks are trained on a set of optimal associations that are generated by minimizing cost functions. Cost function minimization requires predicting the consequences of sequences of commands, which is achieved by constructing forward models, including a model of the lower-level controller. The forward models and cost minimization are only used during training, allowing the trained networks to respond rapidly. In general, the hierarchical approach can be extended to larger numbers of levels, dividing complex tasks into more manageable sub-tasks. The optimization procedure and the construction of the forward models and controllers can be performed in similar ways at each level of the hierarchy, which allows the system to be modified to perform other tasks, or to be extended for more complex tasks without retraining lower-levels. PMID:25058706

  7. Robotics-based synthesis of human motion.

    PubMed

    Khatib, O; Demircan, E; De Sapio, V; Sentis, L; Besier, T; Delp, S

    2009-01-01

    The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences, modeling of musculoskeletal kinematics, dynamics and actuation, and characterization of reliable performance criteria. Many of these processes have much in common with the problems found in robotics research. Task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion and (iii) new human performance metrics for dynamic characterization of athletic skills. Dynamic motion reconstruction is achieved through the control of a simulated human model to follow the captured marker trajectories in real-time. The operational space control and real-time simulation provide human dynamics at any configuration of the performance. A new criterion of muscular effort minimization has been introduced to analyze human static postures. Extensive motion capture experiments were conducted to validate the new minimization criterion. Finally, new human performance metrics were introduced to study an athletic skill in detail. These metrics include the effort expenditure and the feasible set of operational space accelerations during the performance of the skill. The dynamic characterization takes into account skeletal kinematics as well as muscle routing kinematics and force generating capacities. The developments draw upon an advanced musculoskeletal modeling platform and a task-oriented framework for the effective integration of biomechanics and robotics methods.

  8. Paving the COWpath: data-driven design of pediatric order sets

    PubMed Central

    Zhang, Yiye; Padman, Rema; Levin, James E

    2014-01-01

    Objective Evidence indicates that users incur significant physical and cognitive costs in the use of order sets, a core feature of computerized provider order entry systems. This paper develops data-driven approaches for automating the construction of order sets that match closely with user preferences and workflow while minimizing physical and cognitive workload. Materials and methods We developed and tested optimization-based models embedded with clustering techniques using physical and cognitive click cost criteria. By judiciously learning from users’ actual actions, our methods identify items for constituting order sets that are relevant according to historical ordering data and grouped on the basis of order similarity and ordering time. We evaluated performance of the methods using 47 099 orders from the year 2011 for asthma, appendectomy and pneumonia management in a pediatric inpatient setting. Results In comparison with existing order sets, those developed using the new approach significantly reduce the physical and cognitive workload associated with usage by 14–52%. This approach is also capable of accommodating variations in clinical conditions that affect order set usage and development. Discussion There is a critical need to investigate the cognitive complexity imposed on users by complex clinical information systems, and to design their features according to ‘human factors’ best practices. Optimizing order set generation using cognitive cost criteria introduces a new approach that can potentially improve ordering efficiency, reduce unintended variations in order placement, and enhance patient safety. Conclusions We demonstrate that data-driven methods offer a promising approach for designing order sets that are generalizable, data-driven, condition-based, and up to date with current best practices. PMID:24674844

  9. Baryonic effects in cosmic shear tomography: PCA parametrization and importance of extreme baryonic models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohammed, Irshad; Gnedin, Nickolay Y.

    Baryonic effects are amongst the most severe systematics to the tomographic analysis of weak lensing data, which is the principal probe in many future generations of cosmological surveys like LSST, Euclid, etc. Modeling or parameterizing these effects is essential in order to extract valuable constraints on cosmological parameters. In a recent paper, Eifler et al. (2015) suggested a reduction technique for baryonic effects by conducting a principal component analysis (PCA) and removing the largest baryonic eigenmodes from the data. In this article, we conducted the investigation further and addressed two critical aspects. Firstly, we performed the analysis by separating the simulations into training and test sets, computing a minimal set of principal components from the training set and examining the fits on the test set. We found that using only four parameters, corresponding to the four largest eigenmodes of the training set, the test sets can be fitted thoroughly with an RMS of approximately 0.0011. Secondly, we explored the significance of outliers, the most exotic/extreme baryonic scenarios, in this method. We found that excluding the outliers from the training set results in a relatively bad fit and degraded the RMS by nearly a factor of 3. Therefore, for a direct employment of this method to the tomographic analysis of the weak lensing data, the principal components should be derived from a training set that comprises adequately exotic but reasonable models such that the reality is included inside the parameter domain sampled by the training set. The baryonic effects can be parameterized as the coefficients of these principal components and should be marginalized over the cosmological parameter space.
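
    The reduction idea can be sketched in a few lines of numpy: compute principal components from a training set of model vectors and fit a held-out test vector using only the four largest eigenmodes. The data below are synthetic stand-ins for the baryonic power-spectrum suites, so the numbers are illustrative only.

    ```python
    import numpy as np

    # Synthetic stand-in for the baryonic-model suite: each row is a "model vector"
    # (e.g., ratios of modified to dark-matter-only power spectra over k-bins).
    rng = np.random.default_rng(7)
    n_bins = 40
    basis = rng.standard_normal((6, n_bins))                 # 6 hidden modes of variation
    train = rng.standard_normal((12, 6)) @ basis             # training "simulations"
    test = rng.standard_normal(6) @ basis                    # held-out model

    # Principal components of the training set (rows of vt, largest variance first).
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    pcs = vt[:4]                                             # keep the 4 largest eigenmodes

    # Least-squares fit of the test vector with 4 PC coefficients, and the residual RMS.
    coeffs, *_ = np.linalg.lstsq(pcs.T, test - mean, rcond=None)
    residual = test - mean - pcs.T @ coeffs
    print(np.sqrt(np.mean(residual ** 2)))
    ```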

  10. Minimal sufficient positive-operator valued measure on a separable Hilbert space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuramochi, Yui, E-mail: kuramochi.yui.22c@st.kyoto-u.ac.jp

    We introduce a concept of a minimal sufficient positive-operator valued measure (POVM), which is the least redundant POVM among the POVMs that have the equivalent information about the measured quantum system. Assuming the system Hilbert space to be separable, we show that for a given POVM, a sufficient statistic called a Lehmann-Scheffé-Bahadur statistic induces a minimal sufficient POVM. We also show that every POVM has an equivalent minimal sufficient POVM and that such a minimal sufficient POVM is unique up to relabeling neglecting null sets. We apply these results to discrete POVMs and information conservation conditions proposed by the author.

  11. High-voltage pulse generator developed for wide-gap spark chambers

    NASA Technical Reports Server (NTRS)

    Keller, L. P.; Walschon, E. G.

    1968-01-01

    Low-inductance, high-capacitance Marx pulse generator provides for minimization of internal inductance and suppression of external electromagnetic radiation. The spark gaps of the generator are enclosed in a pressurized nitrogen atmosphere which allows the charging voltage to be varied by changing the nitrogen pressure.

  12. Laser beam generating apparatus

    DOEpatents

    Warner, B.E.; Duncan, D.B.

    1993-12-28

    Laser beam generating apparatus including a septum segment disposed longitudinally within the tubular structure of the apparatus. The septum provides for radiatively dissipating heat buildup within the tubular structure and for generating relatively uniform laser beam pulses so as to minimize or eliminate radial pulse delays (the chevron effect). 11 figures.

  13. X-ray beam equalization for digital fluoroscopy

    NASA Astrophysics Data System (ADS)

    Molloi, Sabee Y.; Tang, Jerry; Marcin, Martin R.; Zhou, Yifang; Anvar, Behzad

    1996-04-01

    The concept of radiographic equalization has previously been investigated. However, a suitable technique for digital fluoroscopic applications has not been developed. The previously reported scanning equalization techniques cannot be applied to fluoroscopic applications due to their exposure time limitations. On the other hand, area beam equalization techniques are more suited for digital fluoroscopic applications. The purpose of this study is to develop an x-ray beam equalization technique for digital fluoroscopic applications that will produce an equalized radiograph with minimal image artifacts and tube loading. Preliminary unequalized images of a humanoid chest phantom were acquired using a digital fluoroscopic system. Using this preliminary image as a guide, an 8 by 8 array of square pistons was used to generate masks in a mold with CeO2. The CeO2 attenuator thicknesses were calculated using the gray level information from the unequalized image. The generated mask was positioned close to the focal spot (magnification of 8.0) in order to minimize edge artifacts from the mask. The masks were generated manually in order to investigate the piston and matrix size requirements. The development of an automated version of mask generation and positioning is in progress. The results of manual mask generation and positioning show that it is possible to generate equalized radiographs with minimal perceptible artifacts. The equalization of x-ray transmission across the field exiting from the object significantly improved the image quality by preserving local contrast throughout the image. Furthermore, the reduction in dynamic range significantly reduced the effect of x-ray scatter and veiling glare from high transmission to low transmission areas. Also, the x-ray tube loading due to the mask assembly itself was negligible. In conclusion, it is possible to produce area beam compensation that will be compatible with digital fluoroscopy with minimal compensation artifacts. The compensation process produces an image with equalized signal to noise ratio in all parts of the image.

  14. Youth Sports Clubs' Potential as Health-Promoting Setting: Profiles, Motives and Barriers

    ERIC Educational Resources Information Center

    Meganck, Jeroen; Scheerder, Jeroen; Thibaut, Erik; Seghers, Jan

    2015-01-01

    Setting and Objective: For decades, the World Health Organisation has promoted settings-based health promotion, but its application to leisure settings is minimal. Focusing on organised sports as an important leisure activity, the present study had three goals: exploring the health promotion profile of youth sports clubs, identifying objective…

  15. [siRNAs with high specificity to the target: a systematic design by CRM algorithm].

    PubMed

    Alsheddi, T; Vasin, L; Meduri, R; Randhawa, M; Glazko, G; Baranova, A

    2008-01-01

    'Off-target' silencing effect hinders the development of siRNA-based therapeutic and research applications. A common solution to this problem is to employ BLAST, which may miss significant alignments, or the exhaustive Smith-Waterman algorithm, which is very time-consuming. We have developed a Comprehensive Redundancy Minimizer (CRM) approach for mapping all unique sequences ("targets") 9-to-15 nt in size within large sets of sequences (e.g. transcriptomes). CRM outputs a list of potential siRNA candidates for every transcript of the particular species. These candidates could be further analyzed by traditional "set-of-rules" types of siRNA design tools. For human, 91% of transcripts are covered by candidate siRNAs with kernel targets of N = 15. We tested our approach on the collection of previously described experimentally assessed siRNAs and found that the correlation between efficacy and presence in CRM-approved set is significant (r = 0.215, p-value = 0.0001). An interactive database that contains a precompiled set of all human siRNA candidates with minimized redundancy is available at http://129.174.194.243. Application of the CRM-based filtering minimizes potential "off-target" silencing effects and could improve routine siRNA applications.
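
A hedged sketch of the redundancy-minimization idea follows: enumerate fixed-length targets across a small transcript set and keep those that occur in exactly one transcript. The sequences and the single target size are illustrative; the actual CRM tool maps all unique 9-to-15 nt targets across whole transcriptomes.

```python
# Hedged sketch: find fixed-length "targets" (k-mers) that occur in exactly
# one transcript of a set, i.e. candidates with minimal off-target redundancy
# within this set.  Sequences and k are illustrative toy values.
from collections import defaultdict

transcripts = {
    "tx1": "AUGGCUACGUACGAUCGUAGC",
    "tx2": "AUGGCAACGUUCGAUCCUAGC",
    "tx3": "GGGCCCAUAUGCGCGAUAUAA",
}
k = 9

owners = defaultdict(set)           # k-mer -> transcripts that contain it
for name, seq in transcripts.items():
    for i in range(len(seq) - k + 1):
        owners[seq[i:i + k]].add(name)

unique_targets = defaultdict(list)  # transcript -> its transcript-unique k-mers
for kmer, names in owners.items():
    if len(names) == 1:
        unique_targets[next(iter(names))].append(kmer)

for name, kmers in unique_targets.items():
    print(name, len(kmers), "unique 9-mers")
```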

  16. Artificial neural network classification using a minimal training set - Comparison to conventional supervised classification

    NASA Technical Reports Server (NTRS)

    Hepner, George F.; Logan, Thomas; Ritter, Niles; Bryant, Nevin

    1990-01-01

    Recent research has shown an artificial neural network (ANN) to be capable of pattern recognition and the classification of image data. This paper examines the potential for the application of neural network computing to satellite image processing. A second objective is to provide a preliminary comparison of conventional supervised classification and ANN classification. An artificial neural network can be trained to do land-cover classification of satellite imagery using selected sites representative of each class in a manner similar to conventional supervised classification. One of the major problems associated with recognition and classification of patterns from remotely sensed data is the time and cost of developing a set of training sites. This research compares the use of an ANN back propagation classification procedure with a conventional supervised maximum likelihood classification procedure using a minimal training set. When using a minimal training set, the neural network is able to provide a land-cover classification superior to the classification derived from the conventional classification procedure. This research is the foundation for developing application parameters for further prototyping of software and hardware implementations for artificial neural networks in satellite image and geographic information processing.

  17. Data identification for improving gene network inference using computational algebra.

    PubMed

    Dimitrova, Elena; Stigler, Brandilyn

    2014-11-01

    Identification of models of gene regulatory networks is sensitive to the amount of data used as input. Considering the substantial costs in conducting experiments, it is of value to have an estimate of the amount of data required to infer the network structure. To minimize wasted resources, it is also beneficial to know which data are necessary to identify the network. Knowledge of the data and knowledge of the terms in polynomial models are often required a priori in model identification. In applications, it is unlikely that the structure of a polynomial model will be known, which may force data sets to be unnecessarily large in order to identify a model. Furthermore, none of the known results provides any strategy for constructing data sets to uniquely identify a model. We provide a specialization of an existing criterion for deciding when a set of data points identifies a minimal polynomial model when its monomial terms have been specified. Then, we relax the requirement of the knowledge of the monomials and present results for model identification given only the data. Finally, we present a method for constructing data sets that identify minimal polynomial models.

  18. Edge guided image reconstruction in linear scan CT by weighted alternating direction TV minimization.

    PubMed

    Cai, Ailong; Wang, Linyuan; Zhang, Hanming; Yan, Bin; Li, Lei; Xi, Xiaoqi; Li, Jianxin

    2014-01-01

    Linear scan computed tomography (CT) is a promising imaging configuration with high scanning efficiency, but the data set is under-sampled and angularly limited, which makes high quality image reconstruction challenging. In this work, an edge guided total variation minimization reconstruction (EGTVM) algorithm is developed to deal with this problem. The proposed method combines total variation (TV) regularization with an iterative edge detection strategy. In the proposed method, the edge weights of intermediate reconstructions are incorporated into the TV objective function. The optimization is efficiently solved by applying the alternating direction method of multipliers. A prudent and conservative edge detection strategy proposed in this paper recovers the true edges while keeping errors within an acceptable range. Based on comparisons on both simulation studies and real CT data set reconstructions, EGTVM provides comparable or even better quality than non-edge-guided reconstruction and the adaptive steepest descent-projection onto convex sets method. With the utilization of weighted alternating direction TV minimization and edge detection, EGTVM achieves fast and robust convergence and reconstructs high quality images when applied in linear scan CT with under-sampled data sets.
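
The full EGTVM algorithm (weighted TV plus an alternating-direction solver on CT projections) is beyond a short example, but the role of the edge weights can be illustrated with a simplified stand-in: a 1-D, gradient-descent smoother that penalizes a weighted, smoothed TV term and down-weights the penalty where an edge has been detected. All values below are illustrative and this is not the authors' reconstruction method.

```python
# Hedged sketch: 1-D edge-weighted TV smoothing by plain gradient descent,
# a simplified stand-in for the paper's weighted alternating-direction TV
# minimization.  The edge weight is small where an edge was "detected",
# so the jump survives while flat regions are smoothed.
import numpy as np

rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(50), np.ones(50)])      # one sharp edge
noisy = truth + 0.1 * rng.standard_normal(truth.size)

weights = np.ones(truth.size - 1)                        # TV weight per difference
weights[48:52] = 0.05                                    # protect the detected edge

x = noisy.copy()
lam, eps, step = 0.5, 1e-2, 0.05
for _ in range(1000):
    d = np.diff(x)
    tv_grad = weights * d / np.sqrt(d * d + eps)         # grad of weighted smooth TV
    grad = x - noisy                                     # data-fidelity gradient
    grad[:-1] -= lam * tv_grad
    grad[1:] += lam * tv_grad
    x -= step * grad

print("mean abs error to truth:", np.abs(x - truth).mean())
```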

  19. Reconstruction of Microraptor and the evolution of iridescent plumage.

    PubMed

    Li, Quanguo; Gao, Ke-Qin; Meng, Qingjin; Clarke, Julia A; Shawkey, Matthew D; D'Alba, Liliana; Pei, Rui; Ellison, Mick; Norell, Mark A; Vinther, Jakob

    2012-03-09

    Iridescent feather colors involved in displays of many extant birds are produced by nanoscale arrays of melanin-containing organelles (melanosomes). Data relevant to the evolution of these colors and the properties of melanosomes involved in their generation have been limited. A data set sampling variables of extant avian melanosomes reveals that those forming most iridescent arrays are distinctly narrow. Quantitative comparison of these data with melanosome imprints densely sampled from a previously unknown specimen of the Early Cretaceous feathered Microraptor predicts that its plumage was predominantly iridescent. The capacity for simple iridescent arrays is thus minimally inferred in paravian dinosaurs. This finding and estimation of Microraptor feathering consistent with an ornamental function for the tail suggest a centrality for signaling in early evolution of plumage and feather color.

  20. Loss-resistant unambiguous phase measurement

    NASA Astrophysics Data System (ADS)

    Dinani, Hossein T.; Berry, Dominic W.

    2014-08-01

    Entangled multiphoton states have the potential to provide improved measurement accuracy, but are sensitive to photon loss. It is possible to calculate ideal loss-resistant states that maximize the Fisher information, but it is unclear how these could be experimentally generated. Here we propose a set of states that can be obtained by processing the output from parametric down-conversion. Although these states are not optimal, they provide performance very close to that of optimal states for a range of parameters. Moreover, we show how to use sequences of such states in order to obtain an unambiguous phase measurement that beats the standard quantum limit. We consider the optimization of parameters in order to minimize the final phase variance, and find that the optimum parameters are different from those that maximize the Fisher information.

  1. Using Deep Learning to Analyze the Voices of Stars.

    NASA Astrophysics Data System (ADS)

    Boudreaux, Thomas Macaulay

    2018-01-01

    With several new large-scale surveys on the horizon, including LSST, TESS, ZTF, and Evryscope, faster and more accurate analysis methods will be required to adequately process the enormous amount of data produced. Deep learning, used in industry for years now, allows for advanced feature detection in minimally prepared datasets at very high speeds; however, despite the advantages of this method, its application to astrophysics has not yet been extensively explored. This dearth may be due to a lack of training data available to researchers. Here we generate synthetic data loosely mimicking the properties of acoustic mode pulsating stars and compare the performance of different deep learning algorithms, including Artificial Neural Networks and Convolutional Neural Networks, in classifying these synthetic data sets as either pulsators or stars not observed to vary.

  2. Real-time ultrasound transducer localization in fluoroscopy images by transfer learning from synthetic training data.

    PubMed

    Heimann, Tobias; Mountney, Peter; John, Matthias; Ionasec, Razvan

    2014-12-01

    The fusion of image data from trans-esophageal echography (TEE) and X-ray fluoroscopy is attracting increasing interest in minimally-invasive treatment of structural heart disease. In order to calculate the needed transformation between both imaging systems, we employ a discriminative learning (DL) based approach to localize the TEE transducer in X-ray images. The successful application of DL methods is strongly dependent on the available training data, which entails three challenges: (1) the transducer can move with six degrees of freedom meaning it requires a large number of images to represent its appearance, (2) manual labeling is time consuming, and (3) manual labeling has inherent errors. This paper proposes to generate the required training data automatically from a single volumetric image of the transducer. In order to adapt this system to real X-ray data, we use unlabeled fluoroscopy images to estimate differences in feature space density and correct covariate shift by instance weighting. Two approaches for instance weighting, probabilistic classification and Kullback-Leibler importance estimation (KLIEP), are evaluated for different stages of the proposed DL pipeline. An analysis on more than 1900 images reveals that our approach reduces detection failures from 7.3% in cross validation on the test set to zero and improves the localization error from 1.5 to 0.8 mm. Due to the automatic generation of training data, the proposed system is highly flexible and can be adapted to any medical device with minimal effort. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. A low cost solution for post-biopsy complications using available RFA generator and coaxial core biopsy needle.

    PubMed

    Azlan, C A; Mohd Nasir, N F; Saifizul, A A; Faizul, M S; Ng, K H; Abdullah, B J J

    2007-12-01

    Percutaneous image-guided needle biopsy is typically performed in highly vascular organs or in tumours with rich macroscopic and microscopic blood supply. The main risks related to this procedure are haemorrhage and implantation of tumour cells in the needle tract after the biopsy needle is withdrawn. Numerous studies have found that heating the needle tract using alternating current in the radiofrequency (RF) range has the potential to minimize these effects. However, this solution requires the use of specially designed needles, which would make the procedure relatively expensive and complicated. Thus, we propose a simple solution by using readily available coaxial core biopsy needles connected to a radiofrequency ablation (RFA) generator. In order to do so, we have designed and developed an adapter to interface between these two devices. For evaluation purposes, we used a bovine liver as a sample tissue. The experimental procedure was done to study the effect of different parameter settings on the size of coagulation necrosis caused by the RF current heating on the subject. The delivery of the RF energy was varied by changing the values for delivered power, power delivery duration, and insertion depth. The results showed that the size of the coagulation necrosis is affected by all of the parameters tested. In general, the size of the region is enlarged with higher delivery of RF power, longer duration of power delivery, and shallower needle insertion, and becomes relatively constant beyond a certain value. We also found that the proposed solution provides a low cost and practical way to minimize unwanted post-biopsy effects.

  4. Nondimensional parameter for conformal grinding: combining machine and process parameters

    NASA Astrophysics Data System (ADS)

    Funkenbusch, Paul D.; Takahashi, Toshio; Gracewski, Sheryl M.; Ruckman, Jeffrey L.

    1999-11-01

    Conformal grinding of optical materials with CNC (Computer Numerical Control) machining equipment can be used to achieve precise control over complex part configurations. However, complications can arise due to the need to fabricate complex geometrical shapes at reasonable production rates. For example, high machine stiffness is essential, but the need to grind 'inside' small or highly concave surfaces may require use of tooling with less than ideal stiffness characteristics. If grinding generates loads sufficient for significant tool deflection, the programmed removal depth will not be achieved. Moreover, since grinding load is a function of the volumetric removal rate, the amount of load deflection can vary with location on the part, potentially producing complex figure errors. In addition to machine/tool stiffness and removal rate, load generation is a function of the process parameters. For example, by reducing the feed rate of the tool into the part, both the load and resultant deflection/removal error can be decreased. However, this must be balanced against the need for part throughput. In this paper, a simple model that combines machine stiffness and process parameters into a single non-dimensional parameter is adapted for a conformal grinding geometry. Errors in removal can be minimized by maintaining this parameter above a critical value. Moreover, since the value of this parameter depends on the local part geometry, it can be used to optimize process settings during grinding. For example, it may be used to guide adjustment of the feed rate as a function of location on the part to eliminate figure errors while minimizing the total grinding time required.

  5. Recovery and recycling practices in municipal solid waste management in Lagos, Nigeria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kofoworola, O.F.

    The population of Lagos, the largest city in Nigeria, increased seven times from 1950 to 1980 with a current population of over 10 million inhabitants. The majority of the city's residents are poor. The residents make a heavy demand on resources and, at the same time, generate large quantities of solid waste. Approximately 4 million tonnes of municipal solid waste (MSW) is generated annually in the city, including approximately 0.5 million of untreated industrial waste. This is approximately 1.1 kg/cap/day. Efforts by the various waste management agencies set up by the state government to keep its streets and neighborhoods clean have achieved only minimal success. This is because more than half of these wastes are left uncollected from the streets and the various locations due to the inadequacy and inefficiency of the waste management system. Whilst the benefits of proper solid waste management (SWM), such as increased revenues for municipal bodies, higher productivity rate, improved sanitation standards and better health conditions, cannot be overemphasized, it is important that there is a reduction in the quantity of recoverable materials in residential and commercial waste streams to minimize the problem of MSW disposal. This paper examines the status of recovery and recycling in current waste management practice in Lagos, Nigeria. Existing recovery and recycling patterns, recovery and recycling technologies, approaches to materials recycling, and the types of materials recovered from MSW are reviewed. Based on these, strategies for improving recovery and recycling practices in the management of MSW in Lagos, Nigeria are suggested.

  6. Computer numerically controlled (CNC) aspheric shaping with toroidal Wheels (Abstract Only)

    NASA Astrophysics Data System (ADS)

    Ketelsen, D.; Kittrell, W. C.; Kuhn, W. M.; Parks, R. E.; Lamb, George L.; Baker, Lynn

    1987-01-01

    Contouring with computer numerically controlled (CNC) machines can be accomplished with several different tool geometries and coordinated machine axes. To minimize the number of coordinated axes for nonsymmetric work to three, it is common practice to use a spherically shaped tool such as a ball-end mill. However, to minimize grooving due to the feed and ball radius, it is desirable to use a long ball radius, but there is clearly a practical limit to ball diameter with the spherical tool. We have found that the use of commercially available toroidal wheels permits long effective cutting radii, which in turn improve finish and minimize grooving for a set feed. In addition, toroidal wheels are easier than spherical wheels to center accurately. Cutting parameters are also easier to control because the feed rate past the tool does not change as the slope of the work changes. The drawback to the toroidal wheel is the more complex calculation of the tool path. Of course, once the algorithm is worked out, the tool path is as easily calculated as for a spherical tool. We have performed two experiments with the Large Optical Generator (LOG) that were ideally suited to three-axis contouring--surfaces that have no axis of rotational symmetry. By oscillating the cutting head horizontally or vertically (in addition to the motions required to generate the power of the surface), and carefully coordinating those motions with table rotation, the mostly astigmatic departure for these surfaces is produced. The first experiment was a pair of reflector molds that together correct the spherical aberration of the Arecibo radio telescope. The larger of these was 5 m in diameter and had a 12 cm departure from the best-fit sphere. The second experiment was the generation of a purely astigmatic surface to demonstrate the feasibility of producing axially symmetric aspherics while mounted and rotated about any off-axis point. Measurements of the latter (the first experiment had relatively loose tolerances) indicate an accuracy only 3 or 4 times that achieved by conventional two-axis contouring (10 μm as opposed to 3 μm rms). The successful completion of these projects demonstrates the successful application of three-axis contouring with the LOG. Toroidal cutters have also solved many of the drawbacks of spherical wheels. Work remains to be done in improving machine response and decreasing the contribution of backlash errors.

  7. Fast graph-based relaxed clustering for large data sets using minimal enclosing ball.

    PubMed

    Qian, Pengjiang; Chung, Fu-Lai; Wang, Shitong; Deng, Zhaohong

    2012-06-01

    Although graph-based relaxed clustering (GRC) is one of the spectral clustering algorithms with straightforwardness and self-adaptability, it is sensitive to the parameters of the adopted similarity measure and also has high time complexity O(N^3), which severely weakens its usefulness for large data sets. In order to overcome these shortcomings, after introducing certain constraints for GRC, an enhanced version of GRC [constrained GRC (CGRC)] is proposed to increase the robustness of GRC to the parameters of the adopted similarity measure, and accordingly, a novel algorithm called fast GRC (FGRC) based on CGRC is developed in this paper by using the core-set-based minimal enclosing ball approximation. A distinctive advantage of FGRC is that its asymptotic time complexity is linear with the data set size N. At the same time, FGRC also inherits the straightforwardness and self-adaptability from GRC, making the proposed FGRC a fast and effective clustering algorithm for large data sets. The advantages of FGRC are validated by various benchmarking and real data sets.
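
The core-set device that gives FGRC its linear scaling is the approximate minimal enclosing ball. A minimal sketch of the classic Badoiu-Clarkson iteration is given below, assuming random 2-D data; it is only the MEB building block, not the clustering algorithm itself.

```python
# Hedged sketch: (1+eps)-approximate minimal enclosing ball via the
# Badoiu-Clarkson iteration, which repeatedly steps the center toward the
# farthest point with a shrinking step size.  Data are random and illustrative.
import numpy as np

def approx_meb(points, iterations=200):
    """Approximate MEB center: step toward the farthest point with step 1/(t+1)."""
    center = points[0].astype(float).copy()
    for t in range(1, iterations + 1):
        dists = np.linalg.norm(points - center, axis=1)
        farthest = points[int(np.argmax(dists))]
        center += (farthest - center) / (t + 1)
    radius = np.linalg.norm(points - center, axis=1).max()
    return center, radius

rng = np.random.default_rng(1)
pts = rng.standard_normal((1000, 2))
c, r = approx_meb(pts)
print("approximate MEB center", c, "radius", round(float(r), 3))
```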

  8. Global dynamics of the Escherichia coli proteome and phosphoproteome during growth in minimal medium.

    PubMed

    Soares, Nelson C; Spät, Philipp; Krug, Karsten; Macek, Boris

    2013-06-07

    Recent phosphoproteomics studies have generated relatively large data sets of bacterial proteins phosphorylated on serine, threonine, and tyrosine, implicating this type of phosphorylation in the regulation of vital processes of a bacterial cell; however, most phosphoproteomics studies in bacteria were so far qualitative. Here we applied stable isotope labeling by amino acids in cell culture (SILAC) to perform a quantitative analysis of proteome and phosphoproteome dynamics of Escherichia coli during five distinct phases of growth in the minimal medium. Combining two triple-SILAC experiments, we detected a total of 2118 proteins and quantified relative dynamics of 1984 proteins in all measured phases of growth, including 570 proteins associated with cell wall and membrane. In the phosphoproteomic experiment, we detected 150 Ser/Thr/Tyr phosphorylation events, of which 108 were localized to a specific amino acid residue and 76 were quantified in all phases of growth. Clustering analysis of SILAC ratios revealed distinct sets of coregulated proteins for each analyzed phase of growth and overrepresentation of membrane proteins in transition between exponential and stationary phases. The proteomics data indicated that proteins related to stress response typically associated with the stationary phase, including RpoS-dependent proteins, had increasing levels already during earlier phases of growth. Application of SILAC enabled us to measure median occupancies of phosphorylation sites, which were generally low (<12%). Interestingly, the phosphoproteome analysis showed a global increase of protein phosphorylation levels in the late stationary phase, pointing to a likely role of this modification in later phases of growth.

  9. Data splitting for artificial neural networks using SOM-based stratified sampling.

    PubMed

    May, R J; Maier, H R; Dandy, G C

    2010-03-01

    Data splitting is an important consideration during artificial neural network (ANN) development where hold-out cross-validation is commonly employed to ensure generalization. Even for a moderate sample size, the sampling methodology used for data splitting can have a significant effect on the quality of the subsets used for training, testing and validating an ANN. Poor data splitting can result in inaccurate and highly variable model performance; however, the choice of sampling methodology is rarely given due consideration by ANN modellers. Increased confidence in the sampling is of paramount importance, since the hold-out sampling is generally performed only once during ANN development. This paper considers the variability in the quality of subsets that are obtained using different data splitting approaches. A novel approach to stratified sampling, based on Neyman sampling of the self-organizing map (SOM), is developed, with several guidelines identified for setting the SOM size and sample allocation in order to minimize the bias and variance in the datasets. Using an example ANN function approximation task, the SOM-based approach is evaluated in comparison to random sampling, DUPLEX, systematic stratified sampling, and trial-and-error sampling to minimize the statistical differences between data sets. Of these approaches, DUPLEX is found to provide benchmark performance with good model performance, with no variability. The results show that the SOM-based approach also reliably generates high-quality samples and can therefore be used with greater confidence than other approaches, especially in the case of non-uniform datasets, with the benefit of scalability to perform data splitting on large datasets. Copyright 2009 Elsevier Ltd. All rights reserved.
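
The SOM construction itself is not reproduced here, but the Neyman-allocation step that the guidelines refer to can be sketched: given strata (e.g. SOM map units) with known sizes and within-stratum standard deviations, a fixed sampling budget is split in proportion to size times variability. The strata statistics below are illustrative assumptions.

```python
# Hedged sketch of Neyman allocation for stratified sampling: stratum h
# receives samples in proportion to N_h * s_h, then integer rounding keeps
# the total budget fixed.  Sizes and standard deviations are illustrative.
import numpy as np

def neyman_allocation(sizes, stds, total_samples):
    weights = np.asarray(sizes, float) * np.asarray(stds, float)
    exact = total_samples * weights / weights.sum()
    alloc = np.floor(exact).astype(int)
    remainder = int(total_samples - alloc.sum())
    order = np.argsort(-(exact - alloc))        # largest fractional parts first
    alloc[order[:remainder]] += 1
    return alloc

strata_sizes = [120, 40, 200, 80]               # records mapped to each SOM unit
strata_stds = [0.8, 1.5, 0.3, 1.0]              # output variability per unit
print(neyman_allocation(strata_sizes, strata_stds, total_samples=60))
```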

  10. Laser beam generating apparatus

    DOEpatents

    Warner, B.E.; Duncan, D.B.

    1994-02-15

    Laser beam generating apparatus including a septum segment disposed longitudinally within the tubular structure of the apparatus is described. The septum provides for radiatively dissipating heat buildup within the tubular structure and for generating relatively uniform laser beam pulses so as to minimize or eliminate radial pulse delays (the chevron effect). 7 figures.

  11. Ultrafast treatment plan optimization for volumetric modulated arc therapy (VMAT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Men Chunhua; Romeijn, H. Edwin; Jia Xun

    2010-11-15

    Purpose: To develop a novel aperture-based algorithm for volumetric modulated arc therapy (VMAT) treatment plan optimization with high quality and high efficiency. Methods: The VMAT optimization problem is formulated as a large-scale convex programming problem solved by a column generation approach. The authors consider a cost function consisting of two terms, the first enforcing a desired dose distribution and the second guaranteeing a smooth dose rate variation between successive gantry angles. A gantry rotation is discretized into 180 beam angles and for each beam angle, only one MLC aperture is allowed. The apertures are generated one by one in a sequential way. At each iteration of the column generation method, a deliverable MLC aperture is generated for one of the unoccupied beam angles by solving a subproblem with the consideration of MLC mechanic constraints. A subsequent master problem is then solved to determine the dose rate at all currently generated apertures by minimizing the cost function. When all 180 beam angles are occupied, the optimization completes, yielding a set of deliverable apertures and associated dose rates that produce a high quality plan. Results: The algorithm was preliminarily tested on five prostate and five head-and-neck clinical cases, each with one full gantry rotation without any couch/collimator rotations. High quality VMAT plans have been generated for all ten cases with extremely high efficiency. It takes only 5-8 min on CPU (MATLAB code on an Intel Xeon 2.27 GHz CPU) and 18-31 s on GPU (CUDA code on an NVIDIA Tesla C1060 GPU card) to generate such plans. Conclusions: The authors have developed an aperture-based VMAT optimization algorithm which can generate clinically deliverable high quality treatment plans at very high efficiency.

  12. Ultrafast treatment plan optimization for volumetric modulated arc therapy (VMAT).

    PubMed

    Men, Chunhua; Romeijn, H Edwin; Jia, Xun; Jiang, Steve B

    2010-11-01

    To develop a novel aperture-based algorithm for volumetric modulated arc therapy (VMAT) treatment plan optimization with high quality and high efficiency. The VMAT optimization problem is formulated as a large-scale convex programming problem solved by a column generation approach. The authors consider a cost function consisting of two terms, the first enforcing a desired dose distribution and the second guaranteeing a smooth dose rate variation between successive gantry angles. A gantry rotation is discretized into 180 beam angles and for each beam angle, only one MLC aperture is allowed. The apertures are generated one by one in a sequential way. At each iteration of the column generation method, a deliverable MLC aperture is generated for one of the unoccupied beam angles by solving a subproblem with the consideration of MLC mechanic constraints. A subsequent master problem is then solved to determine the dose rate at all currently generated apertures by minimizing the cost function. When all 180 beam angles are occupied, the optimization completes, yielding a set of deliverable apertures and associated dose rates that produce a high quality plan. The algorithm was preliminarily tested on five prostate and five head-and-neck clinical cases, each with one full gantry rotation without any couch/collimator rotations. High quality VMAT plans have been generated for all ten cases with extremely high efficiency. It takes only 5-8 min on CPU (MATLAB code on an Intel Xeon 2.27 GHz CPU) and 18-31 s on GPU (CUDA code on an NVIDIA Tesla C1060 GPU card) to generate such plans. The authors have developed an aperture-based VMAT optimization algorithm which can generate clinically deliverable high quality treatment plans at very high efficiency.

  13. POLLUTION PREVENTION ASSESSMENT FOR A MANUFACTURER OF LOCKING DEVICES (EPA/600/S-95/013)

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at selected u...

  14. ENVIRONMENTAL RESEARCH BRIEF: POLLUTION PREVENTION ASSESSMENT FOR A MANUFACTURER OF FOOD SERVICE EQUIPMENT

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. n an effort to assist these manufacturers Waste Minimization Assessment Cent...

  15. POLLUTION PREVENTION ASSESSMENT FOR A MANUFACTURER OF BOURBON WHISKEY (EPA/600/S-95/010

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at selected u...

  16. POLLUTION PREVENTION ASSESSMENT FOR A MANUFACTURER OF POWER SUPPLIES (EPA/600/S-95/025)

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at selected u...

  17. POLLUTION PREVENTION ASSESSMENT FOR A MANUFACTURER OF METAL FASTENERS (EPA/600/S-95/016)

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. Waste Minimization Assessment Centers (WMACs) were established at selected u...

  18. ENVIRONMENTAL RESEARCH BRIEF: POLLUTION PREVENTION FOR A MANUFACTURER OF METAL FASTENERS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. n an effort to assist these manufacturers Waste Minimization Assessment Cent...

  19. ENVIRONMENTAL RESEARCH BRIEF: POLLUTION PREVENTION ASSESSMENT FOR A MANUFACTURER OF REBUILT INDUSTRIAL CRANKSHAFTS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) has funded a pilot project to assist small and medium-size manufacturers who want to minimize their generation of waste but who lack the expertise to do so. n an effort to assist these manufacturers Waste Minimization Assessment Cent...

  20. Process Waste Assessment Machine and Fabrication Shop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, N.M.

    1993-03-01

    This Process Waste Assessment was conducted to evaluate hazardous wastes generated in the Machine and Fabrication Shop at Sandia National Laboratories, Building 913, Room 119. Spent machine coolant is the major hazardous chemical waste generated in this facility. The volume of spent coolant generated is approximately 150 gallons/month. It is sent off-site to a recycler, but a reclaiming system for on-site use is being investigated. The Shop's line management considers hazardous waste minimization very important. A number of steps have already been taken to minimize wastes, including replacement of a hazardous solvent with a biodegradable, non-caustic solution and a filtration unit; waste segregation; restriction of beryllium-copper alloy machining; and reduction of lead usage.

  1. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    PubMed

    Almutairy, Meznah; Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
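
A hedged sketch of the two sampling schemes being compared: fixed sampling keeps every s-th k-mer start position, while minimizer sampling keeps the smallest k-mer in each window of w consecutive k-mers. The sequence and the parameter values are toy choices for illustration.

```python
# Hedged sketch contrasting fixed sampling and minimizer sampling of k-mers.
# Fixed sampling: keep k-mers starting at every s-th position.
# Minimizer sampling: keep the lexicographically smallest k-mer in every
# window of w consecutive k-mer positions.  All values are illustrative.
def fixed_sample(seq, k, s):
    return sorted({(i, seq[i:i + k]) for i in range(0, len(seq) - k + 1, s)})

def minimizer_sample(seq, k, w):
    picked = set()
    for start in range(len(seq) - k - w + 2):
        window = [(seq[i:i + k], i) for i in range(start, start + w)]
        kmer, pos = min(window)                 # smallest k-mer wins the window
        picked.add((pos, kmer))
    return sorted(picked)

seq = "ACGTTGCATGTCGCATGATGCATGAGAGCT"
print("fixed:     ", fixed_sample(seq, k=5, s=4))
print("minimizers:", minimizer_sample(seq, k=5, w=4))
```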

  2. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches

    PubMed Central

    Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method. PMID:29389989

  3. Optimization of active distribution networks: Design and analysis of significative case studies for enabling control actions of real infrastructure

    NASA Astrophysics Data System (ADS)

    Moneta, Diana; Mora, Paolo; Viganò, Giacomo; Alimonti, Gianluca

    2014-12-01

    The diffusion of Distributed Generation (DG) based on Renewable Energy Sources (RES) requires new strategies to ensure reliable and economic operation of the distribution networks and to support the diffusion of DG itself. An advanced algorithm (DISCoVER - DIStribution Company VoltagE Regulator) is being developed to optimize the operation of active network by means of an advanced voltage control based on several regulations. Starting from forecasted load and generation, real on-field measurements, technical constraints and costs for each resource, the algorithm generates for each time period a set of commands for controllable resources that guarantees achievement of technical goals minimizing the overall cost. Before integrating the controller into the telecontrol system of the real networks, and in order to validate the proper behaviour of the algorithm and to identify possible critical conditions, a complete simulation phase has started. The first step is concerning the definition of a wide range of "case studies", that are the combination of network topology, technical constraints and targets, load and generation profiles and "costs" of resources that define a valid context to test the algorithm, with particular focus on battery and RES management. First results achieved from simulation activity on test networks (based on real MV grids) and actual battery characteristics are given, together with prospective performance on real case applications.

  4. Gap-minimal systems of notations and the constructible hierarchy

    NASA Technical Reports Server (NTRS)

    Lucian, M. L.

    1972-01-01

    If a constructibly countable ordinal alpha is a gap ordinal, then the order type of the set of index ordinals smaller than alpha is exactly alpha. The gap ordinals are the only points of discontinuity of a certain ordinal-valued function. The notion of gap minimality for well ordered systems of notations is defined, and the existence of gap-minimal systems of notations of arbitrarily large constructibly countable length is established.

  5. Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information

    PubMed Central

    Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing

    2016-01-01

    Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n^2). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft’s algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms. PMID:27806102
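
A minimal sketch of the two-phase idea follows, under the assumption that "backward depth" means the shortest distance from a state to an accepting state: equivalent states accept the same language and therefore share this depth, so it yields a valid coarse partition, which is then refined by transition signatures (a plain dictionary stands in for the paper's hash table). The example DFA is illustrative.

```python
# Hedged sketch: pre-partition DFA states by backward depth (shortest
# distance to an accepting state, via BFS over reversed transitions),
# then refine blocks by transition signatures until stable.
from collections import deque

states = range(6)
alphabet = "ab"
accepting = {3, 5}
delta = {                       # (state, symbol) -> next state
    (0, "a"): 1, (0, "b"): 2,
    (1, "a"): 3, (1, "b"): 4,
    (2, "a"): 5, (2, "b"): 4,
    (3, "a"): 3, (3, "b"): 3,
    (4, "a"): 3, (4, "b"): 5,
    (5, "a"): 5, (5, "b"): 5,
}

# Phase 1: backward depth by BFS from accepting states over reversed edges.
rev = {s: set() for s in states}
for (s, a), t in delta.items():
    rev[t].add(s)
depth = {s: 0 for s in accepting}
queue = deque(accepting)
while queue:
    t = queue.popleft()
    for s in rev[t]:
        if s not in depth:
            depth[s] = depth[t] + 1
            queue.append(s)

# Coarse partition: block id = backward depth (-1 for states that never accept).
block = {s: depth.get(s, -1) for s in states}

# Phase 2: Moore-style refinement by (own block, successor blocks) signatures.
while True:
    signature = {s: (block[s],) + tuple(block[delta[s, a]] for a in alphabet)
                 for s in states}
    new_ids = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
    new_block = {s: new_ids[signature[s]] for s in states}
    if new_block == block:
        break
    block = new_block

print("minimal DFA state classes:", block)
```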

  6. Universal behavior of generalized causal set d’Alembertians in curved spacetime

    NASA Astrophysics Data System (ADS)

    Belenchia, Alessio

    2016-07-01

    Causal set non-local wave operators allow both for the definition of an action for causal set theory and the study of deviations from local physics that may have interesting phenomenological consequences. It was previously shown that, in all dimensions, the (unique) minimal discrete operators give averaged continuum non-local operators that reduce to □ - R/2 in the local limit. Recently, dropping the constraint of minimality, it was shown that there exist an infinite number of discrete operators satisfying basic physical requirements and with the right local limit in flat spacetime. In this work, we consider this entire class of generalized causal set d’Alembertians in curved spacetimes and extend to them the result about the universality of the -R/2 factor. Finally, we comment on the relation of this result to the Einstein equivalence principle.

  7. Advanced Design of Dumbbell-shaped Genetic Minimal Vectors Improves Non-coding and Coding RNA Expression.

    PubMed

    Jiang, Xiaoou; Yu, Han; Teo, Cui Rong; Tan, Genim Siu Xian; Goh, Sok Chin; Patel, Parasvi; Chua, Yiqiang Kevin; Hameed, Nasirah Banu Sahul; Bertoletti, Antonio; Patzel, Volker

    2016-09-01

    Dumbbell-shaped DNA minimal vectors lacking nontherapeutic genes and bacterial sequences are considered a stable, safe alternative to viral, nonviral, and naked plasmid-based gene-transfer systems. We investigated novel molecular features of dumbbell vectors aiming to reduce vector size and to improve the expression of noncoding or coding RNA. We minimized small hairpin RNA (shRNA) or microRNA (miRNA) expressing dumbbell vectors in size down to 130 bp generating the smallest genetic expression vectors reported. This was achieved by using a minimal H1 promoter with integrated transcriptional terminator transcribing the RNA hairpin structure around the dumbbell loop. Such vectors were generated with high conversion yields using a novel protocol. Minimized shRNA-expressing dumbbells showed accelerated kinetics of delivery and transcription leading to enhanced gene silencing in human tissue culture cells. In primary human T cells, minimized miRNA-expressing dumbbells revealed higher stability and triggered stronger target gene suppression as compared with plasmids and miRNA mimics. Dumbbell-driven gene expression was enhanced up to 56- or 160-fold by implementation of an intron and the SV40 enhancer compared with control dumbbells or plasmids. Advanced dumbbell vectors may represent one option to close the gap between durable expression that is achievable with integrating viral vectors and short-term effects triggered by naked RNA.

  8. Lake Wobegon Dice

    ERIC Educational Resources Information Center

    Moraleda, Jorge; Stork, David G.

    2012-01-01

    We introduce Lake Wobegon dice, where each die is "better than the set average." Specifically, these dice have the paradoxical property that on every roll, each die is more likely to roll greater than the set average on the roll, than less than this set average. We also show how to construct minimal optimal Lake Wobegon sets for all "n" [greater…

  9. a Method for the Registration of Hemispherical Photographs and Tls Intensity Images

    NASA Astrophysics Data System (ADS)

    Schmidt, A.; Schilling, A.; Maas, H.-G.

    2012-07-01

    Terrestrial laser scanners generate dense and accurate 3D point clouds with minimal effort, which represent the geometry of real objects, while image data contains texture information of object surfaces. Based on the complementary characteristics of both data sets, a combination is very appealing for many applications, including forest-related tasks. In the scope of our research project, independent data sets of a plain birch stand have been taken by a full-spherical laser scanner and a hemispherical digital camera. Previously, both kinds of data sets have been considered separately: Individual trees were successfully extracted from large 3D point clouds, and so-called forest inventory parameters could be determined. Additionally, a simplified tree topology representation was retrieved. From hemispherical images, leaf area index (LAI) values, as a very relevant parameter for describing a stand, have been computed. The objective of our approach is to merge a 3D point cloud with image data in a way that RGB values are assigned to each 3D point. So far, segmentation and classification of TLS point clouds in forestry applications was mainly based on geometrical aspects of the data set. However, a 3D point cloud with colour information provides valuable cues exceeding simple statistical evaluation of geometrical object features and thus may facilitate the analysis of the scan data significantly.

  10. Space communications scheduler: A rule-based approach to adaptive deadline scheduling

    NASA Technical Reports Server (NTRS)

    Straguzzi, Nicholas

    1990-01-01

    Job scheduling is a deceptively complex subfield of computer science. The highly combinatorial nature of the problem, which is NP-complete in nearly all cases, requires a scheduling program to intelligently traverse an immense search tree to create the best possible schedule in a minimal amount of time. In addition, the program must continually make adjustments to the initial schedule when faced with last-minute user requests, cancellations, unexpected device failures, etc. A good scheduler must be quick, flexible, and efficient, even at the expense of generating slightly less-than-optimal schedules. The Space Communication Scheduler (SCS) is an intelligent rule-based scheduling system. SCS is an adaptive deadline scheduler which allocates modular communications resources to meet an ordered set of user-specified job requests on board the NASA Space Station. SCS uses pattern matching techniques to detect potential conflicts through algorithmic and heuristic means. As a result, the system generates and maintains high density schedules without relying heavily on backtracking or blind search techniques. SCS is suitable for many common real-world applications.
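
SCS's rule base is not reproduced here, so the sketch below shows only the generic deadline-scheduling core such a system builds on: requests are taken in earliest-deadline-first order and greedily assigned to the first free resource, with requests that cannot meet their deadline rejected. The request data and the resource count are illustrative assumptions.

```python
# Hedged sketch: greedy earliest-deadline-first assignment of requests to
# identical communication resources; a stand-in for the scheduling core,
# not SCS's rule-based heuristics.  All values are illustrative.
requests = [
    # (name, duration, deadline) in arbitrary time units
    ("telemetry-A", 3, 6),
    ("payload-B",   4, 7),
    ("voice-C",     2, 4),
    ("payload-D",   5, 14),
]
resources_free_at = [0, 0]            # two identical communication channels

scheduled, rejected = [], []
for name, duration, deadline in sorted(requests, key=lambda r: r[2]):
    idx = min(range(len(resources_free_at)), key=resources_free_at.__getitem__)
    start = resources_free_at[idx]
    if start + duration <= deadline:
        resources_free_at[idx] = start + duration
        scheduled.append((name, idx, start, start + duration))
    else:
        rejected.append(name)

print("scheduled:", scheduled)
print("rejected: ", rejected)
```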

  11. Assimilation of Sea Color Data Into A Three Dimensional Biogeochemical Model: Sensitivity Experiments

    NASA Astrophysics Data System (ADS)

    Echevin, V.; Levy, M.; Memery, L.

    The assimilation of two-dimensional sea color data fields into a three-dimensional coupled dynamical-biogeochemical model is performed using a 4DVAR algorithm. The biogeochemical model includes description of nitrates, ammonium, phytoplankton, zooplankton, detritus and dissolved organic matter. A subset of the biogeochemical model's poorly known parameters (for example, phytoplankton growth, mortality, grazing) are optimized by minimizing a cost function measuring the misfit between the observations and the model trajectory. Twin experiments are performed with an eddy resolving model of 5 km resolution in an academic configuration. Starting from oligotrophic conditions, an initially unstable baroclinic anticyclone splits into several eddies. Strong vertical velocities advect nitrates into the euphotic zone and generate a phytoplankton bloom. Biogeochemical parameters are perturbed to generate surface pseudo-observations of chlorophyll, which are assimilated in the model in order to retrieve the correct parameter perturbations. The impact of the type of measurement (quasi-instantaneous, daily mean, weekly mean) onto the retrieved set of parameters is analysed. Impacts of additional subsurface measurements and of errors in the circulation are also presented.

  12. Chaotic Dynamics of Linguistic-Like Processes at the Syntactical and Semantic Levels: in the Pursuit of a Multifractal Attractor

    NASA Astrophysics Data System (ADS)

    Nicolis, John S.; Katsikas, Anastassis A.

    Collective parameters such as the Zipf's law-like statistics, the Transinformation, the Block Entropy and the Markovian character are compared for natural, genetic, musical and artificially generated long texts from generating partitions (alphabets) on homogeneous as well as on multifractal chaotic maps. It appears that minimal requirements for a language at the syntactical level such as memory, selectivity of few keywords and broken symmetry in one dimension (polarity) are more or less met by dynamically iterating simple maps or flows, e.g. very simple chaotic hardware. The same selectivity is observed at the semantic level, where the aim is to partition a set of impinging environmental stimuli onto coexisting attractors-categories. Under the regime of pattern recognition and classification, few key features of a pattern or few categories claim the lion's share of the information stored in this pattern and, practically, only these key features are persistently scanned by the cognitive processor. A multifractal attractor model can in principle explain this high selectivity, both at the syntactical and the semantic levels.

  13. A method for real-time generation of augmented reality work instructions via expert movements

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Bhaskar; Winer, Eliot

    2015-03-01

    Augmented Reality (AR) offers tremendous potential for a wide range of fields including entertainment, medicine, and engineering. AR allows digital models to be integrated with a real scene (typically viewed through a video camera) to provide useful information in a variety of contexts. The difficulty in authoring and modifying scenes is one of the biggest obstacles to widespread adoption of AR. 3D models must be created, textured, oriented and positioned to create the complex overlays viewed by a user. This often requires using multiple software packages in addition to performing model format conversions. In this paper, a new authoring tool is presented which uses a novel method to capture product assembly steps performed by a user with a depth+RGB camera. Through a combination of computer vision and image processing techniques, each individual step is decomposed into objects and actions. The objects are matched to those in a predetermined geometry library and the actions turned into animated assembly steps. The subsequent instruction set is then generated with minimal user input. A proof of concept is presented to establish the method's viability.

  14. Early warning smartphone diagnostics for water security and analysis using real-time pH mapping

    NASA Astrophysics Data System (ADS)

    Hossain, Md. Arafat; Canning, John; Ast, Sandra; Rutledge, Peter J.; Jamalipour, Abbas

    2015-12-01

    Early detection of environmental disruption, unintentional or otherwise, is increasingly desired to ensure hazard minimization in many settings. Here, using a field-portable, smartphone fluorimeter to assess water quality based on the pH response of a designer probe, a map of pH of public tap water sites has been obtained. A custom designed Android application digitally processed and mapped the results utilizing the global positioning system (GPS) service of the smartphone. The map generated indicates no disruption in pH for all sites measured, and all the data are assessed to fall inside the upper limit of local government regulations, consistent with authority reported measurements. This implementation demonstrates a new security concept: network environmental forensics utilizing the potential of novel smartgrid analysis with wireless sensors for the detection of potential disruption to water quality at any point in the city. This concept is applicable across all smartgrid strategies within the next generation of the Internet of Things and can be extended on national and global scales to address a range of target analytes, both chemical and biological.

  15. ABC of ladder operators for rationally extended quantum harmonic oscillator systems

    NASA Astrophysics Data System (ADS)

    Cariñena, José F.; Plyushchay, Mikhail S.

    2017-07-01

    The problem of construction of ladder operators for rationally extended quantum harmonic oscillator (REQHO) systems of a general form is investigated in the light of existence of different schemes of the Darboux-Crum-Krein-Adler transformations by which such systems can be generated from the quantum harmonic oscillator. Any REQHO system is characterized by the number of separated states in its spectrum, the number of ‘valence bands’ in which the separated states are organized, and by the total number of the missing energy levels and their position. All these peculiarities of a REQHO system are shown to be detected and reflected by a trinity (A^±, B^±, C^±) of the basic (primary) lowering and raising ladder operators related between themselves by certain algebraic identities with coefficients polynomially-dependent on the Hamiltonian. We show that all the secondary, higher-order ladder operators are obtainable by a composition of the basic ladder operators of the trinity which form the set of the spectrum-generating operators. Each trinity, in turn, can be constructed from the intertwining operators of the two complementary minimal schemes of the Darboux-Crum-Krein-Adler transformations.

  16. A new ultra-high-accuracy angle generator: current status and future direction

    NASA Astrophysics Data System (ADS)

    Guertin, Christian F.; Geckeler, Ralf D.

    2017-09-01

    The lack of an extremely high-accuracy angular positioning device in the United States has left a gap in industrial and scientific efforts conducted there, requiring certain user groups to undertake time-consuming work with overseas laboratories. Specifically, in x-ray mirror metrology the global research community is advancing the state of the art to unprecedented levels. We aim to fill this U.S. gap by developing a versatile high-accuracy angle generator as a part of the national metrology tool set for x-ray mirror metrology and other important industries. Using an established calibration technique to measure the errors of the encoder scale graduations for full-rotation rotary encoders, we implemented an optimized arrangement of sensors positioned to minimize propagation of calibration errors. Our initial feasibility research shows that upon scaling to a full prototype and including additional calibration techniques we can expect to achieve uncertainties at the level of 0.01 arcsec (50 nrad) or better and offer the immense advantage of a highly automatable and customizable product to the commercial market.

  17. Designing manufacturable filters for a 16-band plenoptic camera using differential evolution

    NASA Astrophysics Data System (ADS)

    Doster, Timothy; Olson, Colin C.; Fleet, Erin; Yetzbacher, Michael; Kanaev, Andrey; Lebow, Paul; Leathers, Robert

    2017-05-01

    A 16-band plenoptic camera allows for the rapid exchange of filter sets via a 4x4 filter array on the lens's front aperture. This ability to change out filters allows for an operator to quickly adapt to different locales or threat intelligence. Typically, such a system incorporates a default set of 16 equally spaced flat-topped filters. Knowing the operating theater or the likely targets of interest it becomes advantageous to tune the filters. We propose using a modified beta distribution to parameterize the different possible filters and differential evolution (DE) to search over the space of possible filter designs. The modified beta distribution allows us to jointly optimize the width, taper and wavelength center of each single- or multi-pass filter in the set over a number of evolutionary steps. Further, by constraining the function parameters we can develop solutions which are not just theoretical but manufacturable. We examine two independent tasks: general spectral sensing and target detection. In the general spectral sensing task we utilize the theory of compressive sensing (CS) and find filters that generate codings which minimize the CS reconstruction error based on a fixed spectral dictionary of endmembers. For the target detection task and a set of known targets, we train the filters to optimize the separation of the background and target signature. We compare our results to the default 16 flat-topped non-overlapping filter set which comes with the plenoptic camera and full hyperspectral resolution data which was previously acquired.
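
    As a rough illustration of the kind of search described above, the sketch below uses SciPy's differential evolution to tune beta-distribution-shaped filter transmission curves against a compressive-sensing-style reconstruction objective. The filter parameterization, the random stand-in spectral dictionary, and the least-squares reconstruction proxy are all assumptions made for illustration and are not the authors' implementation.

        # Hedged sketch: differential evolution over beta-parameterized filter shapes.
        # The filter model, spectral dictionary, and objective are illustrative.
        import numpy as np
        from scipy.optimize import differential_evolution
        from scipy.stats import beta

        wavelengths = np.linspace(400, 1000, 200)     # nm, illustrative band
        dictionary = np.random.rand(200, 30)          # stand-in endmember spectra

        def filter_bank(params, n_filters=16):
            """Build n_filters transmission curves from (a, b, center, width) tuples."""
            curves = []
            for i in range(n_filters):
                a, b, center, width = params[4 * i: 4 * i + 4]
                x = np.clip((wavelengths - center) / width + 0.5, 0.0, 1.0)
                curves.append(beta.pdf(x, a, b))
            bank = np.array(curves)
            return bank / (bank.max(axis=1, keepdims=True) + 1e-12)   # peak-normalize

        def reconstruction_error(params):
            """Proxy for CS reconstruction error of a fixed synthetic abundance vector."""
            A = filter_bank(params) @ dictionary                  # 16 coded measurements
            x_true = np.random.RandomState(1).rand(30)
            x_hat, *_ = np.linalg.lstsq(A, A @ x_true, rcond=None)
            return np.linalg.norm(dictionary @ (x_hat - x_true))

        # Bounds double as crude manufacturability constraints on shape and width.
        bounds = [(1, 10), (1, 10), (420, 980), (5, 100)] * 16
        result = differential_evolution(reconstruction_error, bounds,
                                        maxiter=20, polish=False, seed=0)
        print(result.fun)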

  18. Two Methods for Efficient Solution of the Hitting-Set Problem

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh; Fijany, Amir

    2005-01-01

    A paper addresses much of the same subject matter as that of Fast Algorithms for Model-Based Diagnosis (NPO-30582), which appears elsewhere in this issue of NASA Tech Briefs. However, in the paper, the emphasis is more on the hitting-set problem (also known as the transversal problem), which is well known among experts in combinatorics. The authors' primary interest in the hitting-set problem lies in its connection to the diagnosis problem: it is a theorem of model-based diagnosis that in the set-theory representation of the components of a system, the minimal diagnoses of a system are the minimal hitting sets of the system. In the paper, the hitting-set problem (and, hence, the diagnosis problem) is translated from a combinatorial to a computational problem by mapping it onto the Boolean satisfiability and integer-programming problems. The paper goes on to describe developments nearly identical to those summarized in the cited companion NASA Tech Briefs article, including the utilization of Boolean-satisfiability and integer-programming techniques to reduce the computation time and/or memory needed to solve the hitting-set problem.
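
    For readers unfamiliar with the hitting-set formulation, the following sketch enumerates the minimal hitting sets of a small family of conflict sets by brute force. The example conflict sets are made up, and the paper's actual contribution is the mapping to Boolean satisfiability and integer programming rather than this naive enumeration.

        # Hedged sketch: enumerate minimal hitting sets of a family of conflict sets
        # by increasing cardinality. Brute force only; the paper maps the problem to
        # Boolean satisfiability / integer programming instead.
        from itertools import combinations

        def minimal_hitting_sets(conflicts):
            universe = sorted(set().union(*conflicts))
            found = []
            for k in range(1, len(universe) + 1):
                for candidate in combinations(universe, k):
                    s = set(candidate)
                    if all(s & c for c in conflicts):          # hits every conflict set
                        if not any(m <= s for m in found):     # keep only minimal ones
                            found.append(s)
            return found

        # Illustrative conflict sets, as produced by a model-based diagnosis engine.
        conflicts = [{"A", "B"}, {"B", "C"}, {"A", "C"}]
        print(minimal_hitting_sets(conflicts))   # three minimal diagnoses of size two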

  19. Cylindric partitions, W_r characters and the Andrews-Gordon-Bressoud identities

    NASA Astrophysics Data System (ADS)

    Foda, O.; Welsh, T. A.

    2016-04-01

    We study the Andrews-Gordon-Bressoud (AGB) generalisations of the Rogers-Ramanujan q-series identities in the context of cylindric partitions. We recall the definition of r-cylindric partitions, and provide a simple proof of Borodin's product expression for their generating functions, which can be regarded as a limiting case of an unpublished proof by Krattenthaler. We also recall the relationships between the r-cylindric partition generating functions, the principal characters of the \hat{sl}_r algebras, the M^r_{r,r+d} minimal model characters of the W_r algebras, and the r-string abaci generating functions, providing simple proofs for each. We then set r = 2, and use two-cylindric partitions to re-derive the AGB identities as follows. Firstly, we use Borodin's product expression for the generating functions of the two-cylindric partitions with infinitely long parts to obtain the product sides of the AGB identities, times a factor (q; q)_∞^{-1}, which is the generating function of ordinary partitions. Next, we obtain a bijection from the two-cylindric partitions, via two-string abaci, into decorated versions of Bressoud's restricted lattice paths. Extending Bressoud's method of transforming between restricted paths that obey different restrictions, we obtain sum expressions with manifestly non-negative coefficients for the generating functions of the two-cylindric partitions, which contain a factor (q; q)_∞^{-1}. Equating the product and sum expressions of the same two-cylindric partitions, and canceling the factor of (q; q)_∞^{-1} on each side, we obtain the AGB identities.

  20. A design strategy for the use of vortex generators to manage inlet-engine distortion using computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.; Levy, Ralph

    1991-01-01

    A reduced Navier-Stokes solution technique was successfully used to design vortex generator installations for the purpose of minimizing engine face distortion by restructuring the development of secondary flow that is induced in typical 3-D curved inlet ducts. The results indicate that there exists an optimum axial location for this installation of corotating vortex generators, and within this configuration, there exists a maximum spacing between generator blades above which the engine face distortion increases rapidly. Installed vortex generator performance, as measured by engine face circumferential distortion descriptors, is sensitive to Reynolds number and thereby the generator scale, i.e., the ratio of generator blade height to local boundary layer thickness. Installations of corotating vortex generators work well in terms of minimizing engine face distortion within a limited range of generator scales. Hence, the design of vortex generator installations is a point design, and all other conditions are off design. In general, the loss levels associated with a properly designed vortex generator installation are very small; thus, they represent a very good method to manage engine face distortion. This study also showed that the vortex strength, generator scale, and secondary flow field structure have a complicated and interrelated influence over engine face distortion, over and above the influence of the initial arrangement of generators.

  1. How minimal executive feedback influences creative idea generation

    PubMed Central

    Camarda, Anaëlle; Agogué, Marine; Houdé, Olivier; Weil, Benoît; Le Masson, Pascal

    2017-01-01

    The fixation effect is known as one of the most dominant of the cognitive biases against creativity and limits individuals’ creative capacities in contexts of idea generation. Numerous techniques and tools have been established to help overcome these cognitive biases in various disciplines ranging from neuroscience to design sciences. Several works in the developmental cognitive sciences have discussed the importance of inhibitory control and have argued that individuals must first inhibit the spontaneous ideas that come to their mind so that they can generate creative solutions to problems. In line with the above discussions, in the present study, we performed an experiment on one hundred undergraduates from the Faculty of Psychology at Paris Descartes University, in which we investigated a minimal executive feedback-based learning process that helps individuals inhibit intuitive paths to solutions and then gradually drive their ideation paths toward creativity. Our results provide new insights into novel forms of creative leadership for idea generation. PMID:28662154

  2. How minimal executive feedback influences creative idea generation.

    PubMed

    Ezzat, Hicham; Camarda, Anaëlle; Cassotti, Mathieu; Agogué, Marine; Houdé, Olivier; Weil, Benoît; Le Masson, Pascal

    2017-01-01

    The fixation effect is known as one of the most dominant of the cognitive biases against creativity and limits individuals' creative capacities in contexts of idea generation. Numerous techniques and tools have been established to help overcome these cognitive biases in various disciplines ranging from neuroscience to design sciences. Several works in the developmental cognitive sciences have discussed the importance of inhibitory control and have argued that individuals must first inhibit the spontaneous ideas that come to their mind so that they can generate creative solutions to problems. In line with the above discussions, in the present study, we performed an experiment on one hundred undergraduates from the Faculty of Psychology at Paris Descartes University, in which we investigated a minimal executive feedback-based learning process that helps individuals inhibit intuitive paths to solutions and then gradually drive their ideation paths toward creativity. Our results provide new insights into novel forms of creative leadership for idea generation.

  3. An integrated genetic and physical map of the autosomal recessive polycystic kidney disease region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lens, X.M.; Onuchic, L.F.; Daoust, M.

    1997-05-01

    Autosomal recessive polycystic kidney disease is one of the most common hereditary renal cystic diseases in children. Genetic studies have recently assigned the only known locus for this disorder, PKHD1, to chromosome 6p21-p12. We have generated a YAC contig that spans approximately 5 cM of this region, defined by the markers D6S1253-D6S295, and have mapped 43 sequence-tagged sites (STS) within this interval. This set includes 20 novel STSs, which define 12 unique positions in the region, and three ESTs. A minimal set of two YACs spans the segment D6S465-D6S466, which contains PKHD1, and estimates of their sizes based on information in public databases suggest that the size of the critical region is <3.1 Mb. Twenty-eight STSs map to this interval, giving an average STS density of <1/150 kb. These resources will be useful for establishing a complete transcription map of the PKHD1 region. 10 refs., 1 fig., 1 tab.

  4. An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Allison, E-mail: lewis.allison10@gmail.com; Smith, Ralph; Williams, Brian

    For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.
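
    A minimal sketch of the sequential design idea follows, under the simplifying assumption of a Bayesian linear low-fidelity model (so the posterior update and its uncertainty have closed forms). The candidate grid, prior, and noise level are illustrative and unrelated to the Hydra-TH application.

        # Hedged sketch: greedy sequential design for a Bayesian *linear* low-fidelity
        # model y = theta0 + theta1*x. Each new high-fidelity run is placed where it
        # most shrinks the posterior covariance of (theta0, theta1). Grid, prior, and
        # noise level are illustrative.
        import numpy as np

        def features(x):
            return np.array([1.0, x])

        candidates = np.linspace(0.0, 1.0, 21)       # possible design conditions
        noise_var, cov = 0.05 ** 2, np.eye(2)        # measurement noise, prior covariance
        chosen = []

        for _ in range(3):                           # pick three runs greedily
            def post_logdet(x, cov=cov):
                phi = features(x)[:, None]
                new_cov = np.linalg.inv(np.linalg.inv(cov) + phi @ phi.T / noise_var)
                return np.linalg.slogdet(new_cov)[1]     # smaller = more informative
            best = min(candidates, key=post_logdet)
            phi = features(best)[:, None]
            cov = np.linalg.inv(np.linalg.inv(cov) + phi @ phi.T / noise_var)
            chosen.append(float(best))

        print(chosen)   # tends to select the extremes of the interval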

  5. MSL EDL Entry Guidance using the Entry Terminal Point Controller

    NASA Technical Reports Server (NTRS)

    2006-01-01

    The Mars Science Laboratory will be the first Mars mission to attempt a guided entry with the objective of safely delivering the entry vehicle to a survivable parachute deploy state within 10 km of the pre-designated landing site. The Entry Terminal Point Controller guidance algorithm is derived from the final phase Apollo Command Module guidance and, like Apollo, modulates the bank angle to control range based on deviations in range, altitude rate, and drag acceleration from a reference trajectory. For application to Mars landers which must make use of the tenuous Martian atmosphere, it is critical to balance the lift of the vehicle to minimize the range while still ensuring a safe deploy altitude. An overview of the process to generate optimized guidance settings is presented, discussing improvements made over the last four years. Performance tradeoffs between ellipse size and deploy altitude will be presented, along with imposed constraints of entry acceleration and heating. Performance sensitivities to the bank reversal deadbands, heading alignment, attitude initialization error, and atmospheric delivery errors are presented. Guidance settings for contingency operations, such as those appropriate for severe dust storm scenarios, are evaluated.

  6. Intellicount: High-Throughput Quantification of Fluorescent Synaptic Protein Puncta by Machine Learning

    PubMed Central

    Fantuzzo, J. A.; Mirabella, V. R.; Zahn, J. D.

    2017-01-01

    Abstract Synapse formation analyses can be performed by imaging and quantifying fluorescent signals of synaptic markers. Traditionally, these analyses are done using simple or multiple thresholding and segmentation approaches or by labor-intensive manual analysis by a human observer. Here, we describe Intellicount, a high-throughput, fully-automated synapse quantification program which applies a novel machine learning (ML)-based image processing algorithm to systematically improve region of interest (ROI) identification over simple thresholding techniques. Through processing large datasets from both human and mouse neurons, we demonstrate that this approach allows image processing to proceed independently of carefully set thresholds, thus reducing the need for human intervention. As a result, this method can efficiently and accurately process large image datasets with minimal interaction by the experimenter, making it less prone to bias and less liable to human error. Furthermore, Intellicount is integrated into an intuitive graphical user interface (GUI) that provides a set of valuable features, including automated and multifunctional figure generation, routine statistical analyses, and the ability to run full datasets through nested folders, greatly expediting the data analysis process. PMID:29218324

  7. A Hybrid Cellular Genetic Algorithm for Multi-objective Crew Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Jolai, Fariborz; Assadipour, Ghazal

    Crew scheduling is one of the important problems of the airline industry. The problem is to assign crew members to a set of flights such that every flight is covered. In a robust schedule the assignment should be made so that the total cost, delays, and unbalanced utilization are minimized. As the problem is NP-hard and the objectives are in conflict with each other, a multi-objective meta-heuristic called CellDE, which is a hybrid cellular genetic algorithm, is implemented as the optimization method. The proposed algorithm provides the decision maker with a set of non-dominated or Pareto-optimal solutions, and enables them to choose the best one according to their preferences. A set of problems of different sizes is generated and solved using the proposed algorithm. To evaluate the performance of the proposed algorithm, three metrics are suggested, and the diversity and the convergence of the achieved Pareto front are appraised. Finally a comparison is made between CellDE and PAES, another meta-heuristic algorithm. The results show the superiority of CellDE.

  8. Blueprint XAS: a Matlab-based toolbox for the fitting and analysis of XAS spectra.

    PubMed

    Delgado-Jaime, Mario Ulises; Mewis, Craig Philip; Kennepohl, Pierre

    2010-01-01

    Blueprint XAS is a new Matlab-based program developed to fit and analyse X-ray absorption spectroscopy (XAS) data, most specifically in the near-edge region of the spectrum. The program is based on a methodology that introduces a novel background model into the complete fit model and that is capable of generating any number of independent fits with minimal introduction of user bias [Delgado-Jaime & Kennepohl (2010), J. Synchrotron Rad. 17, 119-128]. The functions and settings on the five panels of its graphical user interface are designed to suit the needs of near-edge XAS data analyzers. A batch function allows for the setting of multiple jobs to be run with Matlab in the background. A unique statistics panel allows the user to analyse a family of independent fits, to evaluate fit models and to draw statistically supported conclusions. The version introduced here (v0.2) is currently a toolbox for Matlab. Future stand-alone versions of the program will also incorporate several other new features to create a full package of tools for XAS data processing.

  9. AssignFit: a program for simultaneous assignment and structure refinement from solid-state NMR spectra

    PubMed Central

    Tian, Ye; Schwieters, Charles D.; Opella, Stanley J.; Marassi, Francesca M.

    2011-01-01

    AssignFit is a computer program developed within the XPLOR-NIH package for the assignment of dipolar coupling (DC) and chemical shift anisotropy (CSA) restraints derived from the solid-state NMR spectra of protein samples with uniaxial order. The method is based on minimizing the difference between experimentally observed solid-state NMR spectra and the frequencies back calculated from a structural model. Starting with a structural model and a set of DC and CSA restraints grouped only by amino acid type, as would be obtained by selective isotopic labeling, AssignFit generates all of the possible assignment permutations and calculates the corresponding atomic coordinates oriented in the alignment frame, together with the associated set of NMR frequencies, which are then compared with the experimental data for best fit. Incorporation of AssignFit in a simulated annealing refinement cycle provides an approach for simultaneous assignment and structure refinement (SASR) of proteins from solid-state NMR orientation restraints. The methods are demonstrated with data from two integral membrane proteins, one α-helical and one β-barrel, embedded in phospholipid bilayer membranes. PMID:22036904

  10. A quantum annealing approach for fault detection and diagnosis of graph-based systems

    NASA Astrophysics Data System (ADS)

    Perdomo-Ortiz, A.; Fluegemann, J.; Narasimhan, S.; Biswas, R.; Smelyanskiy, V. N.

    2015-02-01

    Diagnosing the minimal set of faults capable of explaining a set of given observations, e.g., from sensor readouts, is a hard combinatorial optimization problem usually tackled with artificial intelligence techniques. We present the mapping of this combinatorial problem to quadratic unconstrained binary optimization (QUBO), and the experimental results of instances embedded onto a quantum annealing device with 509 quantum bits. Besides being the first time a quantum approach has been proposed for problems in the advanced diagnostics community, to the best of our knowledge this work is also the first research utilizing the route Problem → QUBO → Direct embedding into quantum hardware, where we are able to implement and tackle problem instances with sizes that go beyond previously reported toy-model proof-of-principle quantum annealing implementations; this is a significant leap in the solution of problems via direct-embedding adiabatic quantum optimization. We discuss some of the programmability challenges in the current generation of the quantum device as well as a few possible ways to extend this work to more complex arbitrary network graphs.
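
    The sketch below illustrates, under strong simplifications, how a small fault-diagnosis instance might be written as a QUBO: one binary variable per component, a unit cost per declared fault to favour minimal diagnoses, and a penalty for any pairwise conflict left unexplained. Restricting to two-element conflicts and solving by exhaustive enumeration are assumptions made for brevity; this is not the authors' mapping or the annealer itself.

        # Hedged sketch: encoding a tiny fault-diagnosis instance as a QUBO.
        # x_i = 1 means "component i is faulty". Each two-element conflict {a, b}
        # must contain a faulty member, enforced by the penalty P*(1 - x_a)*(1 - x_b)
        # (constant term dropped); a unit diagonal cost keeps the diagnosis minimal.
        # Exhaustive enumeration stands in for the annealer.
        import itertools

        components = ["c1", "c2", "c3", "c4"]
        conflicts = [("c1", "c2"), ("c2", "c3"), ("c3", "c4")]
        P = 10.0                                   # penalty weight >> per-fault cost

        idx = {c: i for i, c in enumerate(components)}
        n = len(components)
        Q = [[0.0] * n for _ in range(n)]

        for c in components:                       # each declared fault costs 1
            Q[idx[c]][idx[c]] += 1.0
        for a, b in conflicts:                     # expand P*(1 - x_a)*(1 - x_b)
            i, j = idx[a], idx[b]
            Q[i][i] -= P
            Q[j][j] -= P
            Q[i][j] += P

        def energy(x):
            return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

        best = min(itertools.product([0, 1], repeat=n), key=energy)
        print({c: best[idx[c]] for c in components})   # a minimum-cardinality diagnosis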

  11. 99mTc generators for clinical use based on zirconium molybdate gel and (n, gamma) produced 99Mo: Indian experience in the development and deployment of indigenous technology and processing facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saraswathy, P.; Dey, A.C.; Sarkar, S.K.

    The Indian pursuit of gel generator technology for 99mTc was driven mainly by three considerations, namely, (i) the well-established and reliable production of (n, gamma)-based 99Mo in several tens of GBq quantities in the research reactors in Trombay/Mumbai, India, (ii) the need for a relatively low-cost alternative technology to replace the solvent (MEK) extraction generator system in use in India since the 1970s, and (iii) minimizing dependency on the weekly import of fission-produced 99Mo raw material required for alumina column generators. Extensive investigations on process standardisation for zirconium molybdate gel (ZMG) led to steady progress, achieved both in terms of process technology and the final performance of the 99mTc gel generators. The purity of the final 99mTc product from the Indian gel system was comparable to that obtained from the gold-standard alumina column generators. Based on the feasibility established for reliable small-scale production, as well as satisfactory clinical experience with a number of gel generators used in collaborating hospital radiopharmacies, full-fledged mechanised processing facilities for handling up to 150 g of ZMG were set up. The indigenous design and development included setting up of shielded plant facilities with pneumatic-driven as well as manual controls and special gadgets such as microwave heating of the zirconium molybdate cake, a dispenser for gel granules, loading of gel columns into pre-assembled generator housings, etc. Formal review of the safety features was carried out by the regulatory body and stage-wise clearance for processing low and medium level 99Mo activity was granted. Starting from around 70 GBq of 99Mo handling, the processing facilities have since been successfully operated at a level of 740 GBq of 99Mo, twice a month. In all, 18 batches of gel have been processed and 156 generators produced. The individual generator capacity was 15 to 30 GBq with an elution yield of nearly 75%. 129 generators were supplied to 11 user hospitals and the estimated number of clinical studies done is well over 5000. The salient aspects of the Indian experience have been reported in many forums and shared with the IAEA through the ongoing CRP. The detailed process know-how is available for technology transfer from BRIT, India. (author)

  12. The natural emergence of asymmetric tree-shaped pathways for cooling of a non-uniformly heated domain

    NASA Astrophysics Data System (ADS)

    Cetkin, Erdal; Oliani, Alessandro

    2015-07-01

    Here, we show that the peak temperature on a non-uniformly heated domain can be decreased by embedding a high-conductivity insert in it. The trunk of the high-conductivity insert is in contact with a heat sink. The heat is generated non-uniformly throughout the domain or concentrated in a square spot of length scale 0.1 L0, where L0 is the length scale of the non-uniformly heated domain. Peak and average temperatures are affected by the volume fraction of the high-conductivity material and by the shape of the high-conductivity pathways. This paper uncovers how varying the shape of the symmetric and asymmetric high-conductivity trees affects the overall thermal conductance of the heat generating domain. The tree-shaped high-conductivity inserts tend to grow toward where the heat generation is concentrated in order to minimize the peak temperature, i.e., in order to minimize the resistances to the heat flow. This behaviour of high-conductivity trees is akin to the root growth of plants and trees. They also tend to grow towards sunlight, and their roots tend to grow towards water and nutrients. This paper uncovers the similarity between biological trees and high-conductivity trees, which is that trees should grow asymmetrically when the boundary conditions are non-uniform. We show here that even though all the trees have the same objectives (minimum flow resistance), their shape should not be the same because of the variation in boundary conditions. To sum up, this paper shows that there is a high-conductivity tree design corresponding to minimum peak temperature with fixed constraints and conditions. This result is in accord with the constructal law which states that there should be an optimal design for a given set of conditions and constraints, and this design should be morphed in order to ensure minimum flow resistances as conditions and constraints change.

  13. Charge and energy minimization in electrical/magnetic stimulation of nervous tissue

    NASA Astrophysics Data System (ADS)

    Jezernik, Sašo; Sinkjaer, Thomas; Morari, Manfred

    2010-08-01

    In this work we address the problem of stimulating nervous tissue with the minimal necessary energy at reduced/minimal charge. Charge minimization is related to a valid safety concern (avoidance and reduction of stimulation-induced tissue and electrode damage). Energy minimization plays a role in battery-driven electrical or magnetic stimulation systems (increased lifetime, repetition rates, reduction of power requirements, thermal management). Extensive new theoretical results are derived by employing an optimal control theory framework. These results include derivation of the optimal electrical stimulation waveform for a mixed energy/charge minimization problem, derivation of the charge-balanced energy-minimal electrical stimulation waveform, solutions of a pure charge minimization problem with and without a constraint on the stimulation amplitude, and derivation of the energy-minimal magnetic stimulation waveform. Depending on the set stimulus pulse duration, energy and charge reductions of up to 80% are deemed possible. Results are verified in simulations with an active, mammalian-like nerve fiber model.

  14. Charge and energy minimization in electrical/magnetic stimulation of nervous tissue.

    PubMed

    Jezernik, Saso; Sinkjaer, Thomas; Morari, Manfred

    2010-08-01

    In this work we address the problem of stimulating nervous tissue with the minimal necessary energy at reduced/minimal charge. Charge minimization is related to a valid safety concern (avoidance and reduction of stimulation-induced tissue and electrode damage). Energy minimization plays a role in battery-driven electrical or magnetic stimulation systems (increased lifetime, repetition rates, reduction of power requirements, thermal management). Extensive new theoretical results are derived by employing an optimal control theory framework. These results include derivation of the optimal electrical stimulation waveform for a mixed energy/charge minimization problem, derivation of the charge-balanced energy-minimal electrical stimulation waveform, solutions of a pure charge minimization problem with and without a constraint on the stimulation amplitude, and derivation of the energy-minimal magnetic stimulation waveform. Depending on the set stimulus pulse duration, energy and charge reductions of up to 80% are deemed possible. Results are verified in simulations with an active, mammalian-like nerve fiber model.

  15. Evaluation of Practicing sustainable Industrial Solid Waste Minimization by Manufacturing Firms in Malaysia: Strengths and Weaknesses.

    PubMed

    Mallak, Shadi Kafi; Bakri Ishak, Mohd; Mohamed, Ahmad Fariz

    2016-09-13

    Malaysia is facing an increasing trend in industrial solid waste generation due to industrial development. Thus there is a paramount need to take serious action to move toward sustainable industrial waste management. The main aim of this study is to assess the practice of solid waste minimization by manufacturing firms in the Shah Alam industrial estate, Malaysia. This paper presents a series of descriptive and inferential statistical analyses regarding the level and effects of practicing waste minimization methods, and the seriousness of barriers preventing industries from practicing waste minimization methods. For this purpose the survey questions were designed such that both quantitative (questionnaire) and qualitative (semi-structured interview) data were collected concurrently. Analysis showed that the majority of firms (92%) dispose of their wastes rather than practice other sustainable waste management options. Also, waste minimization methods such as segregation of wastes, on-site recycling and reuse, improved housekeeping and equipment modification were found to make a significant contribution to waste reduction (p<0.05). Lack of expertise (M=3.50), lack of enough information (M=3.54), lack of equipment modification (M=3.16) and lack of specific waste minimization guidelines (M=3.49) have higher mean scores compared with other barriers in different categories. These data were interpreted to elaborate SWOT and TOWS matrices highlighting strengths, weaknesses, threats and opportunities. Accordingly, ten policies were recommended for improving the practice of waste minimization by manufacturing firms, which was the main aim of this research. Implications: This manuscript critically analyses waste minimization practices by manufacturing firms in Malaysia. Both qualitative and quantitative data collection and analysis were conducted to formulate SWOT and TOWS matrices in order to recommend policies and strategies for improving solid waste minimization by manufacturing industries. The results contribute to the knowledge base, and the findings of this study provide useful baseline information and data on industrial solid waste generation and waste minimization practice.

  16. Evaluation of a School-Based Teen Obesity Prevention Minimal Intervention

    ERIC Educational Resources Information Center

    Abood, Doris A.; Black, David R.; Coster, Daniel C.

    2008-01-01

    Objective: A school-based nutrition education minimal intervention (MI) was evaluated. Design: The design was experimental, with random assignment at the school level. Setting: Seven schools were randomly assigned as experimental, and 7 as delayed-treatment. Participants: The experimental group included 551 teens, and the delayed treatment group…

  17. Introduction [Chapter 1]

    Treesearch

    R. C. Musselman; D. G Fox; A. W. Schoettle; C. M. Regan

    1994-01-01

    Wilderness ecosystems in the United States are federally mandated and set aside by the Wilderness Act. They are managed to minimize human impact using methods that leave these systems, to the extent possible, in their natural state uninfluenced by manipulation or disruption by humans. Management often involves controlling or minimizing visual impact by enforcing strict...

  18. Majorization as a Tool for Optimizing a Class of Matrix Functions.

    ERIC Educational Resources Information Center

    Kiers, Henk A.

    1990-01-01

    General algorithms are presented that can be used for optimizing matrix trace functions subject to certain constraints on the parameters. The parameter set that minimizes the majorizing function also decreases the matrix trace function, providing a monotonically convergent algorithm for minimizing the matrix trace function iteratively. (SLD)

  19. Low-Cost Ultra-Wide Genotyping Using Roche/454 Pyrosequencing for Surveillance of HIV Drug Resistance

    PubMed Central

    Dudley, Dawn M.; Chin, Emily N.; Bimber, Benjamin N.; Sanabani, Sabri S.; Tarosso, Leandro F.; Costa, Priscilla R.; Sauer, Mariana M.; Kallas, Esper G.; O'Connor, David H.

    2012-01-01

    Background Great efforts have been made to increase accessibility of HIV antiretroviral therapy (ART) in low and middle-income countries. The threat of wide-scale emergence of drug resistance could severely hamper ART scale-up efforts. Population-based surveillance of transmitted HIV drug resistance ensures the use of appropriate first-line regimens to maximize efficacy of ART programs where drug options are limited. However, traditional HIV genotyping is extremely expensive, providing a cost barrier to wide-scale and frequent HIV drug resistance surveillance. Methods/Results We have developed a low-cost laboratory-scale next-generation sequencing-based genotyping method to monitor drug resistance. We designed primers specifically to amplify protease and reverse transcriptase from Brazilian HIV subtypes and developed a multiplexing scheme using multiplex identifier tags to minimize cost while providing more robust data than traditional genotyping techniques. Using this approach, we characterized drug resistance from plasma in 81 HIV infected individuals collected in São Paulo, Brazil. We describe the complexities of analyzing next-generation sequencing data and present a simplified open-source workflow to analyze drug resistance data. From this data, we identified drug resistance mutations in 20% of treatment naïve individuals in our cohort, which is similar to frequencies identified using traditional genotyping in Brazilian patient samples. Conclusion The developed ultra-wide sequencing approach described here allows multiplexing of at least 48 patient samples per sequencing run, 4 times more than the current genotyping method. This method is also 4-fold more sensitive (5% minimal detection frequency vs. 20%) at a cost 3–5× less than the traditional Sanger-based genotyping method. Lastly, by using a benchtop next-generation sequencer (Roche/454 GS Junior), this approach can be more easily implemented in low-resource settings. This data provides proof-of-concept that next-generation HIV drug resistance genotyping is a feasible and low-cost alternative to current genotyping methods and may be particularly beneficial for in-country surveillance of transmitted drug resistance. PMID:22574170

  20. Improved Transient and Steady-State Performances of Series Resonant ZCS High-Frequency Inverter-Coupled Voltage Multiplier Converter with Dual Mode PFM Control Scheme

    NASA Astrophysics Data System (ADS)

    Chu, Enhui; Gamage, Laknath; Ishitobi, Manabu; Hiraki, Eiji; Nakaoka, Mutsuo

    A variety of switched-mode high-voltage DC power supplies, built around voltage-fed or current-fed high-frequency transformer resonant inverters using MOS-gate bipolar power transistors (IGBTs), have recently been developed for medical-use high-power X-ray generators. In general, a high-voltage, high-power X-ray generator based on a voltage-fed high-frequency inverter with a high-voltage transformer link must meet several performance requirements: (i) a short rise time in the start transient of the X-ray tube voltage, (ii) no overshoot in the tube-voltage transient response, and (iii) minimized voltage ripple in the periodic steady state under extremely wide load variations and filament heater current fluctuations of the X-ray tube. This paper presents a series resonant zero-current soft-switching high-frequency inverter assisted by two lossless inductor snubbers and using a diode-capacitor ladder voltage multiplier (Cockcroft-Walton circuit), implemented for a high-DC-voltage X-ray power generator. This DC high-voltage generator, which incorporates a pulse-frequency-modulated series resonant inverter built from IGBT power modules, operates on a zero-current soft-switching commutation scheme under discontinuous and continuous resonant current transition modes. The series-capacitor-compensated transformer resonant power converter with a high-frequency transformer-linked voltage boost multiplier can employ a novel selectively changed dual-mode PFM control scheme to improve the start-transient and steady-state response characteristics and, with the aid of the two lossless inductor snubbers, can achieve stable zero-current soft-switching commutation over wide load parameter settings and tube filament current variations. Simulation and experimental results prove that this simple, low-cost control implementation based on selectively changed dual-mode PFM for a high-voltage X-ray DC-DC power converter with a voltage multiplier achieves the specified voltage-pattern tracking response, with rapid rise time and no overshoot in the start-transient tube voltage as well as minimized steady-state tube-voltage ripple.

  1. Speed and path control for conflict-free flight in high air traffic demand in terminal airspace

    NASA Astrophysics Data System (ADS)

    Rezaei, Ali

    To accommodate the growing air traffic demand, flights will need to be planned and navigated with a much higher level of precision than today's aircraft flight path. The Next Generation Air Transportation System (NextGen) stands to benefit significantly in safety and efficiency from such movement of aircraft along precisely defined paths. Air Traffic Operations (ATO) relying on such precision--the Precision Air Traffic Operations or PATO--are the foundation of high throughput capacity envisioned for the future airports. In PATO, the preferred method is to manage the air traffic by assigning a speed profile to each aircraft in a given fleet in a given airspace (in practice known as speed control). In this research, an algorithm has been developed, set in the context of a Hybrid Control System (HCS) model, that determines whether a speed control solution exists for a given fleet of aircraft in a given airspace and if so, computes this solution as a collective speed profile that assures separation if executed without deviation. Uncertainties such as weather are not considered but the algorithm can be modified to include uncertainties. The algorithm first computes all feasible sequences (i.e., all sequences that allow the given fleet of aircraft to reach destinations without violating the FAA's separation requirement) by looking at all pairs of aircraft. Then, the most likely sequence is determined and the speed control solution is constructed by a backward trajectory generation, starting with the aircraft last out and proceeding to the first out. This computation can be done for different sequences in parallel, which helps to reduce the computation time. If such a solution does not exist, then the algorithm calculates a minimal path modification (known as path control) that will allow separation-compliant speed control. We will also prove that the algorithm will modify the path without creating a new separation violation. The new path will be generated by adding new waypoints in the airspace. As a byproduct, instead of minimal path modification, one can use the aircraft arrival time schedule to generate the sequence in which the aircraft reach their destinations.
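
    As a toy illustration of the feasibility-checking step, the sketch below enumerates landing sequences for three aircraft with assumed arrival-time windows and a single minimum-separation interval. The windows, the separation value, and the greedy arrival-time assignment are illustrative simplifications, not the algorithm's pairwise speed-profile analysis.

        # Hedged sketch: checking which landing sequences are feasible for a small
        # fleet, given assumed arrival-time windows and a single minimum-separation
        # interval (seconds). Arrival times are assigned greedily as early as allowed.
        from itertools import permutations

        windows = {"AC1": (100, 180), "AC2": (110, 200), "AC3": (90, 150)}
        SEPARATION = 45.0

        def feasible(sequence):
            t = float("-inf")
            for ac in sequence:
                earliest, latest = windows[ac]
                t = max(t + SEPARATION, earliest)   # earliest arrival respecting spacing
                if t > latest:
                    return False
            return True

        for seq in permutations(windows):
            print(seq, "feasible" if feasible(seq) else "infeasible")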

  2. Pricing health benefits: a cost-minimization approach.

    PubMed

    Miller, Nolan H

    2005-09-01

    We study the role of health benefits in an employer's compensation strategy, given the overall goal of minimizing total compensation cost (wages plus health-insurance cost). When employees' health status is private information, the employer's basic benefit package consists of a base wage and a moderate health plan, with a generous plan available for an additional charge. We show that in setting the charge for the generous plan, a cost-minimizing employer should act as a monopolist who sells "health plan upgrades" to its workers, and we discuss ways tax policy can encourage efficiency under cost-minimization and alternative pricing rules.

  3. Approximate solution of the p-median minimization problem

    NASA Astrophysics Data System (ADS)

    Il'ev, V. P.; Il'eva, S. D.; Navrotskaya, A. A.

    2016-09-01

    A version of the facility location problem (the well-known p-median minimization problem) and its generalization—the problem of minimizing a supermodular set function—is studied. These problems are NP-hard, and they are approximately solved by a gradient algorithm that is a discrete analog of the steepest descent algorithm. A priori bounds on the worst-case behavior of the gradient algorithm for the problems under consideration are obtained. As a consequence, a bound on the performance guarantee of the gradient algorithm for the p-median minimization problem in terms of the production and transportation cost matrix is obtained.
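
    A hedged sketch of a discrete steepest-descent heuristic for the p-median problem follows: starting from an arbitrary set of p medians, repeatedly apply the single facility swap that most reduces total service cost. The random cost matrix and the swap neighbourhood are illustrative choices; the paper analyses its gradient algorithm abstractly rather than via this particular code.

        # Hedged sketch: best-single-swap local search (a discrete steepest descent)
        # for the p-median problem with a random service-cost matrix.
        import numpy as np

        rng = np.random.default_rng(0)
        cost = rng.uniform(1, 10, size=(8, 8))    # cost[i, j]: serve client j from site i
        p = 3

        def total_cost(medians):
            # Each client is served by its cheapest open site.
            return cost[list(medians)].min(axis=0).sum()

        medians = set(range(p))                    # arbitrary feasible starting set
        improved = True
        while improved:
            improved = False
            best = (total_cost(medians), medians)
            for out in medians:                    # steepest descent over all single swaps
                for into in set(range(cost.shape[0])) - medians:
                    cand = (medians - {out}) | {into}
                    c = total_cost(cand)
                    if c < best[0]:
                        best, improved = (c, cand), True
            medians = best[1]

        print(sorted(medians), round(best[0], 3))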

  4. Assessment of policy impacts on carbon capture and sequestration and bioenergy for U.S.' coal and natural gas power plants

    NASA Astrophysics Data System (ADS)

    Spokas, K.; Patrizio, P.; Leduc, S.; Mesfun, S.; Kraxner, F.

    2017-12-01

    Reducing electricity-sector emissions relies heavily on countries' abilities to either transition away from carbon-intensive energy generation or to sequester its resultant emissions with carbon capture and storage (CCS) technologies. The use of biomass energy technologies in conjunction with carbon capture and sequestration (BECCS) presents the opportunity for net reductions in atmospheric carbon dioxide. In this study, we investigate the limitations of several common policy mechanisms to incentivize the deployment of BECCS using the techno-economic spatial optimization model BeWhere (www.iiasa.ac.at/bewhere). We consider a set of coal and natural gas power plants in the United States (U.S.) selected using a screening process that considers capacity, boiler age, and capacity factor for electricity-generation units from the EPA 2014 eGRID database. The set makes up 470 GW of generation, and produces 8,400 PJ and 2.07 GtCO2 annually. Co-firing up to 15% for coal power plants is considered, using woody-biomass residues sourced from certified and managed U.S. forests obtained from the G4M (www.iiasa.ac.at/g4m) and GeoWiki (www.geo-wiki.org) database. Geologic storage is considered with injectivity and geomechanical limitations to ensure safe storage. Costs are minimized under two policy mechanisms: a carbon tax and geologic carbon sequestration credits, such as the 45Q credits. Results show that the carbon tax scenario incentivizes co-firing at low to medium carbon taxes, but is replaced by CCS at higher tax values. Carbon taxes do not strongly incentivize BECCS, as negative emissions associated with sequestering carbon content are not accounted as revenue. On the other hand, carbon credit scenarios result in significant CCS deployment, but lack any incentive for co-firing.

  5. Combined computational-experimental design of high temperature, high-intensity permanent magnetic alloys with minimal addition of rare-earth elements

    NASA Astrophysics Data System (ADS)

    Jha, Rajesh

    AlNiCo magnets are known for high-temperature stability and superior corrosion resistance and have been widely used for various applications. The reported magnetic energy density ((BH)max) for these magnets is around 10 MGOe. Theoretical calculations show that a (BH)max of 20 MGOe is achievable, which would help close the gap between AlNiCo and rare-earth-element (REE) based magnets. This dissertation studies an extended family of AlNiCo alloys consisting of eight elements; it is therefore important to determine the composition-property relationship for each of the alloying elements and their influence on the bulk properties. In the present research, we proposed a novel approach to efficiently use a set of computational tools based on several concepts of artificial intelligence to address the complex problem of design and optimization of high-temperature REE-free magnetic alloys. A multi-dimensional random number generation algorithm was used to generate the initial set of chemical concentrations. These alloys were then examined for phase equilibria and associated magnetic properties as a screening tool to form the initial alloy set. These alloys were manufactured and tested for the desired properties. The properties were fitted with a set of multi-dimensional response surfaces and the most accurate meta-models were chosen for prediction. These properties were simultaneously extremized by utilizing a multi-objective optimization algorithm. This provided a set of concentrations of each of the alloying elements for optimized properties. A few of the best predicted Pareto-optimal alloy compositions were then manufactured and tested to evaluate the predicted properties. These alloys were then added to the existing data set and used to improve the accuracy of the meta-models. The multi-objective optimizer then used the new meta-models to find a new set of improved Pareto-optimized chemical concentrations. This design cycle was repeated twelve times in this work. Several of these Pareto-optimized alloys outperformed most of the candidate alloys on most of the objectives. Unsupervised learning methods such as Principal Component Analysis (PCA) and Hierarchical Cluster Analysis (HCA) were used to discover various patterns within the dataset. This demonstrates the efficacy of the combined meta-modeling and experimental approach in the design optimization of magnetic alloys.
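
    The following sketch mimics one pass of the surrogate-assisted loop described above on a toy two-objective problem: random candidate compositions are scored by stand-in response surfaces and the non-dominated (Pareto-optimal) candidates are retained for the next manufacture-and-test round. The composition sampling, the toy objectives, and the dominance filter are illustrative assumptions, not the dissertation's models.

        # Hedged sketch: one surrogate-assisted screening pass on a toy two-objective
        # problem. Random 8-element compositions are scored by stand-in "meta-models"
        # and only the non-dominated candidates are kept for the next test round.
        import numpy as np

        rng = np.random.default_rng(0)
        candidates = rng.dirichlet(np.ones(8), size=200)   # compositions summing to 1

        def predicted_objectives(x):
            # Stand-ins for fitted response surfaces of two properties to maximize.
            return np.array([x[:3].sum() - 2 * x[3] ** 2, x[4:6].sum() - x[6] * x[7]])

        scores = np.array([predicted_objectives(x) for x in candidates])

        def pareto_front(scores):
            keep = []
            for i, s in enumerate(scores):
                dominated = any((t >= s).all() and (t > s).any()
                                for j, t in enumerate(scores) if j != i)
                if not dominated:
                    keep.append(i)
            return keep

        front = pareto_front(scores)
        print(len(front), "non-dominated candidate compositions kept for testing")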

  6. Adiabatic density perturbations and matter generation from the minimal supersymmetric standard model.

    PubMed

    Enqvist, Kari; Kasuya, Shinta; Mazumdar, Anupam

    2003-03-07

    We propose that the inflaton is coupled to ordinary matter only gravitationally and that it decays into a completely hidden sector. In this scenario both baryonic and dark matter originate from the decay of a flat direction of the minimal supersymmetric standard model, which is shown to generate the desired adiabatic perturbation spectrum via the curvaton mechanism. The requirement that the energy density along the flat direction dominates over the inflaton decay products fixes the flat direction almost uniquely. The present residual energy density in the hidden sector is typically shown to be small.

  7. Real-time stereo generation for surgical vision during minimal invasive robotic surgery

    NASA Astrophysics Data System (ADS)

    Laddi, Amit; Bhardwaj, Vijay; Mahapatra, Prasant; Pankaj, Dinesh; Kumar, Amod

    2016-03-01

    This paper proposes a framework for 3D surgical vision for minimally invasive robotic surgery. It presents an approach for generating a three-dimensional view of live in-vivo surgical procedures from two images captured by a very small, full-resolution camera sensor rig. A pre-processing scheme is employed to enhance image quality and equalize the color profiles of the two images. Polarized projection using interlacing of the two images gives a smooth and strain-free three-dimensional view. The algorithm runs in real time at good speed at full HD resolution.

  8. Procedures for minimizing the effects of high solar activity on satellite tracking and ephemeris generation

    NASA Technical Reports Server (NTRS)

    Bredvik, Gordon D.

    1990-01-01

    We are currently experiencing a period of high solar radiation combined with wide short-term fluctuations in the radiation. The short-term fluctuations, especially when combined with highly energetic solar flares, can adversely affect the mission of U.S. Space Command's Space Surveillance Center (SSC) which catalogs and tracks the satellites in orbit around the Earth. Rapidly increasing levels of solar electromagnetic and/or particle radiation (solar wind) causes atmospheric warming, which, in turn, causes the upper-most portions of the atmosphere to expand outward, into the regime of low altitude satellites. The increased drag on satellites from this expansion can cause large, unmodeled, in-track displacements, thus undermining the SSC's ability to track and predict satellite position. On 13 March 1989, high solar radiation levels, combined with a high-energy solar flare, caused an exceptional amount of short-term atmospheric warming. The SSC temporarily lost track of over 1300 low altitude satellites--nearly half of the low altitude satellite population. Observational data on satellites that became lost during the days following the 13 March 'solar event' was analyzed and compared with the satellites' last element set prior to the event (referred to as a geomagnetic storm because of the large increase in magnetic flux in the upper atmosphere). The analysis led to a set of procedures for reducing the impact of future geomagnetic storms. These procedures adjust selected software limit parameters in the differential correction of element sets and in the observation association process and must be manually initiated at the onset of a geomagnetic storm. Sensor tasking procedures must be adjusted to ensure that a minimum of four observations per day are received for low altitude satellites. These procedures have been implemented and, thus far, appear to be successful in minimizing the effect of subsequent geomagnetic storms on satellite tracking and ephemeris computation.

  9. 76 FR 10564 - Takes of Marine Mammals Incidental to Specified Activities; St. George Reef Light Station...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-25

    .... Acoustic and visual stimuli generated by: (1) Helicopter landings/takeoffs; (2) noise generated during... minimize acoustic and visual disturbances) as described in NMFS' December 22, 2010 (75 FR 80471) notice of... Activity on Marine Mammals Acoustic and visual stimuli generated by: (1) Helicopter landings/ takeoffs; (2...

  10. A game-based platform for crowd-sourcing biomedical image diagnosis and standardized remote training and education of diagnosticians

    NASA Astrophysics Data System (ADS)

    Feng, Steve; Woo, Minjae; Chandramouli, Krithika; Ozcan, Aydogan

    2015-03-01

    Over the past decade, crowd-sourcing complex image analysis tasks to a human crowd has emerged as an alternative to energy-inefficient and difficult-to-implement computational approaches. Following this trend, we have developed a mathematical framework for statistically combining human crowd-sourcing of biomedical image analysis and diagnosis through games. Using a web-based smart game (BioGames), we demonstrated this platform's effectiveness for telediagnosis of malaria from microscopic images of individual red blood cells (RBCs). After public release in early 2012 (http://biogames.ee.ucla.edu), more than 3000 gamers (experts and non-experts) used this BioGames platform to diagnose over 2800 distinct RBC images, marking them as positive (infected) or negative (non-infected). Furthermore, we asked expert diagnosticians to tag the same set of cells with labels of positive, negative, or questionable (insufficient information for a reliable diagnosis) and statistically combined their decisions to generate a gold standard malaria image library. Our framework utilized minimally trained gamers' diagnoses to generate a set of statistical labels with an accuracy that is within 98% of our gold standard image library, demonstrating the "wisdom of the crowd". Using the same image library, we have recently launched a web-based malaria training and educational game allowing diagnosticians to compare their performance with their peers. After diagnosing a set of ~500 cells per game, diagnosticians can compare their quantified scores against a leaderboard and view their misdiagnosed cells. Using this platform, we aim to expand our gold standard library with new RBC images and provide a quantified digital tool for measuring and improving diagnostician training globally.
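
    One simple way to combine repeated crowd labels, shown below purely as an illustration, is an accuracy-weighted vote in which each gamer's weight is estimated from control cells with known ground truth. This is not the paper's exact statistical framework, and all identifiers and numbers are made up.

        # Hedged sketch: fusing repeated crowd labels per cell with an accuracy-weighted
        # vote. Each gamer's weight is their accuracy on control cells with known ground
        # truth; labels are 1 = infected, 0 = clean. All values are made up.
        from collections import defaultdict

        votes = [("g1", "c1", 1), ("g2", "c1", 1), ("g3", "c1", 0),
                 ("g1", "c2", 0), ("g2", "c2", 0), ("g3", "c2", 1)]
        accuracy = {"g1": 0.95, "g2": 0.80, "g3": 0.55}

        def fuse(votes, accuracy):
            score, weight = defaultdict(float), defaultdict(float)
            for gamer, cell, label in votes:
                w = accuracy[gamer]
                score[cell] += w * label
                weight[cell] += w
            return {cell: int(score[cell] / weight[cell] >= 0.5) for cell in score}

        print(fuse(votes, accuracy))   # {'c1': 1, 'c2': 0}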

  11. Optimized System Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Longman, Richard W.

    1999-01-01

    In system identification, one usually cares most about finding a model whose outputs are as close as possible to the true system outputs when the same input is applied to both. However, most system identification algorithms do not minimize this output error. Often they minimize model equation error instead, as in typical least-squares fits using a finite-difference model, and it is seen here that this distinction is significant. Here, we develop a set of system identification algorithms that minimize output error for multi-input/multi-output and multi-input/single-output systems. This is done with sequential quadratic programming iterations on the nonlinear least-squares problems, with an eigendecomposition to handle indefinite second partials. This optimization minimizes a nonlinear function of many variables, and hence can converge to local minima. To handle this problem, we start the iterations from the OKID (Observer/Kalman Identification) algorithm result. Not only has OKID proved very effective in practice, it minimizes an output error of an observer which has the property that as the data set gets large, it converges to minimizing the criterion of interest here. Hence, it is a particularly good starting point for the nonlinear iterations here. Examples show that the methods developed here eliminate the bias that is often observed using any system identification methods of either over-estimating or under-estimating the damping of vibration modes in lightly damped structures.
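
    To make the equation-error versus output-error distinction concrete, the sketch below fits a second-order difference-equation model by minimizing the mismatch between the measured output and the model's simulated output with a nonlinear least-squares solver. The synthetic system, noise level, and starting guess are illustrative, and this is not the OKID-initialized algorithm developed in the paper.

        # Hedged sketch: output-error identification of the difference-equation model
        #   y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1]
        # by minimizing the gap between the measured output and the model's *simulated*
        # output (rather than the one-step equation error).
        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(1)
        true = np.array([1.5, -0.7, 0.5])                 # a1, a2, b1 of the toy system
        u = rng.standard_normal(300)

        def simulate(theta, u):
            a1, a2, b1 = theta
            y = np.zeros(len(u))
            for k in range(2, len(u)):
                # Clip so that unstable trial parameters keep the residual finite.
                y[k] = np.clip(a1 * y[k - 1] + a2 * y[k - 2] + b1 * u[k - 1], -1e6, 1e6)
            return y

        y_meas = simulate(true, u) + 0.05 * rng.standard_normal(len(u))

        res = least_squares(lambda th: simulate(th, u) - y_meas, x0=[0.5, 0.0, 0.1])
        print(res.x)    # estimates close to [1.5, -0.7, 0.5]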

  12. Distance majorization and its applications.

    PubMed

    Chi, Eric C; Zhou, Hua; Lange, Kenneth

    2014-08-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
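
    A minimal sketch of the distance-majorization step follows, assuming a quadratic objective and two sets with easy projections (a ball and a half-space): each distance term is majorized by the squared distance to the projection of the current iterate, which gives a closed-form update. The penalty parameter, the sets, and the fixed number of iterations are illustrative choices.

        # Hedged sketch of distance majorization: minimize
        #   0.5*||x - y||^2 + (rho/2) * sum_i dist(x, C_i)^2
        # by majorizing each dist(x, C_i)^2 with ||x - P_i(x_n)||^2 at the current
        # iterate, which makes every MM update a closed-form average. The sets here
        # are a unit ball and a half-space; rho would normally be driven upward to
        # enforce the constraints exactly.
        import numpy as np

        y = np.array([3.0, 2.0])                               # point to project

        def proj_ball(x, center=np.zeros(2), radius=1.0):
            d = x - center
            n = np.linalg.norm(d)
            return x if n <= radius else center + radius * d / n

        def proj_halfspace(x, a=np.array([1.0, 1.0]), b=0.5):  # onto {x : a.x <= b}
            v = a @ x - b
            return x if v <= 0 else x - v * a / (a @ a)

        projections = [proj_ball, proj_halfspace]
        rho, x = 100.0, y.copy()
        for _ in range(200):                                   # MM iterations
            anchors = [P(x) for P in projections]
            x = (y + rho * sum(anchors)) / (1.0 + rho * len(projections))

        print(x, np.linalg.norm(x), x @ np.array([1.0, 1.0]))  # near the intersection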

  13. Optimizing Radiometric Fidelity to Enhance Aerial Image Change Detection Utilizing Digital Single Lens Reflex (DSLR) Cameras

    NASA Astrophysics Data System (ADS)

    Kerr, Andrew D.

    Determining optimal imaging settings and best practices for the capture of aerial imagery using consumer-grade digital single lens reflex (DSLR) cameras should enable remote sensing scientists to generate consistent, high-quality, and low-cost image data sets. Radiometric optimization, image fidelity, and image capture consistency and repeatability were evaluated in the context of detailed image-based change detection. The impetus for this research is, in part, a dearth of relevant, contemporary literature on the utilization of consumer-grade DSLR cameras for remote sensing and the best practices associated with their use. The main radiometric control settings on a DSLR camera, EV (exposure value), WB (white balance), light metering, ISO, and aperture (f-stop), are variables that were altered and controlled over the course of several image capture missions. These variables were compared for their effects on dynamic range, intra-frame brightness variation, visual acuity, temporal consistency, and the detectability of simulated cracks placed in the images. This testing was conducted from a terrestrial, rather than an airborne, collection platform, due to the large number of images per collection and the desire to minimize inter-image misregistration. The results point to a range of slightly underexposed exposure values as preferable for change detection and noise minimization. The makeup of the scene, the sensor, and the aerial platform influence the selection of the aperture and shutter speed, which along with other variables allow for estimation of the apparent image motion (AIM) blur in the resulting images. The importance of image edges in the application will in part dictate the lowest usable f-stop, and allow the user to select a more optimal shutter speed and ISO. The single most important camera capture variable is exposure bias (EV), with a full dynamic range, a wide distribution of DN values, and high visual contrast and acuity occurring around -0.7 to -0.3 EV exposure bias. The ideal value for sensor gain was found to be ISO 100, with ISO 200 less desirable. This study offers researchers a better understanding of the effects of camera capture settings on RSI pairs and their influence on image-based change detection.
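
    A small, hedged sketch of the apparent image motion (AIM) estimate mentioned above: blur in pixels is approximated as platform ground speed times exposure time divided by the ground sample distance. This is the standard first-order approximation, and all numbers are illustrative rather than the study's platform.

        # Hedged sketch: estimating apparent image motion (AIM) blur in pixels from
        # platform and camera parameters, so that shutter speed (and hence aperture
        # and ISO) can be chosen to keep blur below a tolerance. Values illustrative.
        def ground_sample_distance(altitude_m, focal_length_mm, pixel_pitch_um):
            return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

        def aim_blur_pixels(speed_mps, shutter_s, altitude_m, focal_mm, pitch_um):
            gsd = ground_sample_distance(altitude_m, focal_mm, pitch_um)   # metres/pixel
            return speed_mps * shutter_s / gsd

        # Example: 30 m/s platform, 300 m altitude, 50 mm lens, 4.1 um pixel pitch.
        for shutter in (1 / 500, 1 / 1000, 1 / 2000):
            blur = aim_blur_pixels(30.0, shutter, 300.0, 50.0, 4.1)
            print(f"1/{round(1 / shutter)} s -> {blur:.2f} px of motion blur")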

  14. 2016 American College of Rheumatology/European League Against Rheumatism criteria for minimal, moderate, and major clinical response in adult dermatomyositis and polymyositis: An International Myositis Assessment and Clinical Studies Group/Paediatric Rheumatology International Trials Organisation Collaborative Initiative.

    PubMed

    Aggarwal, Rohit; Rider, Lisa G; Ruperto, Nicolino; Bayat, Nastaran; Erman, Brian; Feldman, Brian M; Oddis, Chester V; Amato, Anthony A; Chinoy, Hector; Cooper, Robert G; Dastmalchi, Maryam; Fiorentino, David; Isenberg, David; Katz, James D; Mammen, Andrew; de Visser, Marianne; Ytterberg, Steven R; Lundberg, Ingrid E; Chung, Lorinda; Danko, Katalin; García-De la Torre, Ignacio; Song, Yeong Wook; Villa, Luca; Rinaldi, Mariangela; Rockette, Howard; Lachenbruch, Peter A; Miller, Frederick W; Vencovsky, Jiri

    2017-05-01

    To develop response criteria for adult dermatomyositis (DM) and polymyositis (PM). Expert surveys, logistic regression, and conjoint analysis were used to develop 287 definitions using core set measures. Myositis experts rated greater improvement among multiple pairwise scenarios in conjoint analysis surveys, where different levels of improvement in 2 core set measures were presented. The PAPRIKA (Potentially All Pairwise Rankings of All Possible Alternatives) method determined the relative weights of core set measures and conjoint analysis definitions. The performance characteristics of the definitions were evaluated on patient profiles using expert consensus (gold standard) and were validated using data from a clinical trial. The nominal group technique was used to reach consensus. Consensus was reached for a conjoint analysis-based continuous model using absolute per cent change in core set measures (physician, patient, and extramuscular global activity, muscle strength, Health Assessment Questionnaire, and muscle enzyme levels). A total improvement score (range 0-100), determined by summing scores for each core set measure, was based on improvement in and relative weight of each core set measure. Thresholds for minimal, moderate, and major improvement were ≥20, ≥40, and ≥60 points in the total improvement score. The same criteria were chosen for juvenile DM, with different improvement thresholds. Sensitivity and specificity in DM/PM patient cohorts were 85% and 92%, 90% and 96%, and 92% and 98% for minimal, moderate, and major improvement, respectively. Definitions were validated in the clinical trial analysis for differentiating the physician rating of improvement (p<0.001). The response criteria for adult DM/PM consisted of the conjoint analysis model based on absolute per cent change in 6 core set measures, with thresholds for minimal, moderate, and major improvement.
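
    A minimal Python sketch of the total-improvement-score logic follows, assuming hypothetical equal per-measure weights and a simplified linear mapping from absolute per cent change to points; the actual criteria use measure-specific scoring derived from the conjoint analysis, so this only illustrates how the summed score is compared against the ≥20/≥40/≥60 thresholds.

    # Illustration of the total-improvement-score logic: each of the 6 core set
    # measures contributes a weighted score derived from its absolute per cent
    # change, the contributions sum to a 0-100 total, and the total is compared
    # against the thresholds (>=20 minimal, >=40 moderate, >=60 major).  The
    # linear per-measure mapping and equal weights are simplifying assumptions.

    CORE_SET_MEASURES = ["physician_global", "patient_global", "extramuscular_global",
                         "muscle_strength", "haq", "muscle_enzymes"]

    def measure_score(abs_pct_change, max_points):
        """Map absolute % improvement to points, capped at this measure's maximum."""
        return min(max(abs_pct_change, 0.0), 100.0) / 100.0 * max_points

    def total_improvement_score(abs_pct_changes, max_points):
        return sum(measure_score(abs_pct_changes[m], max_points[m]) for m in CORE_SET_MEASURES)

    def classify(score):
        if score >= 60: return "major improvement"
        if score >= 40: return "moderate improvement"
        if score >= 20: return "minimal improvement"
        return "no response"

    if __name__ == "__main__":
        weights = {m: 100.0 / len(CORE_SET_MEASURES) for m in CORE_SET_MEASURES}   # hypothetical
        changes = {"physician_global": 50, "patient_global": 30, "extramuscular_global": 20,
                   "muscle_strength": 40, "haq": 25, "muscle_enzymes": 10}          # hypothetical
        score = total_improvement_score(changes, weights)
        print(f"total improvement score = {score:.1f} -> {classify(score)}")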

  15. Detection of severe respiratory disease epidemic outbreaks by CUSUM-based overcrowd-severe-respiratory-disease-index model.

    PubMed

    Polanco, Carlos; Castañón-González, Jorge Alberto; Macías, Alejandro E; Samaniego, José Lino; Buhse, Thomas; Villanueva-Martínez, Sebastián

    2013-01-01

    A severe respiratory disease epidemic outbreak correlates with a high demand for specific supplies and specialized personnel to hold it back in a wide region or set of regions; these supplies and personnel include beds, storage areas, hemodynamic monitors, and mechanical ventilators, as well as physicians, respiratory technicians, and specialized nurses. We describe an online cumulative sum (CUSUM)-based model, named the Overcrowd-Severe-Respiratory-Disease-Index and built on the Modified Overcrowd Index, that simultaneously monitors and reports the demand for those supplies and personnel across a healthcare network, generating early warnings of severe respiratory disease epidemic outbreaks through the interpretation of such variables. A post hoc historical archive is also generated, helping the physicians in charge to improve the transit and future allocation of supplies across the entire hospital network during the outbreak. The model was thoroughly verified in a virtual scenario, generating multiple epidemic outbreaks over a 6-year span for a 13-hospital network. When superimposed over the H1N1 influenza outbreak census (2008-2010) taken by the National Institute of Medical Sciences and Nutrition Salvador Zubiran in Mexico City, it proved to be an effective algorithm for issuing early warnings of severe respiratory disease epidemic outbreaks with a minimal rate of false alerts.
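
    A minimal sketch of the kind of one-sided CUSUM detector underlying such a model is shown below; the monitored series, reference level, slack, and decision threshold are all illustrative assumptions rather than the parameters used in the Overcrowd-Severe-Respiratory-Disease-Index.

    import numpy as np

    # One-sided CUSUM detector: the statistic S_t = max(0, S_{t-1} + (x_t - target - slack))
    # accumulates demand above a reference level and raises an alert when it crosses a
    # decision threshold.  The series and parameters below are simulated, not from the paper.

    def cusum_alerts(series, target, slack, threshold):
        """Return indices at which the upper CUSUM statistic crosses `threshold`."""
        s, alerts = 0.0, []
        for t, x in enumerate(series):
            s = max(0.0, s + (x - target - slack))
            if s > threshold:
                alerts.append(t)
                s = 0.0  # reset after signalling so later outbreaks are also flagged
        return alerts

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        demand = np.concatenate([rng.poisson(20, 60),   # baseline daily demand
                                 rng.poisson(35, 20),   # elevated demand during an outbreak
                                 rng.poisson(20, 30)])  # return to baseline
        print("alert days:", cusum_alerts(demand, target=20, slack=2, threshold=15))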

  16. Generation of light-sheet at the end of multimode fibre (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Plöschner, Martin; Kollárová, Véra; Dostál, Zbyněk; Nylk, Jonathan; Barton-Owen, Thomas; Ferrier, David E. K.; Chmelik, Radim; Dholakia, Kishan; Čižmár, Tomáš

    2017-02-01

    Light-sheet fluorescence microscopy is quickly becoming one of the cornerstone imaging techniques in biology, as it provides rapid, three-dimensional sectioning of specimens at minimal levels of phototoxicity. It is very appealing to bring this unique combination of imaging properties into an endoscopic setting and be able to perform optical sectioning deep in tissues. Current endoscopic approaches for delivering light-sheet illumination are based on a single-mode optical fibre terminated by a cylindrical gradient-index lens. Such a configuration generates a light-sheet plane that is axially fixed, so mechanical movement of either the sample or the endoscope is required to acquire three-dimensional information about the sample. Furthermore, the axial resolution of this technique is limited to 5 µm. Delivering the light-sheet through a multimode fibre provides better axial resolution, limited only by the fibre's numerical aperture; the light-sheet is scanned holographically without any mechanical movement; and multiple advanced light-sheet imaging modalities, such as Bessel beam and structured-illumination Bessel beam, are intrinsically supported by the system owing to the cylindrical symmetry of the fibre. We discuss the holographic techniques for generating multiple light-sheet types and demonstrate imaging on a sample of fluorescent beads fixed in agarose gel, as well as on a biological sample of Spirobranchus lamarcki.

  17. Development of Three-Dimensional DRAGON Grid Technology

    NASA Technical Reports Server (NTRS)

    Zheng, Yao; Liou, Meng-Sing; Civinskas, Kestutis C.

    1999-01-01

    For a typical three-dimensional flow in a practical engineering device, grid generation can take 70 percent of the total analysis effort, creating a serious bottleneck in the design/analysis cycle. The present research attempts to develop a procedure that can considerably reduce the grid generation effort. The DRAGON grid, a hybrid grid, is created by means of a Direct Replacement of Arbitrary Grid Overlapping by Nonstructured grid. The DRAGON grid scheme is an adaptation of the Chimera approach. The Chimera grid is a composite structured grid comprising a set of overlapping structured grids that are independently generated and body-fitted. The grid is of high quality and amenable to efficient solution schemes. However, the interpolation used in the overlapped regions between grids introduces error, especially when a sharp-gradient region is encountered. The DRAGON grid scheme completely eliminates this interpolation and preserves the conservation property. It retains the advantages of the Chimera scheme and adopts the strengths of unstructured grids while keeping their weaknesses to a minimum. In the present paper, we describe progress towards extending the DRAGON grid technology into three dimensions. Essential and programming aspects of the extension, along with new challenges posed by the three-dimensional cases, are addressed.

  18. Aeroacoustic Simulations of a Nose Landing Gear with FUN3D: A Grid Refinement Study

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Lockard, David P.

    2017-01-01

    A systematic grid refinement study is presented for numerical simulations of a partially-dressed, cavity-closed (PDCC) nose landing gear configuration that was tested in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D is used to compute the unsteady flow field for this configuration. Mixed-element grids generated with the Pointwise (Registered Trademark) grid generation software are used for the numerical simulations. Particular care is taken to ensure quality cells and proper resolution in critical areas of interest, in an effort to minimize errors introduced by numerical artifacts. A set of grids was generated in this manner to create a family of uniformly refined grids. The finest grid was then modified to coarsen the wall-normal spacing, creating a grid suitable for the wall-function implementation in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence modeling approach is used for these simulations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. The CFD solutions are also used as input to a Ffowcs Williams-Hawkings (FW-H) noise propagation code to compute the far-field noise levels. The agreement of the computed results with the experimental data improves as the grid is refined.
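
    As an illustration of how such a family of uniformly refined grids is often assessed, the Python sketch below computes an observed order of convergence and a Richardson-extrapolated value from a quantity of interest on three grids; this is a standard verification check, not the paper's own post-processing, and the numbers are hypothetical.

    import math

    # Observed order of convergence and Richardson extrapolation from a quantity of
    # interest computed on three uniformly refined grids (coarse, medium, fine) with a
    # constant refinement ratio r.  The values below are hypothetical.

    def observed_order(f_coarse, f_medium, f_fine, r):
        """Observed order of convergence p for refinement ratio r."""
        return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

    def richardson_extrapolate(f_medium, f_fine, r, p):
        """Estimate of the grid-converged value from the two finest grids."""
        return f_fine + (f_fine - f_medium) / (r ** p - 1.0)

    if __name__ == "__main__":
        f_coarse, f_medium, f_fine = 0.412, 0.396, 0.391   # hypothetical values
        r = 1.5                                            # hypothetical refinement ratio
        p = observed_order(f_coarse, f_medium, f_fine, r)
        print(f"observed order p ~ {p:.2f}")
        print(f"extrapolated value ~ {richardson_extrapolate(f_medium, f_fine, r, p):.4f}")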

  19. Detection of Severe Respiratory Disease Epidemic Outbreaks by CUSUM-Based Overcrowd-Severe-Respiratory-Disease-Index Model

    PubMed Central

    Castañón-González, Jorge Alberto; Macías, Alejandro E.; Samaniego, José Lino; Buhse, Thomas; Villanueva-Martínez, Sebastián

    2013-01-01

    A severe respiratory disease epidemic outbreak correlates with a high demand for specific supplies and specialized personnel to hold it back in a wide region or set of regions; these supplies and personnel include beds, storage areas, hemodynamic monitors, and mechanical ventilators, as well as physicians, respiratory technicians, and specialized nurses. We describe an online cumulative sum (CUSUM)-based model, named the Overcrowd-Severe-Respiratory-Disease-Index and built on the Modified Overcrowd Index, that simultaneously monitors and reports the demand for those supplies and personnel across a healthcare network, generating early warnings of severe respiratory disease epidemic outbreaks through the interpretation of such variables. A post hoc historical archive is also generated, helping the physicians in charge to improve the transit and future allocation of supplies across the entire hospital network during the outbreak. The model was thoroughly verified in a virtual scenario, generating multiple epidemic outbreaks over a 6-year span for a 13-hospital network. When superimposed over the H1N1 influenza outbreak census (2008–2010) taken by the National Institute of Medical Sciences and Nutrition Salvador Zubiran in Mexico City, it proved to be an effective algorithm for issuing early warnings of severe respiratory disease epidemic outbreaks with a minimal rate of false alerts. PMID:24069063

  20. Minimalism and Beyond: Second Language Acquisition for the Twenty-First Century.

    ERIC Educational Resources Information Center

    Balcom, Patricia A.

    2001-01-01

    Provides a general overview of two books--"The Second Time Around: Minimalism and Second Language Acquisition" and "Second Language Syntax: A Generative Introduction"--and shows how they respond to key issues in second language acquisition, including the process of second language acquisition, access to universal grammar, the role of…
