Sample records for "require large numbers"

  1. TES Detector Noise Limited Readout Using SQUID Multiplexers

    NASA Technical Reports Server (NTRS)

    Staguhn, J. G.; Benford, D. J.; Chervenak, J. A.; Khan, S. A.; Moseley, S. H.; Shafer, R. A.; Deiker, S.; Grossman, E. N.; Hilton, G. C.; Irwin, K. D.

    2004-01-01

    The availability of superconducting Transition Edge Sensors (TES) with large numbers of individual detector pixels requires multiplexers for efficient readout. The use of multiplexers reduces the number of wires needed between the cryogenic electronics and the room temperature electronics and cuts the number of required cryogenic amplifiers. We are using an 8 channel SQUID multiplexer to read out one-dimensional TES arrays which are used for submillimeter astronomical observations. We present results from test measurements which show that the low noise level of the SQUID multiplexers allows accurate measurements of the TES Johnson noise, and that in operation, the readout noise is dominated by the detector noise. Multiplexers for large number of channels require a large bandwidth for the multiplexed readout signal. We discuss the resulting implications for the noise performance of these multiplexers which will be used for the readout of two dimensional TES arrays in next generation instruments.

  2. 77 FR 60133 - Agency Information Collection Activities: Deferral of Duty on Large Yachts Imported for Sale

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-02

    ... Activities: Deferral of Duty on Large Yachts Imported for Sale AGENCY: U.S. Customs and Border Protection... collection requirement concerning Deferral of Duty on Large Yachts Imported for Sale. This request for...: Title: Deferral of Duty on Large Yachts Imported for Sale. OMB Number: 1651-0080. Form Number: None...

  3. Massive data compression for parameter-dependent covariance matrices

    NASA Astrophysics Data System (ADS)

    Heavens, Alan F.; Sellentin, Elena; de Mijolla, Damien; Vianello, Alvise

    2017-12-01

    We show how the massive data compression algorithm MOPED can be used to reduce, by orders of magnitude, the number of simulated data sets required to estimate the covariance matrix needed for the analysis of Gaussian-distributed data. This is relevant when the covariance matrix cannot be calculated directly. The compression is especially valuable when the covariance matrix varies with the model parameters. In this case, it may be prohibitively expensive to run enough simulations to estimate the full covariance matrix throughout the parameter space. This compression may be particularly valuable for the next generation of weak lensing surveys, such as those proposed for Euclid and the Large Synoptic Survey Telescope, for which the number of summary data (such as band power or shear correlation estimates) is very large, ∼10^4, owing to the large number of tomographic redshift bins into which the data will be divided. In the pessimistic case where the covariance matrix is estimated separately for all points in a Monte Carlo Markov Chain analysis, this may require an unfeasible 10^9 simulations. We show here that MOPED can reduce this number by a factor of 1000, or a factor of ∼10^6 if some regularity in the covariance matrix is assumed, reducing the number of simulations required to a manageable 10^3 and making an otherwise intractable analysis feasible.
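
    The compression step lends itself to a short illustration. The numpy sketch below shows only the generic idea under assumed toy sizes (a hypothetical random compression matrix, not MOPED's weight vectors): each length-d simulated data vector is projected onto a few weight vectors, so the covariance to be estimated from simulations is p x p rather than d x d, and far fewer simulations suffice.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    d, p = 1000, 3        # raw summaries per data set vs. compressed coefficients
    n_sims = 50           # far fewer than the > d simulations a full d x d sample
                          # covariance would need just to be invertible

    # Hypothetical compression vectors (MOPED builds these from the model's
    # parameter derivatives and the covariance; here they are random stand-ins).
    B = rng.standard_normal((p, d))

    # Simulated data vectors drawn around a fiducial model (toy example).
    sims = rng.standard_normal((n_sims, d))

    # Compress each simulation: y_i = B x_i, length p instead of length d.
    compressed = sims @ B.T                        # shape (n_sims, p)

    # A p x p sample covariance is well conditioned with only ~tens of simulations.
    cov_small = np.cov(compressed, rowvar=False)   # shape (p, p)
    print(cov_small.shape)
    ```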

  4. Identifiability of conservative linear mechanical systems. [applied to large flexible spacecraft structures

    NASA Technical Reports Server (NTRS)

    Sirlin, S. W.; Longman, R. W.; Juang, J. N.

    1985-01-01

    With a sufficiently great number of sensors and actuators, any finite dimensional dynamic system is identifiable on the basis of input-output data. It is presently indicated that, for conservative nongyroscopic linear mechanical systems, the number of sensors and actuators required for identifiability is very large, where 'identifiability' is understood as a unique determination of the mass and stiffness matrices. The required number of sensors and actuators drops by a factor of two, given a relaxation of the identifiability criterion so that identification can fail only if the system parameters being identified lie in a set of measure zero. When the mass matrix is known a priori, this additional information does not significantly affect the requirements for guaranteed identifiability, though the number of parameters to be determined is reduced by a factor of two.

  5. Explicit solution techniques for impact with contact constraints

    NASA Technical Reports Server (NTRS)

    Mccarty, Robert E.

    1993-01-01

    Modern military aircraft transparency systems, windshields and canopies, are complex systems which must meet a large and rapidly growing number of requirements. Many of these transparency system requirements are conflicting, presenting difficult balances which must be achieved. One example of a challenging requirements balance or trade is shaping for stealth versus aircrew vision. The large number of requirements involved may be grouped in a variety of areas including man-machine interface; structural integration with the airframe; combat hazards; environmental exposures; and supportability. Some individual requirements by themselves pose very difficult, severely nonlinear analysis problems. One such complex problem is that associated with the dynamic structural response resulting from high energy bird impact. An improved analytical capability for soft-body impact simulation was developed.

  6. Explicit solution techniques for impact with contact constraints

    NASA Astrophysics Data System (ADS)

    McCarty, Robert E.

    1993-08-01

    Modern military aircraft transparency systems, windshields and canopies, are complex systems which must meet a large and rapidly growing number of requirements. Many of these transparency system requirements are conflicting, presenting difficult balances which must be achieved. One example of a challenging requirements balance or trade is shaping for stealth versus aircrew vision. The large number of requirements involved may be grouped in a variety of areas including man-machine interface; structural integration with the airframe; combat hazards; environmental exposures; and supportability. Some individual requirements by themselves pose very difficult, severely nonlinear analysis problems. One such complex problem is that associated with the dynamic structural response resulting from high energy bird impact. An improved analytical capability for soft-body impact simulation was developed.

  7. Electrofishing Effort Required to Estimate Biotic Condition in Southern Idaho Rivers

    EPA Science Inventory

    An important issue surrounding biomonitoring in large rivers is the minimum sampling effort required to collect an adequate number of fish for accurate and precise determinations of biotic condition. During the summer of 2002, we sampled 15 randomly selected large-river sites in...

  8. Evaluation of Genetic Algorithm Concepts using Model Problems. Part 1; Single-Objective Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A genetic-algorithm-based optimization approach is described and evaluated using a simple hill-climbing model problem. The model problem utilized herein allows for the broad specification of a large number of search spaces, including spaces with an arbitrary number of genes or decision variables and an arbitrary number of hills or modes. In the present study, only single-objective problems are considered. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all problems attempted. The most difficult problems - those with large hyper-volumes and multi-mode search spaces containing a large number of genes - require a large number of function evaluations for GA convergence, but they always converge.
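
    As a concrete illustration of the kind of search this abstract describes, the sketch below runs a minimal real-coded genetic algorithm on a toy two-gene landscape with several Gaussian "hills". The population size, mutation rate, and fitness function are assumptions for the example, not the authors' settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def fitness(x):
        # Toy multi-modal "hills" landscape: several Gaussian bumps in [0, 1]^2.
        centers = np.array([[0.2, 0.8], [0.7, 0.3], [0.5, 0.5]])
        return max(np.exp(-20 * np.sum((x - c) ** 2)) for c in centers)

    pop_size, n_genes, n_gen = 40, 2, 100
    pop = rng.random((pop_size, n_genes))

    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        # Binary tournament selection.
        idx = rng.integers(pop_size, size=(pop_size, 2))
        winners = np.where(scores[idx[:, 0]] > scores[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # Uniform crossover with a shuffled mate, then Gaussian mutation.
        mates = parents[rng.permutation(pop_size)]
        mask = rng.random(parents.shape) < 0.5
        children = np.where(mask, parents, mates)
        children += rng.normal(0.0, 0.02, children.shape)
        pop = np.clip(children, 0.0, 1.0)

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print("best individual:", best, "fitness:", fitness(best))
    ```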

  9. Using a novel flood prediction model and GIS automation to measure the valley and channel morphology of large river networks

    EPA Science Inventory

    Traditional methods for measuring river valley and channel morphology require intensive ground-based surveys which are often expensive, time consuming, and logistically difficult to implement. The number of surveys required to assess the hydrogeomorphic structure of large river n...

  10. Estimation of the rain signal in the presence of large surface clutter

    NASA Technical Reports Server (NTRS)

    Ahamad, Atiq; Moore, Richard K.

    1994-01-01

    The principal limitation for the use of a spaceborne imaging SAR as a rain radar is the surface-clutter problem. Signals may be estimated in the presence of noise by averaging large numbers of independent samples. This method was applied to obtain an estimate of the rain echo by averaging a set of N(sub c) samples of the clutter in a separate measurement and subtracting the clutter estimate from the combined estimate. The number of samples required for successful estimation (within 10-20%) for off-vertical angles of incidence appears to be prohibitively large. However, by appropriately degrading the resolution in both range and azimuth, the required number of samples can be obtained. For vertical incidence, the number of samples required for successful estimation is reasonable. In estimating the clutter it was assumed that the surface echo is the same outside the rain volume as it is within the rain volume. This may be true for the forest echo, but for convective storms over the ocean the surface echo outside the rain volume is very different from that within. It is suggested that the experiment be performed with vertical incidence over forest to overcome this limitation.
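
    The averaging-and-subtraction idea can be shown with a short simulation (the powers and sample counts below are illustrative, not the paper's radar parameters): the combined rain-plus-clutter power and the clutter-only power are each averaged over N independent samples, and the rain estimate is their difference, whose relative error shrinks roughly as 1/sqrt(N).

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def mean_power(true_power, n):
        # Average power of n independent exponentially distributed power samples.
        return rng.exponential(true_power, n).mean()

    rain, clutter = 1.0, 5.0   # illustrative powers (clutter dominates the rain echo)
    for n in (100, 10_000, 1_000_000):
        combined_hat = mean_power(rain + clutter, n)   # measured inside the rain cell
        clutter_hat = mean_power(clutter, n)           # measured outside the rain cell
        rain_hat = combined_hat - clutter_hat
        print(n, f"estimated rain power = {rain_hat:.3f} (true {rain})")
    ```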

  11. Program Design for Retrospective Searches on Large Data Bases

    ERIC Educational Resources Information Center

    Thiel, L. H.; Heaps, H. S.

    1972-01-01

    Retrospective search of large data bases requires development of special techniques for automatic compression of data and minimization of the number of input-output operations to the computer files. The computer program should require a relatively small amount of internal memory. This paper describes the structure of such a program. (9 references)…

  12. Large numbers hypothesis. II - Electromagnetic radiation

    NASA Technical Reports Server (NTRS)

    Adams, P. J.

    1983-01-01

    This paper develops the theory of electromagnetic radiation in the units-covariant formalism incorporating Dirac's large numbers hypothesis (LNH). A direct field-to-particle technique is used to obtain the photon propagation equation, which explicitly involves the photon replication rate. This replication rate is fixed uniquely by requiring that the form of a free-photon distribution function be preserved, as required by the 2.7 K cosmic radiation. One finds that with this particular photon replication rate the units-covariant formalism developed in Paper I actually predicts that the ratio of photon number to proton number in the universe varies as t^(1/4), precisely in accord with LNH. The cosmological redshift law is also derived and is shown to differ considerably from the standard form νR = const.

  13. Spiking neural network simulation: memory-optimal synaptic event scheduling.

    PubMed

    Stewart, Robert D; Gurney, Kevin N

    2011-06-01

    Spiking neural network simulations incorporating variable transmission delays require synaptic events to be scheduled prior to delivery. Conventional methods have memory requirements that scale with the total number of synapses in a network. We introduce novel scheduling algorithms for both discrete and continuous event delivery, where the memory requirement scales instead with the number of neurons. Superior algorithmic performance is demonstrated using large-scale, benchmarking network simulations.
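
    A minimal sketch of a scheduler whose memory scales with neurons rather than synapses is given below (an illustrative ring-buffer scheme under assumed names, not necessarily the paper's algorithm): each neuron owns a circular buffer with one slot per time step up to the maximum delay, and a presynaptic spike simply adds its weight into the slot that will be read "delay" steps later.

    ```python
    class DelayRingScheduler:
        """Toy synaptic-event scheduler: memory ~ (neurons x max_delay),
        not ~ (total synapses). Illustrative only."""

        def __init__(self, n_neurons, max_delay):
            self.max_delay = max_delay
            # One circular buffer of summed future input per neuron.
            self.buffers = [[0.0] * max_delay for _ in range(n_neurons)]

        def deliver_spike(self, targets, t):
            # targets: list of (post_neuron, weight, delay) for the spiking neuron.
            for post, weight, delay in targets:
                slot = (t + delay) % self.max_delay
                self.buffers[post][slot] += weight

        def collect_input(self, neuron, t):
            slot = t % self.max_delay
            value = self.buffers[neuron][slot]
            self.buffers[neuron][slot] = 0.0   # clear the slot after reading
            return value

    # Usage: neuron 0 spikes at t=3 onto neuron 1 with delay 2.
    sched = DelayRingScheduler(n_neurons=2, max_delay=8)
    sched.deliver_spike([(1, 0.5, 2)], t=3)
    print(sched.collect_input(1, t=5))   # 0.5 arrives at t = 3 + 2
    ```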

  14. Procedures and equipment for staining large numbers of plant root samples for endomycorrhizal assay.

    PubMed

    Kormanik, P P; Bryan, W C; Schultz, R C

    1980-04-01

    A simplified method of clearing and staining large numbers of plant roots for vesicular-arbuscular (VA) mycorrhizal assay is presented. Equipment needed for handling multiple samples is described, and two formulations for the different chemical solutions are presented. Because one formulation contains phenol, its use should be limited to basic studies for which adequate laboratory exhaust hoods are available and great clarity of fungal structures is required. The second staining formulation, utilizing lactic acid instead of phenol, is less toxic, requires less elaborate laboratory facilities, and has proven to be completely satisfactory for VA assays.

  15. A comparison between IMSC, PI and MIMSC methods in controlling the vibration of flexible systems

    NASA Technical Reports Server (NTRS)

    Baz, A.; Poh, S.

    1987-01-01

    A comparative study is presented of three active control algorithms which have proven successful in controlling the vibrations of large flexible systems: the Independent Modal Space Control (IMSC), the Pseudo-Inverse (PI), and the Modified Independent Modal Space Control (MIMSC). Emphasis is placed on demonstrating the effectiveness of the MIMSC method in controlling the vibration of large systems with a small number of actuators by using an efficient time-sharing strategy. Such a strategy favors the MIMSC over the IMSC method, which requires a large number of actuators to control an equal number of modes, and also over the PI method, which attempts to control a large number of modes with a smaller number of actuators through the use of an inexact statistical realization of a modal controller. Numerical examples are presented to illustrate the main features of the three algorithms and the merits of the MIMSC method.

  16. Finding Cardinality Heavy-Hitters in Massive Traffic Data and Its Application to Anomaly Detection

    NASA Astrophysics Data System (ADS)

    Ishibashi, Keisuke; Mori, Tatsuya; Kawahara, Ryoichi; Hirokawa, Yutaka; Kobayashi, Atsushi; Yamamoto, Kimihiro; Sakamoto, Hitoaki; Asano, Shoichiro

    We propose an algorithm for finding heavy hitters in terms of cardinality (the number of distinct items in a set) in massive traffic data using a small amount of memory. Examples of such cardinality heavy-hitters are hosts that send large numbers of flows, or hosts that communicate with large numbers of other hosts. Finding these hosts is crucial to the provision of good communication quality because they significantly affect the communications of other hosts via either malicious activities such as worm scans, spam distribution, or botnet control or normal activities such as being a member of a flash crowd or performing peer-to-peer (P2P) communication. To precisely determine the cardinality of a host we need tables of previously seen items for each host (e.g., flow tables for every host), and this may be infeasible for a high-speed environment with a massive amount of traffic. In this paper, we use a cardinality estimation algorithm that does not require these tables but needs only a small amount of information called the cardinality summary. This is made possible by relaxing the goal from exact counting to estimation of cardinality. In addition, we propose an algorithm that does not need to maintain the cardinality summary for each host, but only for partitioned addresses of a host. As a result, the required number of tables can be significantly decreased. We evaluated our algorithm using actual backbone traffic data to find the heavy-hitters in the number of flows and estimate the number of these flows. We found that while the accuracy degraded when estimating for hosts with few flows, the algorithm could accurately find the top-100 hosts in terms of the number of flows using a limited-sized memory. In addition, we found that the number of tables required to achieve a pre-defined accuracy increased logarithmically with respect to the total number of hosts, which indicates that our method is applicable to large traffic data for a very large number of hosts. We also introduce an application of our algorithm to anomaly detection. With actual traffic data, our method could successfully detect a sudden network scan.
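
    The idea of replacing per-host flow tables with a small fixed-size "cardinality summary" can be sketched with a simple Flajolet-Martin-style estimator (illustrative only; the paper's estimator and address-partitioning scheme differ): each source host keeps just the largest number of trailing zero bits seen in hashed peer identifiers, from which the number of distinct peers is estimated.

    ```python
    import hashlib

    class SmallCardinalitySummary:
        """Crude Flajolet-Martin-style distinct counter: keeps only the largest
        run of trailing zero bits seen, instead of a table of observed items."""

        def __init__(self):
            self.max_zeros = 0

        def add(self, item):
            h = int(hashlib.md5(item.encode()).hexdigest(), 16)
            zeros = (h & -h).bit_length() - 1 if h else 128   # trailing zero bits
            self.max_zeros = max(self.max_zeros, zeros)

        def estimate(self):
            return 2 ** self.max_zeros / 0.77351   # standard FM bias correction

    # One tiny summary per source host instead of one flow table per host.
    summaries = {}
    for src, dst in [("10.0.0.1", f"peer{i}") for i in range(5000)]:
        summaries.setdefault(src, SmallCardinalitySummary()).add(dst)
    print(int(summaries["10.0.0.1"].estimate()))   # rough distinct-peer count
    ```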

  17. FFTFIL; a filtering program based on two-dimensional Fourier analysis of geophysical data

    USGS Publications Warehouse

    Hildenbrand, T.G.

    1983-01-01

    The filtering program 'fftfil' performs a variety of operations commonly required in geophysical studies of gravity, magnetic, and terrain data. Filtering operations are carried out in the wave number domain, where the Fourier coefficients of the input data are multiplied by the response of the selected filter. Input grids can be large (2 ≤ number of rows or columns ≤ 1024) and are not required to have numbers of rows and columns equal to powers of two.
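
    The wavenumber-domain workflow described here is easy to sketch with numpy (an illustrative low-pass filter, not the fftfil program itself): take the 2-D FFT of the input grid, multiply the Fourier coefficients by the filter response, and transform back.

    ```python
    import numpy as np

    def wavenumber_filter(grid, dx, cutoff):
        """Low-pass filter a 2-D grid in the wavenumber domain (illustrative)."""
        ny, nx = grid.shape
        kx = np.fft.fftfreq(nx, d=dx)
        ky = np.fft.fftfreq(ny, d=dx)
        kxx, kyy = np.meshgrid(kx, ky)
        k = np.hypot(kxx, kyy)

        response = (k <= cutoff).astype(float)      # ideal low-pass filter response
        spectrum = np.fft.fft2(grid)                # Fourier coefficients of the input
        return np.real(np.fft.ifft2(spectrum * response))

    # Example: 256 x 256 gravity-like grid, keep wavelengths longer than 10 grid units.
    grid = np.random.default_rng(3).standard_normal((256, 256))
    filtered = wavenumber_filter(grid, dx=1.0, cutoff=1.0 / 10.0)
    print(filtered.shape)
    ```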

  18. Extended Deterrence and Allied Assurance: Key Concepts and Current Challenges for U.S. Policy

    DTIC Science & Technology

    2013-09-01

    include adversary nuclear forces and stockpiles) in the pre-satellite era required a strategy using large numbers of bombers, large numbers of...radar and sensor capabilities related to TMD, activities Canberra considered important to "bolstering the [U.S.-Australia] alliance." In...external attack; with potential adversaries developing anti-satellite capabilities, and conducting cyber incursions and attacks against U.S. and

  19. For Mole Problems, Call Avogadro: 602-1023.

    ERIC Educational Resources Information Center

    Uthe, R. E.

    2002-01-01

    Describes techniques to help introductory students become familiar with Avogadro's number and mole calculations. Techniques involve estimating numbers of common objects and then calculating the length of time needed to count large numbers of them. For example, the immense amount of time required to count a mole of sand grains at one grain per second…

  20. Improved microseismic event locations through large-N arrays and wave-equation imaging and inversion

    NASA Astrophysics Data System (ADS)

    Witten, B.; Shragge, J. C.

    2016-12-01

    The recent increased focus on small-scale seismicity, Mw < 4, has come about primarily for two reasons. First, there is an increase in induced seismicity related to injection operations, primarily wastewater disposal and hydraulic fracturing for oil and gas recovery and for geothermal energy production. While the seismicity associated with injection is sometimes felt, it is more often weak. Some weak events are detected on current sparse arrays; however, accurate location of the events often requires a larger number of (multi-component) sensors. This leads to the second reason for an increased focus on small-magnitude seismicity: a greater number of seismometers are being deployed in large-N arrays. The greater number of sensors decreases the detection threshold and therefore significantly increases the number of weak events found. Overall, these two factors bring new challenges and opportunities. Many standard seismological location and inversion techniques are geared toward large, easily identifiable events recorded on a sparse number of stations. However, with large-N arrays we can detect small events by utilizing multi-trace processing techniques, and increased processing power equips us with tools that employ more complete physics for simultaneously locating events and inverting for P- and S-wave velocity structure. We present a method that uses large-N arrays and wave-equation-based imaging and inversion to jointly locate earthquakes and estimate the elastic velocities of the earth. The technique requires no picking and is thus suitable for weak events. We validate the methodology through synthetic and field data examples.

  1. Gas-Centered Swirl Coaxial Liquid Injector Evaluations

    NASA Technical Reports Server (NTRS)

    Cohn, A. K.; Strakey, P. A.; Talley, D. G.

    2005-01-01

    Development of Liquid Rocket Engines is expensive, and extensive testing at large scales is usually required. In order to verify engine lifetime, a large number of tests is required, yet limited resources are available for development. Sub-scale cold-flow and hot-fire testing is extremely cost effective; it could be a necessary (but not sufficient) condition for long engine lifetime, and it reduces the overall costs and risk of large-scale testing. Goal: determine what knowledge can be gained from sub-scale cold-flow and hot-fire evaluations of LRE injectors, and determine relationships between cold-flow and hot-fire data.

  2. High Throughput Screening of Toxicity Pathways Perturbed by Environmental Chemicals

    EPA Science Inventory

    Toxicology, a field largely unchanged over the past several decades, is undergoing a significant transformation driven by a number of forces – the increasing number of chemicals needing assessment, changing legal requirements, advances in biology and computer science, and concern...

  3. The Earth Phenomena Observing System: Intelligent Autonomy for Satellite Operations

    NASA Technical Reports Server (NTRS)

    Ricard, Michael; Abramson, Mark; Carter, David; Kolitz, Stephan

    2003-01-01

    Earth monitoring systems of the future may include large numbers of inexpensive small satellites, tasked in a coordinated fashion to observe both long term and transient targets. For best performance, a tool which helps operators optimally assign targets to satellites will be required. We present the design of algorithms developed for real-time optimized autonomous planning of large numbers of small single-sensor Earth observation satellites. The algorithms will reduce requirements on the human operators of such a system of satellites, ensure good utilization of system resources, and provide the capability to dynamically respond to temporal terrestrial phenomena. Our initial real-time system model consists of approximately 100 satellites and a large number of points of interest on Earth (e.g., hurricanes, volcanoes, and forest fires), with the objective of maximizing the total science value of observations over time. Several options for calculating the science value of observations include the following: 1) total observation time, 2) number of observations, and 3) the quality of the observations (a function of, e.g., sensor type, range, and slant angle). An integrated approach using integer programming, optimization, and astrodynamics is used to calculate optimized observation and sensor tasking plans.
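
    A toy version of the target-assignment step can be written with an off-the-shelf assignment solver; the real planner described here uses integer programming with astrodynamic visibility constraints, and the value matrix below is purely hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(7)

    n_satellites, n_targets = 5, 12
    # Hypothetical science value of satellite i observing target j this planning cycle
    # (in the real system this would fold in sensor type, range, slant angle, etc.).
    value = rng.random((n_satellites, n_targets))

    # Hungarian algorithm: maximize total value with one observation per satellite.
    rows, cols = linear_sum_assignment(value, maximize=True)
    for sat, tgt in zip(rows, cols):
        print(f"satellite {sat} -> target {tgt}  (value {value[sat, tgt]:.2f})")
    print("total science value:", round(float(value[rows, cols].sum()), 2))
    ```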

  4. Identification of linearised RMS-voltage dip patterns based on clustering in renewable plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    García-Sánchez, Tania; Gómez-Lázaro, Emilio; Muljadi, Edward

    Generation units connected to the grid are currently required to meet low-voltage ride-through (LVRT) requirements. In most developed countries, these requirements also apply to renewable sources, mainly wind power plants and photovoltaic installations connected to the grid. This study proposes an alternative characterisation solution to classify and visualise a large number of collected events in light of current limits and requirements. The authors' approach is based on linearised root-mean-square (RMS) voltage trajectories, taking into account LVRT requirements, and a clustering process to identify the most likely pattern trajectories. The proposed solution gives extensive information on an event's severity by providing a simple but complete visualisation of the linearised RMS-voltage patterns. In addition, these patterns are compared to current LVRT requirements to determine similarities or discrepancies. A large number of collected events can then be automatically classified and visualised for comparative purposes. Real disturbances collected from renewable sources in Spain are used to assess the proposed solution. Extensive results and discussions are also included in this study.

  5. Children's Intuitive Sense of Number Develops Independently of Their Perception of Area, Density, Length, and Time

    ERIC Educational Resources Information Center

    Odic, Darko

    2018-01-01

    Young children can quickly and intuitively represent the number of objects in a visual scene through the Approximate Number System (ANS). The precision of the ANS--indexed as the most difficult ratio of two numbers that children can reliably discriminate--is well known to improve with development: whereas infants require relatively large ratios to…

  6. Post-Attack Economic Stabilization Issues for Federal, State, and Local Governments

    DTIC Science & Technology

    1985-02-01

    workers being transferred from large urban areas to production facilities in areas of lower risk. In another case, rent control staff should be quickly...food supermarkets, which do not universally accept bank cards. A requirement will still exist for a large number of credit cards. While there is some...separate system is required for rationing. For example, the increasingly popular automatic teller machine (ATM) debit card routinely accesses both a

  7. Optimizing the scale of markets for water quality trading

    NASA Astrophysics Data System (ADS)

    Doyle, Martin W.; Patterson, Lauren A.; Chen, Yanyou; Schnier, Kurt E.; Yates, Andrew J.

    2014-09-01

    Applying market approaches to environmental regulations requires establishing a spatial scale for trading. Spatially large markets usually increase opportunities for abatement cost savings but increase the potential for pollution damages (hot spots), vice versa for spatially small markets. We develop a coupled hydrologic-economic modeling approach for application to point source emissions trading by a large number of sources and apply this approach to the wastewater treatment plants (WWTPs) within the watershed of the second largest estuary in the U.S. We consider two different administrative structures that govern the trade of emission permits: one-for-one trading (the number of permits required for each unit of emission is the same for every WWTP) and trading ratios (the number of permits required for each unit of emissions varies across WWTP). Results show that water quality regulators should allow trading to occur at the river basin scale as an appropriate first-step policy, as is being done in a limited number of cases via compliance associations. Larger spatial scales may be needed under conditions of increased abatement costs. The optimal scale of the market is generally the same regardless of whether one-for-one trading or trading ratios are employed.

  8. Theory and computation of optimal low- and medium-thrust transfers

    NASA Technical Reports Server (NTRS)

    Chuang, C.-H.

    1994-01-01

    This report presents two numerical methods considered for the computation of fuel-optimal, low-thrust orbit transfers in large numbers of burns. The origins of these methods are observations made with the extremal solutions of transfers in small numbers of burns; there seems to be a trend that the longer the time allowed to perform an optimal transfer, the less fuel is used. These longer transfers are obviously of interest since they require a motor of low thrust; however, we also find a trend that the longer the time allowed to perform the optimal transfer, the more burns are required to satisfy optimality. Unfortunately, this usually increases the difficulty of computation. Both of the methods described use solutions with small numbers of burns to determine solutions in large numbers of burns. One method is a homotopy method that corrects for problems that arise when a solution requires a new burn or coast arc for optimality. The other method is to simply patch together long transfers from smaller ones. An orbit correction problem is solved to develop this method. This method may also lead to a good guidance law for transfer orbits with long transfer times.

  9. The Unique Challenges of Conserving Large Old Trees.

    PubMed

    Lindenmayer, David B; Laurance, William F

    2016-06-01

    Large old trees play numerous critical ecological roles. They are susceptible to a plethora of interacting threats, in part because the attributes that confer a competitive advantage in intact ecosystems make them maladapted to rapidly changing, human-modified environments. Conserving large old trees will require surmounting a number of unresolved challenges. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. The decoding of majority-multiplexed signals by means of dyadic convolution

    NASA Astrophysics Data System (ADS)

    Losev, V. V.

    1980-09-01

    The maximum likelihood method often cannot be used for the decoding of majority-multiplexed signals because of the large number of computations required. This paper describes a fast dyadic convolution transform which can be used to reduce the number of computations.
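
    The speed-up comes from the dyadic (XOR) convolution theorem: transforming both sequences with a fast Walsh-Hadamard transform, multiplying pointwise, and transforming back costs on the order of n log n operations instead of the n^2 of direct convolution. A minimal sketch of that transform (not Losev's decoder itself):

    ```python
    import numpy as np

    def fwht(a):
        """Fast Walsh-Hadamard transform of a length-2^m sequence."""
        a = np.array(a, dtype=float)
        h = 1
        while h < len(a):
            for i in range(0, len(a), 2 * h):
                x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
                a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
            h *= 2
        return a

    def dyadic_convolution(f, g):
        """(f * g)[k] = sum_j f[j] * g[j XOR k], computed via FWHT."""
        n = len(f)
        return fwht(fwht(f) * fwht(g)) / n

    f = [1.0, 0.0, 2.0, 1.0]
    g = [0.5, 1.0, 0.0, 1.0]
    print(dyadic_convolution(f, g))
    # Direct check of the same convolution, term by term:
    print([sum(f[j] * g[j ^ k] for j in range(4)) for k in range(4)])
    ```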

  11. Artificial diet optimized to produce normative adults of Diaprepes abbreviatus (Coleoptera: Curculionidae)

    USDA-ARS?s Scientific Manuscript database

    Insect diets are often complex mixtures of vitamins, salts, preservatives, and nutrients (carbohydrates, lipids and proteins). To determine the effect of varying the doses of multiple components, the traditional approach requires large factorial experiments resulting in very large numbers of treat...

  12. Reaction factoring and bipartite update graphs accelerate the Gillespie Algorithm for large-scale biochemical systems.

    PubMed

    Indurkhya, Sagar; Beal, Jacob

    2010-01-06

    ODE simulations of chemical systems perform poorly when some of the species have extremely low concentrations. Stochastic simulation methods, which can handle this case, have been impractical for large systems due to computational complexity. We observe, however, that when modeling complex biological systems: (1) a small number of reactions tend to occur a disproportionately large percentage of the time, and (2) a small number of species tend to participate in a disproportionately large percentage of reactions. We exploit these properties in LOLCAT Method, a new implementation of the Gillespie Algorithm. First, factoring reaction propensities allows many propensities dependent on a single species to be updated in a single operation. Second, representing dependencies between reactions with a bipartite graph of reactions and species requires storage that scales with the numbers of reactions and species, rather than the larger storage required for a graph that includes only reactions. Together, these improvements allow our implementation of LOLCAT Method to execute orders of magnitude faster than currently existing Gillespie Algorithm variants when simulating several yeast MAPK cascade models.
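
    The dependency-graph idea is easy to sketch in a minimal direct-method Gillespie simulation: after a reaction fires, only the propensities of reactions that consume one of the changed species are recomputed. The toy system and mass-action propensities below are illustrative; LOLCAT Method adds propensity factoring and other optimizations on top of this basic structure.

    ```python
    import random
    from collections import defaultdict

    # Toy system: A + B -> C (k = 1.0), C -> A + B (k = 0.5)
    reactions = [
        {"reactants": {"A": 1, "B": 1}, "products": {"C": 1}, "k": 1.0},
        {"reactants": {"C": 1}, "products": {"A": 1, "B": 1}, "k": 0.5},
    ]
    state = {"A": 100, "B": 80, "C": 0}

    # Species -> reactions whose propensity depends on it (bipartite-style map).
    depends_on = defaultdict(set)
    for r_idx, r in enumerate(reactions):
        for s in r["reactants"]:
            depends_on[s].add(r_idx)

    def propensity(r):
        a = r["k"]
        for s, n in r["reactants"].items():
            a *= state[s] ** n          # simplified mass-action (toy example)
        return a

    props = [propensity(r) for r in reactions]
    t, t_end = 0.0, 1.0
    while t < t_end:
        total = sum(props)
        if total == 0:
            break
        t += random.expovariate(total)          # time to the next event
        u, acc, chosen = random.random() * total, 0.0, 0
        for i, a in enumerate(props):           # pick a reaction ~ its propensity
            acc += a
            if u <= acc:
                chosen = i
                break
        r = reactions[chosen]
        touched = set()
        for s, n in r["reactants"].items():
            state[s] -= n
            touched.add(s)
        for s, n in r["products"].items():
            state[s] = state.get(s, 0) + n
            touched.add(s)
        # Recompute only the propensities that depend on the changed species.
        for i in {j for s in touched for j in depends_on[s]}:
            props[i] = propensity(reactions[i])

    print(t, state)
    ```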

  13. Reaction Factoring and Bipartite Update Graphs Accelerate the Gillespie Algorithm for Large-Scale Biochemical Systems

    PubMed Central

    Indurkhya, Sagar; Beal, Jacob

    2010-01-01

    ODE simulations of chemical systems perform poorly when some of the species have extremely low concentrations. Stochastic simulation methods, which can handle this case, have been impractical for large systems due to computational complexity. We observe, however, that when modeling complex biological systems: (1) a small number of reactions tend to occur a disproportionately large percentage of the time, and (2) a small number of species tend to participate in a disproportionately large percentage of reactions. We exploit these properties in LOLCAT Method, a new implementation of the Gillespie Algorithm. First, factoring reaction propensities allows many propensities dependent on a single species to be updated in a single operation. Second, representing dependencies between reactions with a bipartite graph of reactions and species requires storage that scales with the numbers of reactions and species, rather than the larger storage required for a graph that includes only reactions. Together, these improvements allow our implementation of LOLCAT Method to execute orders of magnitude faster than currently existing Gillespie Algorithm variants when simulating several yeast MAPK cascade models. PMID:20066048

  14. Shor's factoring algorithm and modern cryptography. An illustration of the capabilities inherent in quantum computers

    NASA Astrophysics Data System (ADS)

    Gerjuoy, Edward

    2005-06-01

    The security of messages encoded via the widely used RSA public key encryption system rests on the enormous computational effort required to find the prime factors of a large number N using classical (conventional) computers. In 1994 Peter Shor showed that for sufficiently large N, a quantum computer could perform the factoring with much less computational effort. This paper endeavors to explain, in a fashion comprehensible to the nonexpert, the RSA encryption protocol; the various quantum computer manipulations constituting the Shor algorithm; how the Shor algorithm performs the factoring; and the precise sense in which a quantum computer employing Shor's algorithm can be said to accomplish the factoring of very large numbers with less computational effort than a classical computer. It is made apparent that factoring N generally requires many successive runs of the algorithm. Our analysis reveals that the probability of achieving a successful factorization on a single run is about twice as large as commonly quoted in the literature.
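
    Only the period-finding step requires a quantum computer; the rest is classical number theory. The sketch below illustrates that classical post-processing for a small N, with the quantum step replaced by a brute-force order computation: given the period r of a^x mod N, the factors are gcd(a^{r/2} ± 1, N) whenever r is even and a^{r/2} is not congruent to -1 mod N.

    ```python
    from math import gcd

    def order(a, n):
        # Classical (exponential-time) stand-in for the quantum period-finding step.
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def shor_classical_postprocess(n, a):
        if gcd(a, n) != 1:
            return gcd(a, n), n // gcd(a, n)      # lucky base already shares a factor
        r = order(a, n)
        if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
            return None                           # unlucky base: rerun with another a
        p = gcd(pow(a, r // 2) - 1, n)
        q = gcd(pow(a, r // 2) + 1, n)
        return p, q

    print(shor_classical_postprocess(15, 7))   # (3, 5)
    ```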

  15. Building a Web-based drug ordering system for hospitals: from requirements engineering to prototyping.

    PubMed

    Hübner, U; Klein, F; Hofstetter, J; Kammeyer, G; Seete, H

    2000-01-01

    Web-based drug ordering allows a growing number of hospitals without a pharmacy to communicate seamlessly with their external pharmacy. Business process analysis and object-oriented modelling performed together with the users at a pilot hospital resulted in a comprehensive picture of the user and business requirements for electronic drug ordering. The user requirements were further validated with the help of a software prototype. In order to capture the needs of a large number of users, CAP10, a new method making use of pre-built models, is proposed. Solutions for coping with the technical requirements (interfacing the business software at the pharmacy) and with the legal requirements (signing the orders) are presented.

  16. Discriminative Hierarchical K-Means Tree for Large-Scale Image Classification.

    PubMed

    Chen, Shizhi; Yang, Xiaodong; Tian, Yingli

    2015-09-01

    A key challenge in large-scale image classification is how to achieve efficiency in terms of both computation and memory without compromising classification accuracy. The learning-based classifiers achieve the state-of-the-art accuracies, but have been criticized for the computational complexity that grows linearly with the number of classes. The nonparametric nearest neighbor (NN)-based classifiers naturally handle large numbers of categories, but incur prohibitively expensive computation and memory costs. In this brief, we present a novel classification scheme, i.e., discriminative hierarchical K-means tree (D-HKTree), which combines the advantages of both learning-based and NN-based classifiers. The complexity of the D-HKTree only grows sublinearly with the number of categories, which is much better than the recent hierarchical support vector machines-based methods. The memory requirement is an order of magnitude less than the recent Naïve Bayesian NN-based approaches. The proposed D-HKTree classification scheme is evaluated on several challenging benchmark databases and achieves the state-of-the-art accuracies, while with significantly lower computation cost and memory requirement.

  17. Numerical Schemes for Dynamically Orthogonal Equations of Stochastic Fluid and Ocean Flows

    DTIC Science & Technology

    2011-11-03

    stages of the simulation (see §5.1). Also, because the pdf is discrete, we calculate the moments using the biased estimator C_{YiYj} ≈ (1/q) Σ_r Y_{r,i} Y_{r,j}...independent random variables. For problems that require large p (e.g., non-Gaussian) and large s (e.g., large ocean or fluid simulations), the number of...Sc = ν̂/K̂ is the Schmidt number, which is the ratio of kinematic viscosity ν̂ to molecular diffusivity K̂ for the density field, ĝ′ = ĝ(ρ̂_max − ρ̂_min

  18. Incremental wind tunnel testing of high lift systems

    NASA Astrophysics Data System (ADS)

    Victor, Pricop Mihai; Mircea, Boscoianu; Daniel-Eugeniu, Crunteanu

    2016-06-01

    Efficiency of trailing-edge high-lift systems is essential for long-range future transport aircraft evolving in the direction of laminar wings, because they have to compensate for the low performance of the leading-edge devices. Modern high-lift systems are subject to high performance requirements and constrained to simple actuation, combined with a reduced number of aerodynamic elements. Passive or active flow control is thus required for performance enhancement. An experimental investigation of a reduced-kinematics flap combined with passive flow control took place in a low-speed wind tunnel. The most important features of the experimental setup are the relatively large size, corresponding to a Reynolds number of about 2 million, the sweep angle of 30 degrees, corresponding to long-range airliners with high-sweep wings, and the large number of flap settings and mechanical vortex generators. The model description, flap settings, methodology and results are presented.

  19. A Weight Comparison of Several Attitude Controls for Satellites

    NASA Technical Reports Server (NTRS)

    Adams, James J.; Chilton, Robert G.

    1959-01-01

    A brief theoretical study has been made for the purpose of estimating and comparing the weight of three different types of controls that can be used to change the attitude of a satellite. The three types of controls are jet reaction, inertia wheel, and a magnetic bar which interacts with the magnetic field of the earth. An idealized task which imposed severe requirements on the angular motion of the satellite was used as the basis for comparison. The results showed that a control for one axis can be devised which will weigh less than 1 percent of the total weight of the satellite. The inertia-wheel system offers weight-saving possibilities if a large number of cycles of operation are required, whereas the jet system would be preferred if a limited number of cycles are required. The magnetic-bar control requires such a large magnet that it is impractical for the example application but might be of value for supplying small trimming moments about certain axes.

  20. Optimal diet for production of normative adults of the Diaprepes root weevil, Diaprepes abbreviatus

    USDA-ARS?s Scientific Manuscript database

    Insect diets are complex mixtures of vitamins, salts, preservatives, and nutrients (carbohydrates, lipids and proteins). To determine the effect of varying the doses of multiple components, the traditional approach requires large factorial experiments resulting in very large numbers of treatment com...

  1. Medical Logistics Functional Integration Management To-Be Modeling Workshop: Improving Today For a Better Tomorrow

    DTIC Science & Technology

    1993-06-18

    A unique identifying number assigned by the contracting officer that is a binding agreement between the Government and a Vendor. quantity-of-beds The...repair it; maintenance contracts may be costly. Barriers to Implementation: requires a large amount of funding to link a significant number of ...and follow-on requirements for maintenance, training, and installation. 22. Cross Sharing of Standard Contract Shells: Local activities

  2. Crew size affects fire fighting efficiency: A progress report on time studies of the fire fighting job.

    Treesearch

    Donald N. Matthews

    1940-01-01

    Fire fighting is still largely a hand-work job in the heavy cover and fuel conditions and rugged topography of the Douglas fir region, in spite of recent advances that have been made in the use of machinery. Controlling a fire in this region requires immense amounts of work per unit of fire perimeter, so that large numbers of men are required to attack all but the...

  3. Ascertaining Validity in the Abstract Realm of PMESII Simulation Models: An Analysis of the Peace Support Operations Model (PSOM)

    DTIC Science & Technology

    2009-06-01

    simulation is the campaign-level Peace Support Operations Model (PSOM). This thesis provides a quantitative analysis of PSOM. The results are based ...multiple potential outcomes; further development and analysis is required before the model is used for large scale analysis.

  4. Optical CDMA components requirements

    NASA Astrophysics Data System (ADS)

    Chan, James K.

    1998-08-01

    Optical CDMA is a complementary multiple access technology to WDMA. Optical CDMA potentially provides a large number of virtual optical channels for IXC, LEC and CLEC or supports a large number of high-speed users in LAN. In a network, it provides asynchronous, multi-rate, multi-user communication with network scalability, re-configurability (bandwidth on demand), and network security (provided by inherent CDMA coding). However, optical CDMA technology is less mature in comparison to WDMA. The components requirements are also different from WDMA. We have demonstrated a video transport/switching system over a distance of 40 km using discrete optical components in our laboratory. We are currently pursuing PIC implementation. In this paper, we will describe the optical CDMA concept/features, the demonstration system, and the requirements of some critical optical components such as broadband optical source, broadband optical amplifier, spectral spreading/de-spreading, and fixed/programmable mask.

  5. Two proposed convergence criteria for Monte Carlo solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Pederson, S.P.; Booth, T.E.

    1992-01-01

    The central limit theorem (CLT) can be applied to a Monte Carlo solution if two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these two conditions are satisfied, a confidence interval (CI) based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by the knowledge of the Monte Carlo tally being used. The Monte Carlo practitioner has a limited number of marginal methods to assess the fulfillment of the second requirement, such as statistical error reduction proportional to 1/√N with error magnitude guidelines. Two proposed methods are discussed in this paper to assist in deciding if N is large enough: estimating the relative variance of the variance (VOV) and examining the empirical history score probability density function (pdf).
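
    Both checks mentioned here are easy to demonstrate numerically. The sketch below uses a deliberately skewed toy tally: the relative error of the mean should fall roughly as 1/√N, and a variance-of-the-variance style diagnostic (written here in a simplified illustrative form, not the exact estimator of the paper) should also shrink as N grows.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    # A toy Monte Carlo tally: heavily skewed scores, the hard case for the CLT.
    scores = rng.lognormal(mean=0.0, sigma=2.0, size=1_000_000)

    for n in (10**3, 10**4, 10**5, 10**6):
        x = scores[:n]
        mean = x.mean()
        rel_err = x.std(ddof=1) / np.sqrt(n) / mean      # expect roughly ~ 1/sqrt(N)
        centered = x - mean
        var = centered.var(ddof=1)
        # Relative variance of the variance (simplified illustrative form).
        vov = (np.sum(centered**4) / n - var**2) / (n * var**2)
        print(f"N={n:>8}  rel_err={rel_err:.4f}  VOV={vov:.2e}")
    ```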

  6. Portraiture lens concept in a mobile phone camera

    NASA Astrophysics Data System (ADS)

    Sheil, Conor J.; Goncharov, Alexander V.

    2017-11-01

    A small form-factor lens was designed for the purpose of portraiture photography, the size of which allows use within smartphone casing. The current general requirement of mobile cameras having good all-round performance results in a typical, familiar, many-element design. Such designs have little room for improvement, in terms of the available degrees of freedom and highly-demanding target metrics such as low f-number and wide field of view. However, the specific application of the current portraiture lens relaxed the requirement of an all-round high-performing lens, allowing improvement of certain aspects at the expense of others. With a main emphasis on reducing depth of field (DoF), the current design takes advantage of the simple geometrical relationship between DoF and pupil diameter. The system has a large aperture, while a reasonable f-number gives a relatively large focal length, requiring a catadioptric lens design with double ray path; hence, field of view is reduced. Compared to typical mobile lenses, the large diameter reduces depth of field by a factor of four.

  7. Size Reduction of Hamiltonian Matrix for Large-Scale Energy Band Calculations Using Plane Wave Bases

    NASA Astrophysics Data System (ADS)

    Morifuji, Masato

    2018-01-01

    We present a method of reducing the size of a Hamiltonian matrix used in calculations of electronic states. In the electronic states calculations using plane wave basis functions, a large number of plane waves are often required to obtain precise results. Even using state-of-the-art techniques, the Hamiltonian matrix often becomes very large. The large computational time and memory necessary for diagonalization limit the widespread use of band calculations. We show a procedure of deriving a reduced Hamiltonian constructed using a small number of low-energy bases by renormalizing high-energy bases. We demonstrate numerically that the significant speedup of eigenstates evaluation is achieved without losing accuracy.
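
    The renormalization of high-energy bases into a smaller effective Hamiltonian can be illustrated with generic Loewdin-style downfolding at a reference energy, H_eff(E) = H_LL + H_LH (E - H_HH)^(-1) H_HL. This is a standard construction used here as a stand-in for the paper's specific reduction scheme; all matrix sizes and couplings below are toy values.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Toy Hermitian "Hamiltonian"; high-index basis states lie high in energy.
    n_full, n_low = 200, 20
    V = 0.1 * rng.standard_normal((n_full, n_full))
    H = (V + V.T) / 2 + np.diag(np.linspace(0.0, 50.0, n_full))

    low, high = slice(0, n_low), slice(n_low, n_full)
    E_ref = 0.0   # reference energy near the low-lying states of interest

    # Loewdin downfolding: H_eff(E) = H_LL + H_LH (E - H_HH)^(-1) H_HL
    H_eff = H[low, low] + H[low, high] @ np.linalg.solve(
        E_ref * np.eye(n_full - n_low) - H[high, high], H[high, low])

    print(np.round(np.linalg.eigvalsh(H)[:5], 3))       # lowest eigenvalues, full matrix
    print(np.round(np.linalg.eigvalsh(H_eff)[:5], 3))   # close, from a 20 x 20 matrix
    ```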

  8. Damage identification using inverse methods.

    PubMed

    Friswell, Michael I

    2007-02-15

    This paper gives an overview of the use of inverse methods in damage detection and location, using measured vibration data. Inverse problems require the use of a model and the identification of uncertain parameters of this model. Damage is often local in nature and although the effect of the loss of stiffness may require only a small number of parameters, the lack of knowledge of the location means that a large number of candidate parameters must be included. This paper discusses a number of problems that exist with this approach to health monitoring, including modelling error, environmental effects, damage localization and regularization.
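
    A common concrete form of such an inverse problem is a regularised sensitivity update: a few measured modal changes are fit to many candidate stiffness-change parameters, with simple Tikhonov regularisation standing in for the more sophisticated regularisation and localisation methods the paper discusses. The sensitivity matrix and seeded damage site below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    n_modes, n_params = 10, 40      # few measurements, many candidate damage sites
    S = rng.standard_normal((n_modes, n_params))    # sensitivity of modal data to parameters
    theta_true = np.zeros(n_params)
    theta_true[12] = -0.05                          # hypothetical 5% stiffness loss at site 12

    residual = S @ theta_true + 1e-4 * rng.standard_normal(n_modes)   # "measured" changes

    # Tikhonov-regularised estimate: theta = (S^T S + lam I)^(-1) S^T r
    lam = 1e-2
    theta = np.linalg.solve(S.T @ S + lam * np.eye(n_params), S.T @ residual)

    ranked = np.argsort(-np.abs(theta))
    print("candidate damage sites, most to least indicated:", ranked[:5])
    # The seeded site (index 12) should typically appear at or near the top of this list.
    ```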

  9. D-OPTIMAL EXPERIMENTAL DESIGNS TO TEST FOR DEPARTURE FROM ADDITIVITY IN A FIXED-RATIO MIXTURE RAY.

    EPA Science Inventory

    Humans are exposed to mixtures of environmental compounds. A regulatory assumption is that the mixtures of chemicals act in an additive manner. However, this assumption requires experimental validation. Traditional experimental designs (full factorial) require a large number of e...

  10. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
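
    The problem setting can be sketched as non-negative least squares solved for many observation vectors against one matrix. The naive loop below solves each column independently; the combinatorial algorithm described in the patent instead groups columns that end up with the same active-constraint set so the expensive factorisations are shared (that grouping is only described, not implemented, here).

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(8)
    A = np.abs(rng.standard_normal((50, 4)))            # mixing matrix
    X_true = np.abs(rng.standard_normal((4, 1000)))     # many observation vectors
    B = A @ X_true + 0.01 * rng.standard_normal((50, 1000))

    # Naive approach: one independent NNLS solve per observation vector.
    X = np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])
    print(np.abs(X - X_true).mean())
    ```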

  11. Imputation of unordered markers and the impact on genomic selection accuracy

    USDA-ARS?s Scientific Manuscript database

    Genomic selection, a breeding method that promises to accelerate rates of genetic gain, requires dense, genome-wide marker data. Genotyping-by-sequencing can generate a large number of de novo markers. However, without a reference genome, these markers are unordered and typically have a large propo...

  12. Assuring Quality in Large-Scale Online Course Development

    ERIC Educational Resources Information Center

    Parscal, Tina; Riemer, Deborah

    2010-01-01

    Student demand for online education requires colleges and universities to rapidly expand the number of courses and programs offered online while maintaining high quality. This paper outlines two universities respective processes to assure quality in large-scale online programs that integrate instructional design, eBook custom publishing, Quality…

  13. Reducing work zone crashes by using vehicle's flashers as a warning sign : final report

    DOT National Transportation Integrated Search

    2009-01-01

    Rural two-lane highways constitute a large percentage of the highway system in Kansas. Preserving, expanding, and enhancing these highways require the set-up of a large number of one-lane, two-way work zones where traffic safety has been a severe...

  14. 75 FR 70604 - Wireless E911 Location Accuracy Requirements

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-18

    ... carriers are unable to recover the substantial cost of constructing a large number of additional cell sites... characteristics, cell site density, overall system technology requirements, etc.) while, in either case, ensuring... the satellites and the handset. The more extensive the tree cover, the greater the difficulty the...

  15. Mean-field dynamo in a turbulence with shear and kinetic helicity fluctuations.

    PubMed

    Kleeorin, Nathan; Rogachevskii, Igor

    2008-03-01

    We study the effects of kinetic helicity fluctuations in a turbulence with large-scale shear using two different approaches: the spectral tau approximation and the second-order correlation approximation (or first-order smoothing approximation). These two approaches demonstrate that homogeneous kinetic helicity fluctuations alone with zero mean value in a sheared homogeneous turbulence cannot cause a large-scale dynamo. A mean-field dynamo is possible when the kinetic helicity fluctuations are inhomogeneous, which causes a nonzero mean alpha effect in a sheared turbulence. On the other hand, the shear-current effect can generate a large-scale magnetic field even in a homogeneous nonhelical turbulence with large-scale shear. This effect was investigated previously for large hydrodynamic and magnetic Reynolds numbers. In this study we examine the threshold required for the shear-current dynamo versus Reynolds number. We demonstrate that there is no need for a developed inertial range in order to maintain the shear-current dynamo (e.g., the threshold in the Reynolds number is of the order of 1).

  16. Bridge Programs in Illinois: Summaries, Outcomes, and Cross-Site Findings

    ERIC Educational Resources Information Center

    Bragg, D.; Harmon, T.; Kirby, C.; Kim, S.

    2010-01-01

    An increasing number of jobs in today's workforce require postsecondary education, yet large numbers of workers lack the essential skills and credentials to fill these jobs. The result is that many workers remain underemployed, reaching a ceiling early in their working careers. In 2007, the Joyce Foundation launched the Shifting Gears initiative…

  17. A Treatment of Computational Precision, Number Representation, and Large Integers in an Introductory Fortran Course

    ERIC Educational Resources Information Center

    Richardson, William H., Jr.

    2006-01-01

    Computational precision is sometimes given short shrift in a first programming course. Treating this topic requires discussing integer and floating-point number representations and inaccuracies that may result from their use. An example of a moderately simple programming problem from elementary statistics was examined. It forced students to…

  18. The Environment for Professional Interaction and Relevant Practical Experience in AACSB-Accredited Accounting Programs.

    ERIC Educational Resources Information Center

    Arlinghaus, Barry P.

    2002-01-01

    Responses from 276 of 1,128 faculty at Association to Advance Collegiate Schools of Business-accredited schools indicated that 231 were certified; only 96 served in professional associations; large numbers received financial support for professional activities, but only small numbers felt involvement or relevant experience (which are required for…

  19. Assays for the activities of polyamine biosynthetic enzymes using intact tissues

    Treesearch

    Rakesh Minocha; Stephanie Long; Hisae Maki; Subhash C. Minocha

    1999-01-01

    Traditionally, most enzyme assays utilize homogenized cell extracts with or without dialysis. Homogenization and centrifugation of large numbers of samples for screening of mutants and transgenic cell lines is quite cumbersome and generally requires sufficiently large amounts (hundreds of milligrams) of tissue. However, in situations where the tissue is available in...

  20. RF Environment Sensing Using Transceivers in Motion

    DTIC Science & Technology

    2014-05-02

    Crossing Information in Wireless Networks, 2013 IEEE Global Conference on Signal and Information Processing, 03-DEC-13: Dustin Maas, Joey Wilson...transceivers may be required to cover the entire monitored area. Second, and very importantly, there may not be sufficient time to deploy a large number of

  1. Using Internet-Based Language Testing Capacity to the Private Sector

    ERIC Educational Resources Information Center

    Garcia Laborda, Jesus

    2009-01-01

    Language testing has a large number of commercial applications in both the institutional and the private sectors. Some jobs in the health services sector or the public services sector require foreign language skills and these skills require continuous and efficient language assessments. Based on an experience developed through the cooperation of…

  2. NASA Integrated Vehicle Health Management (NIVHM) A New Simulation Architecture. Part I; An Investigation

    NASA Technical Reports Server (NTRS)

    Sheppard, Gene

    2005-01-01

    The overall objective of this research is to explore the development of a new architecture for simulating a vehicle health monitoring system in support of NASA's on-going Integrated Vehicle Health Monitoring (IVHM) initiative. As discussed in NASA MSFC's IVHM workshop on June 29-July 1, 2004, a large number of sensors will be required for a robust IVHM system. The current simulation architecture is incapable of simulating the large number of sensors required for IVHM. Processing the data from the sensors into a format that a human operator can understand and assimilate in a timely manner will require a paradigm shift. Data from a single sensor is, at best, suspect, and in order to overcome this deficiency, redundancy will be required for tomorrow's sensors. The sensor technology of tomorrow will allow for the placement of thousands of sensors per square inch. The major obstacle to overcome will then be how to reduce the torrent of raw sensor data to useful information for computer-assisted decision-making.

  3. An S N Algorithm for Modern Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Randal Scott

    2016-08-29

    LANL discrete ordinates transport packages are required to perform large, computationally intensive time-dependent calculations on massively parallel architectures, where even a single such calculation may need many months to complete. While KBA methods scale out well to very large numbers of compute nodes, we are limited by practical constraints on the number of such nodes we can actually apply to any given calculation. Instead, we describe a modified KBA algorithm that allows realization of the reductions in solution time offered by both the current, and future, architectural changes within a compute node.

  4. A Novel Method for Quick Assessment of Internal And External Radiation Exposure in the Aftermath of a Large Radiological Incident.

    PubMed

    Korir, Geoffrey; Karam, P Andrew

    2018-06-11

    In the event of a significant radiological release in a major urban area where a large number of people reside, it is inevitable that radiological screening and dose assessment must be conducted. Lives may be saved if an emergency response plan and radiological screening method are established for use in such cases. Thousands to tens of thousands of people might present themselves with some levels of external contamination and/or the potential for internal contamination. Each of these individuals will require varying degrees of radiological screening, and those with a high likelihood of internal and/or external contamination will require radiological assessment to determine the need for medical attention and decontamination. This sort of radiological assessment typically requires skilled health physicists, but there are insufficient numbers of health physicists in any city to perform this function for large populations, especially since many (e.g., those at medical facilities) are likely to be engaged at their designated institutions. The aim of this paper is therefore to develop and describe the technical basis for a novel, scoring-based methodology that can be used by non-health physicists for performing radiological assessment during such radiological events.

  5. Localization of multiple defects using the compact phased array (CPA) method

    NASA Astrophysics Data System (ADS)

    Senyurek, Volkan Y.; Baghalian, Amin; Tashakori, Shervin; McDaniel, Dwayne; Tansel, Ibrahim N.

    2018-01-01

    Array systems of transducers have found numerous applications in detection and localization of defects in structural health monitoring (SHM) of plate-like structures. Different types of array configurations and analysis algorithms have been used to improve the process of localization of defects. For accurate and reliable monitoring of large structures by array systems, a high number of actuator and sensor elements are often required. In this study, a compact phased array system consisting of only three piezoelectric elements is used in conjunction with an updated total focusing method (TFM) for localization of single and multiple defects in an aluminum plate. The accuracy of the localization process was greatly improved by including wave propagation information in TFM. Results indicated that the proposed CPA approach can locate single and multiple defects with high accuracy while decreasing the processing costs and the number of required transducers. This method can be utilized in critical applications such as aerospace structures where the use of a large number of transducers is not desirable.
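
    The abstract does not give the imaging equations, so the following is a minimal sketch of a generic delay-and-sum total focusing method (TFM), assuming straight-ray propagation at a known wave speed. The array geometry, sampling rate, and the wave-propagation corrections of the updated TFM used in the study are illustrative assumptions, not values from the paper.

        # Minimal delay-and-sum TFM sketch for a small transducer array.
        # Element positions, signals, and wave speed are illustrative.
        import numpy as np

        def tfm_image(signals, elements, grid_x, grid_y, c, fs):
            """signals[i, j, :]: time trace for transmitter i / receiver j sampled at fs (Hz).
            elements: (n, 2) array of transducer coordinates (m); c: wave speed (m/s)."""
            image = np.zeros((len(grid_y), len(grid_x)))
            for iy, y in enumerate(grid_y):
                for ix, x in enumerate(grid_x):
                    pixel = np.array([x, y])
                    # one-way times of flight from every element to this pixel
                    tof = np.linalg.norm(elements - pixel, axis=1) / c
                    for i in range(len(elements)):
                        for j in range(len(elements)):
                            sample = int(round((tof[i] + tof[j]) * fs))
                            if sample < signals.shape[2]:
                                image[iy, ix] += signals[i, j, sample]
            return np.abs(image)  # peaks indicate likely defect locations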

  6. Applicability of a Conservative Margin Approach for Assessing NDE Flaw Detectability

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2007-01-01

    Nondestructive Evaluation (NDE) procedures are required to detect flaws in structures with a high percentage detectability and high confidence. Conventional Probability of Detection (POD) methods are statistical in nature and require detection data from a relatively large number of flaw specimens. In many circumstances, due to the high cost and long lead time, it is impractical to build the large set of flaw specimens that is required by the conventional POD methodology. Therefore, in such situations it is desirable to have a flaw detectability estimation approach that allows for a reduced number of flaw specimens but provides a high degree of confidence in establishing the flaw detectability size. This paper presents an alternative approach called the conservative margin approach (CMA). To investigate the applicability of the CMA approach, flaw detectability sizes determined by the CMA and POD approaches have been compared on actual datasets. The results of these comparisons are presented and the applicability of the CMA approach is discussed.

  7. A model for estimating the impact of changes in children's vaccines.

    PubMed

    Simpson, K N; Biddle, A K; Rabinovich, N R

    1995-12-01

    To assist in strategic planning for the improvement of vaccines and vaccine programs, an economic model was developed and tested that estimates the potential impact of vaccine innovations on health outcomes and costs associated with vaccination and illness. A multistep, iterative process of data extraction/integration was used to develop the model and the scenarios. Parameter replication, sensitivity analysis, and expert review were used to validate the model. The greatest impact on the improvement of health is expected to result from the production of less reactogenic vaccines that require fewer inoculations for immunity. The greatest economic impact is predicted from improvements that decrease the number of inoculations required. Scenario analysis may be useful for integrating health outcomes and economic data into decision making. For childhood infections, this analysis indicates that large cost savings can be achieved in the future if we can improve vaccine efficacy so that the number of required inoculations is reduced. Such an improvement represents a large potential "payback" for the United States and might benefit other countries.

  8. Best Practices for Quality Improvement--Lessons from Top Ranked Engineering Institutions

    ERIC Educational Resources Information Center

    Rao, Potti Srinivasa; Viswanadhan, K. G.; Raghunandana, K.

    2015-01-01

    Maximum number of privately funded engineering institutions have been established in India in the last two decades to meet the growing needs of technical manpower required by the Engineering and IT companies as well as aspiring students after completion of the Pre-University Program. However, a large number of institutions have not been able to…

  9. CONSTITUENCY IN A SYSTEMIC DESCRIPTION OF THE ENGLISH CLAUSE.

    ERIC Educational Resources Information Center

    HUDSON, R.A.

    TWO WAYS OF DESCRIBING CLAUSES IN ENGLISH ARE DISCUSSED IN THIS PAPER. THE FIRST, TERMED THE "FEW-IC'S" APPROACH, IS A SEGMENTATION OF THE CLAUSE INTO A SMALL NUMBER OF IMMEDIATE CONSTITUENTS WHICH REQUIRE A LARGE NUMBER OF FURTHER SEGMENTATIONS BEFORE THE ULTIMATE CONSTITUENTS ARE REACHED. THE SECOND, "MANY-IC'S" APPROACH, IS A SEGMENTATION INTO…

  10. Expanding Computer Science Education in Schools: Understanding Teacher Experiences and Challenges

    ERIC Educational Resources Information Center

    Yadav, Aman; Gretter, Sarah; Hambrusch, Susanne; Sands, Phil

    2017-01-01

    The increased push for teaching computer science (CS) in schools in the United States requires training a large number of new K-12 teachers. The current efforts to increase the number of CS teachers have predominantly focused on training teachers from other content areas. In order to support these beginning CS teachers, we need to better…

  11. Participation and Collaborative Learning in Large Class Sizes: Wiki, Can You Help Me?

    ERIC Educational Resources Information Center

    de Arriba, Raúl

    2017-01-01

    Collaborative learning has a long tradition within higher education. However, its application in classes with a large number of students is complicated, since it is a teaching method that requires a high level of participation from the students and careful monitoring of the process by the educator. This article presents an experience in…

  12. Robust Coordination for Large Sets of Simple Rovers

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Agogino, Adrian

    2006-01-01

    The ability to coordinate sets of rovers in an unknown environment is critical to the long-term success of many of NASA's exploration missions. Such coordination policies must have the ability to adapt in unmodeled or partially modeled domains and must be robust against environmental noise and rover failures. In addition, such coordination policies must accommodate a large number of rovers, without excessive and burdensome hand-tuning. In this paper we present a distributed coordination method that addresses these issues in the domain of controlling a set of simple rovers. The application of these methods allows reliable and efficient robotic exploration in dangerous, dynamic, and previously unexplored domains. Most control policies for space missions are directly programmed by engineers or created through the use of planning tools, and are appropriate for single rover missions or missions requiring the coordination of a small number of rovers. Such methods typically require significant amounts of domain knowledge, and are difficult to scale to large numbers of rovers. The method described in this article aims to address cases where a large number of rovers need to coordinate to solve a complex time-dependent problem in a noisy environment. In this approach, each rover decomposes a global utility, representing the overall goal of the system, into rover-specific utilities that properly assign credit to the rover's actions. Each rover then has the responsibility to create a control policy that maximizes its own rover-specific utility. We show a method of creating rover-utilities that are "aligned" with the global utility, such that when the rovers maximize their own utility, they also maximize the global utility. In addition, we show that our method creates rover-utilities that allow the rovers to create their control policies quickly and reliably. Our distributed learning method allows large sets of rovers to be used in unmodeled domains, while providing robustness against rover failures and changing environments. In experimental simulations we show that our method scales well with large numbers of rovers in addition to being robust against noisy sensor inputs and noisy servo control. The results show that our method is able to scale to large numbers of rovers and achieves up to 400% performance improvement over standard machine learning methods.
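
    As a rough illustration of the utility decomposition described above, the sketch below computes a "difference"-style rover utility against a toy coverage-based global utility. The global utility function and the observation format are assumptions for illustration, not the paper's mission model.

        # Rover-specific "difference" utility aligned with the global utility:
        # D_i = G(all rovers' actions) - G(all actions with rover i removed).
        def global_utility(observed_points):
            # toy global score: number of distinct sites covered by the team
            return len(set(observed_points))

        def difference_utility(i, observations_by_rover):
            with_i = [p for obs in observations_by_rover for p in obs]
            without_i = [p for k, obs in enumerate(observations_by_rover) if k != i for p in obs]
            return global_utility(with_i) - global_utility(without_i)

        # Each rover learns a policy that maximizes its own D_i; because D_i moves
        # in the same direction as G, improving D_i also improves the global utility.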

  13. A preprocessing strategy for helioseismic inversions

    NASA Astrophysics Data System (ADS)

    Christensen-Dalsgaard, J.; Thompson, M. J.

    1993-05-01

    Helioseismic inversion in general involves considerable computational expense, due to the large number of modes that is typically considered. This is true in particular of the widely used optimally localized averages (OLA) inversion methods, which require the inversion of one or more matrices whose order is the number of modes in the set. However, the number of practically independent pieces of information that a large helioseismic mode set contains is very much less than the number of modes, suggesting that the set might first be reduced before the expensive inversion is performed. We demonstrate with a model problem that by first performing a singular value decomposition the original problem may be transformed into a much smaller one, reducing considerably the cost of the OLA inversion and with no significant loss of information.
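
    A minimal sketch of this preprocessing idea, assuming the forward problem can be written as a linear system with one row per mode: a truncated singular value decomposition compresses the data before the expensive OLA inversion. The variable names and tolerance are illustrative.

        # Compress a large linear inverse problem A x ≈ d (one row per mode)
        # with a truncated SVD before running the OLA inversion.
        import numpy as np

        def compress_problem(A, d, rel_tol=1e-6):
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            k = int(np.sum(s > rel_tol * s[0]))   # effective number of independent data
            # Project the kernels and data onto the k dominant singular vectors.
            A_small = np.diag(s[:k]) @ Vt[:k, :]  # k x n reduced kernel matrix
            d_small = U[:, :k].T @ d              # k reduced data values
            return A_small, d_small

        # The OLA inversion then works with k << (number of modes) rows,
        # at much lower cost and with negligible loss of information.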

  14. Asymptotic properties of entanglement polytopes for large number of qubits

    NASA Astrophysics Data System (ADS)

    Maciążek, Tomasz; Sawicki, Adam

    2018-02-01

    Entanglement polytopes have been recently proposed as a way of witnessing the stochastic local operations and classical communication (SLOCC) multipartite entanglement classes using single particle information. We present the first asymptotic results concerning the feasibility of this approach for a large number of qubits. In particular, we show that entanglement polytopes of the L-qubit system accumulate within a distance O(1/√L) of the point corresponding to the maximally mixed reduced one-qubit density matrices. This implies the existence of a possibly large region where many entanglement polytopes overlap, i.e., where the witnessing power of entanglement polytopes is weak. Moreover, we argue that the witnessing power cannot be strengthened by any entanglement distillation protocol, as for large L the required purity is above current capability.

  15. ON-AIR, CLOSED-CIRCUIT INSTRUCTIONAL TELEVISION, THE 2500 MEGACYCLE BAND.

    ERIC Educational Resources Information Center

    LAPIN, STANLEY

    THE SATISFACTION OF THE BASIC REQUIREMENTS OF EDUCATIONAL TELEVISION BY THE ESTABLISHMENT OF THE INSTRUCTIONAL TV FIXED SERVICE WAS DISCUSSED. THE BASIC REQUIREMENTS OF EDUCATIONAL TELEVISION WERE THAT THE COST PER STUDENT OR PER STUDENT HOUR OF INSTRUCTION HAD TO BE ECONOMICAL, THAT A VERY LARGE NUMBER OF STUDENTS HAD TO BE SERVED, THAT THE…

  16. Markov-modulated Markov chains and the covarion process of molecular evolution.

    PubMed

    Galtier, N; Jean-Marie, A

    2004-01-01

    The covarion (or site specific rate variation, SSRV) process of biological sequence evolution is a process by which the evolutionary rate of a nucleotide/amino acid/codon position can change in time. In this paper, we introduce time-continuous, space-discrete, Markov-modulated Markov chains as a model for representing SSRV processes, generalizing existing theory to any model of rate change. We propose a fast algorithm for diagonalizing the generator matrix of relevant Markov-modulated Markov processes. This algorithm makes phylogeny likelihood calculation tractable even for a large number of rate classes and a large number of states, so that SSRV models become applicable to amino acid or codon sequence datasets. Using this algorithm, we investigate the accuracy of the discrete approximation to the Gamma distribution of evolutionary rates, widely used in molecular phylogeny. We show that a relatively large number of classes is required to achieve accurate approximation of the exact likelihood when the number of analyzed sequences exceeds 20, both under the SSRV and among site rate variation (ASRV) models.
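
    The discrete Gamma approximation evaluated here can be sketched as follows, assuming K equal-probability rate categories each represented by its median rate (a common variant; the mean-based form is also widely used). The shape parameter alpha and K are free inputs; this is an illustration, not the paper's code.

        # Discrete approximation to a mean-one Gamma distribution of evolutionary rates.
        from scipy.stats import gamma

        def discrete_gamma_rates(alpha, K):
            # Gamma with mean 1: shape alpha, scale 1/alpha; take the median of each
            # of the K equal-probability slices as its representative rate.
            return [gamma.ppf((2 * k + 1) / (2 * K), alpha, scale=1.0 / alpha)
                    for k in range(K)]

        # e.g. discrete_gamma_rates(0.5, 8) gives 8 representative rates; the abstract
        # notes that with more than 20 sequences a relatively large K is needed for accuracy.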

  17. Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features including a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides Pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted Pareto fronts and a degradation in efficiency for problems with convoluted Pareto fronts. The most difficult problems (multi-mode search spaces with a large number of genes and convoluted Pareto fronts) require a large number of function evaluations for GA convergence, but always converge.

  18. Big questions, big science: meeting the challenges of global ecology.

    PubMed

    Schimel, David; Keller, Michael

    2015-04-01

    Ecologists are increasingly tackling questions that require significant infrastructure, large experiments, networks of observations, and complex data and computation. Key hypotheses in ecology increasingly require more investment, and larger data sets than can be collected by a single investigator's lab or a group of investigators' labs, sustained for longer than a typical grant. Large-scale projects are expensive, so their scientific return on the investment has to justify the opportunity cost: the science foregone because resources were expended on a large project rather than supporting a number of individual projects. In addition, their management must be accountable and efficient in the use of significant resources, requiring the use of formal systems engineering and project management to mitigate risk of failure. Mapping the scientific method into formal project management requires both scientists able to work in this context and a project implementation team sensitive to the unique requirements of ecology. Sponsoring agencies, under pressure from external and internal forces, experience many pressures that push them towards counterproductive project management, but a scientific community aware of and experienced in large-project science can mitigate these tendencies. For big ecology to result in great science, ecologists must become informed, aware, and engaged in the advocacy and governance of large ecological projects.

  19. A full picture of large lepton number asymmetries of the Universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barenboim, Gabriela; Park, Wan-Il, E-mail: Gabriela.Barenboim@uv.es, E-mail: wipark@jbnu.ac.kr

    A large lepton number asymmetry of O(0.1−1) at the present Universe might not only be allowed but also necessary for consistency among cosmological data. We show that, if a sizeable lepton number asymmetry were produced before the electroweak phase transition, the requirement of not producing too much baryon number asymmetry through sphaleron processes forces the high-scale lepton number asymmetry to be larger than about 0.3. Therefore a mild entropy release causing O(10-100) suppression of the pre-existing particle density should take place when the background temperature of the Universe is around T = O(10^-2 - 10^2) GeV, for a large but experimentally consistent asymmetry to be present today. We also show that such a mild entropy production can be obtained by the late-time decays of the saxion, constraining the parameters of the Peccei-Quinn sector such as the mass and the vacuum expectation value of the saxion field to be m_φ ≳ O(10) TeV and φ_0 ≳ O(10^14) GeV, respectively.

  20. Resolution requirements for aero-optical simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mani, Ali; Wang Meng; Moin, Parviz

    2008-11-10

    Analytical criteria are developed to estimate the error of aero-optical computations due to inadequate spatial resolution of refractive index fields in high Reynolds number flow simulations. The unresolved turbulence structures are assumed to be locally isotropic and at low turbulent Mach number. Based on the Kolmogorov spectrum for the unresolved structures, the computational error of the optical path length is estimated and linked to the resulting error in the computed far-field optical irradiance. It is shown that in the high Reynolds number limit, for a given geometry and Mach number, the spatial resolution required to capture aero-optics within a pre-specified error margin does not scale with Reynolds number. In typical aero-optical applications this resolution requirement is much lower than the resolution required for direct numerical simulation, and therefore, a typical large-eddy simulation can capture the aero-optical effects. The analysis is extended to complex turbulent flow simulations in which non-uniform grid spacings are used to better resolve the local turbulence structures. As a demonstration, the analysis is used to estimate the error of aero-optical computation for an optical beam passing through the turbulent wake of flow over a cylinder.

  1. Impacts of savanna trees on forage quality for a large African herbivore

    PubMed Central

    De Kroon, Hans; Prins, Herbert H. T.

    2008-01-01

    Recently, cover of large trees in African savannas has rapidly declined due to elephant pressure, frequent fires and charcoal production. The reduction in large trees could have consequences for large herbivores through a change in forage quality. In Tarangire National Park, in Northern Tanzania, we studied the impact of large savanna trees on forage quality for wildebeest by collecting samples of dominant grass species in open grassland and under and around large Acacia tortilis trees. Grasses growing under trees had a much higher forage quality than grasses from the open field, as indicated by a more favourable leaf/stem ratio and higher protein and lower fibre concentrations. Analysing the grass leaf data with a linear programming model indicated that large savanna trees could be essential for the survival of wildebeest, the dominant herbivore in Tarangire. Due to the high fibre content and low nutrient and protein concentrations of grasses from the open field, maximum fibre intake is reached before nutrient requirements are satisfied. All requirements can only be satisfied by combining forage from open grassland with either forage from under or around tree canopies. Forage quality was also higher around dead trees than in the open field. So forage quality does not decline immediately after trees die, which explains why the negative effects of reduced tree numbers probably go unnoticed initially. In conclusion, our results suggest that continued destruction of large trees could affect future numbers of large herbivores in African savannas, and better protection of large trees is probably necessary to sustain high animal densities in these ecosystems. PMID:18309522

  2. A fast time-difference inverse solver for 3D EIT with application to lung imaging.

    PubMed

    Javaherian, Ashkan; Soleimani, Manuchehr; Moeller, Knut

    2016-08-01

    A class of sparse optimization techniques that require solely matrix-vector products, rather than explicit access to the forward matrix and its transpose, has received much attention in the recent decade for dealing with large-scale inverse problems. This study tailors application of the so-called Gradient Projection for Sparse Reconstruction (GPSR) to large-scale time-difference three-dimensional electrical impedance tomography (3D EIT). 3D EIT typically suffers from the need for a large number of voxels to cover the whole domain, so its application to real-time imaging, for example monitoring of lung function, remains scarce since the large number of degrees of freedom of the problem greatly increases storage space and reconstruction time. This study shows the great potential of the GPSR for large-size time-difference 3D EIT. Further studies are needed to improve its accuracy for imaging small-size anomalies.
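
    GPSR itself recasts the l1-regularized least-squares problem as a bound-constrained quadratic program; the sketch below instead shows a simpler, closely related iterative soft-thresholding scheme that shares the property emphasized in the abstract: it needs only products with the forward operator and its transpose. The operator handles, step size, and sparsity weight are assumptions, not the study's implementation.

        # ISTA-style sparse reconstruction using only operator products (not GPSR itself).
        import numpy as np

        def sparse_reconstruct(A_mv, At_mv, y, tau, step, n_iters, n):
            """A_mv(x) and At_mv(r) apply the forward operator and its transpose;
            tau is the sparsity weight, step a fixed step size (<= 1/||A||^2)."""
            x = np.zeros(n)
            for _ in range(n_iters):
                grad = At_mv(A_mv(x) - y)          # gradient of 0.5*||Ax - y||^2
                z = x - step * grad
                x = np.sign(z) * np.maximum(np.abs(z) - step * tau, 0.0)  # soft threshold
            return x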

  3. Sample sizes to control error estimates in determining soil bulk density in California forest soils

    Treesearch

    Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber

    2016-01-01

    Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observations (n), for predicting the soil bulk density with a...

  4. The Effect of Number of Ability Intervals on the Stability of Item Bias Detection.

    ERIC Educational Resources Information Center

    Loyd, Brenda

    The chi-square procedure has been suggested as a viable index of test bias because it provides the best agreement with the three parameter item characteristic curve without the large sample requirement, computer complexity, and cost. This study examines the effect of using different numbers of ability intervals on the reliability of chi-square…

  5. Dynamic Database. Efficiently Convert Massive Quantities of Sensor Data into Actionable Information for Tactical Commanders

    DTIC Science & Technology

    2000-06-01

    As the number of sensors, platforms, exploitation sites, and command and control nodes continues to grow in response to Joint Vision 2010 information ... dominance requirements, Commanders and analysts will have an ever increasing need to collect and process vast amounts of data over wide areas using a large number of disparate sensors and information gathering sources.

  6. The gating effect by thousands of bubble-propelled micromotors in macroscale channels

    NASA Astrophysics Data System (ADS)

    Teo, Wei Zhe; Wang, Hong; Pumera, Martin

    2015-07-01

    Increasing interest in the utilization of self-propelled micro-/nanomotors for environmental remediation requires the examination of their efficiency at the macroscale level. As such, we investigated the effect of micro-/nanomotors' propulsion and bubbling on the rate of sodium hydroxide dissolution and the subsequent dispersion of OH- ions across more than 30 cm, so as to understand how these factors might affect the dispersion of remediation agents in real systems which might require these agents to travel long distances to reach the pollutants. Experimental results showed that the presence of large numbers of active bubble-propelled tubular bimetallic Cu/Pt micromotors (4.5 × 10^4) induced a gating effect on the dissolution and dispersion process, slowing down the change in pH of the solution considerably. The retardation was found to be dependent on the number of active micromotors present in the range of 1.5 × 10^4 to 4.5 × 10^4 micromotors. At lower numbers (0.75 × 10^4), however, propelling micromotors did speed up the dissolution and dispersion process. The understanding of the combined effects of large number of micro-/nanomotors' motion and bubbling on its macroscale mixing behavior is of significant importance for future applications of these devices.

  7. Motivators that Do Not Motivate: The Case of Chinese EFL Learners and the Influence of Culture on Motivation

    ERIC Educational Resources Information Center

    Chen, Judy F.; Warden, Clyde A.; Chang, Huo-Tsan

    2005-01-01

    Language learning motivation plays an important role in both research and teaching, yet language learners are still largely understood in terms of North American and European cultural values. This research explored language learning motivation constructs in a Chinese cultural setting, where large numbers of students are required to study English.…

  8. Revision of the Rawls et al. (1982) pedotransfer functions for their applicability to US croplands

    USDA-ARS?s Scientific Manuscript database

    Large scale environmental impact studies typically involve the use of simulation models and require a variety of inputs, some of which may need to be estimated in absence of adequate measured data. As an example, soil water retention needs to be estimated for a large number of soils that are to be u...

  9. Equation solvers for distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    A large number of scientific and engineering problems require the rapid solution of large systems of simultaneous equations. The performance of parallel computers in this area now dwarfs traditional vector computers by nearly an order of magnitude. This talk describes the major issues involved in parallel equation solvers with particular emphasis on the Intel Paragon, IBM SP-1 and SP-2 processors.

  10. Does Decision Quality (Always) Increase with the Size of Information Samples? Some Vicissitudes in Applying the Law of Large Numbers

    ERIC Educational Resources Information Center

    Fiedler, Klaus; Kareev, Yaakov

    2006-01-01

    Adaptive decision making requires that contingencies between decision options and their relative assets be assessed accurately and quickly. The present research addresses the challenging notion that contingencies may be more visible from small than from large samples of observations. An algorithmic account for such a seemingly paradoxical effect…

  11. Packed Bed Bioreactor for the Isolation and Expansion of Placental-Derived Mesenchymal Stromal Cells

    PubMed Central

    Osiecki, Michael J.; Michl, Thomas D.; Kul Babur, Betul; Kabiri, Mahboubeh; Atkinson, Kerry; Lott, William B.; Griesser, Hans J.; Doran, Michael R.

    2015-01-01

    Large numbers of Mesenchymal stem/stromal cells (MSCs) are required for clinically relevant doses to treat a number of diseases. To economically manufacture these MSCs, an automated bioreactor system will be required. Herein we describe the development of a scalable closed-system, packed bed bioreactor suitable for large-scale MSC expansion. The packed bed was formed from fused polystyrene pellets that were air plasma treated to endow them with a surface chemistry similar to traditional tissue culture plastic. The packed bed was encased within a gas permeable shell to decouple the medium nutrient supply and gas exchange. This enabled a significant reduction in medium flow rates, thus reducing shear and even facilitating single pass medium exchange. The system was optimised in a small-scale bioreactor format (160 cm2) with murine-derived green fluorescent protein-expressing MSCs, and then scaled-up to a 2800 cm2 format. We demonstrated that placental derived MSCs could be isolated directly within the bioreactor and subsequently expanded. Our results demonstrate that the closed system large-scale packed bed bioreactor is an effective and scalable tool for large-scale isolation and expansion of MSCs. PMID:26660475

  12. Packed Bed Bioreactor for the Isolation and Expansion of Placental-Derived Mesenchymal Stromal Cells.

    PubMed

    Osiecki, Michael J; Michl, Thomas D; Kul Babur, Betul; Kabiri, Mahboubeh; Atkinson, Kerry; Lott, William B; Griesser, Hans J; Doran, Michael R

    2015-01-01

    Large numbers of Mesenchymal stem/stromal cells (MSCs) are required for clinically relevant doses to treat a number of diseases. To economically manufacture these MSCs, an automated bioreactor system will be required. Herein we describe the development of a scalable closed-system, packed bed bioreactor suitable for large-scale MSC expansion. The packed bed was formed from fused polystyrene pellets that were air plasma treated to endow them with a surface chemistry similar to traditional tissue culture plastic. The packed bed was encased within a gas permeable shell to decouple the medium nutrient supply and gas exchange. This enabled a significant reduction in medium flow rates, thus reducing shear and even facilitating single pass medium exchange. The system was optimised in a small-scale bioreactor format (160 cm2) with murine-derived green fluorescent protein-expressing MSCs, and then scaled-up to a 2800 cm2 format. We demonstrated that placental derived MSCs could be isolated directly within the bioreactor and subsequently expanded. Our results demonstrate that the closed system large-scale packed bed bioreactor is an effective and scalable tool for large-scale isolation and expansion of MSCs.

  13. The Electrophysiological Biosensor for Batch-Measurement of Cell Signals

    NASA Astrophysics Data System (ADS)

    Suzuki, Kengo; Tanabe, Masato; Ezaki, Takahiro; Konishi, Satoshi; Oka, Hiroaki; Ozaki, Nobuhiko

    This paper presents the development of an electrophysiological biosensor. The developed sensor allows batch measurement by detecting all signals from a large number of cells together. The developed sensor employs the same measurement principle as the patch-clamp technique. A single cell is sucked and clamped in a micro hole with a detecting electrode. Detecting electrodes in arrayed micro holes are connected together for the batch measurement of a large number of cell signals. Furthermore, an array of sensors for batch measurement is designed to improve measurement throughput to satisfy the requirements of drug screening applications.

  14. Multiple damage identification on a wind turbine blade using a structural neural system

    NASA Astrophysics Data System (ADS)

    Kirikera, Goutham R.; Schulz, Mark J.; Sundaresan, Mannur J.

    2007-04-01

    A large number of sensors are required to perform real-time structural health monitoring (SHM) to detect acoustic emissions (AE) produced by damage growth on large complicated structures. This requires a large number of high sampling rate data acquisition channels to analyze high frequency signals. To overcome the cost and complexity of having such a large data acquisition system, a structural neural system (SNS) was developed. The SNS reduces the required number of data acquisition channels and predicts the location of damage within a sensor grid. The sensor grid uses interconnected sensor nodes to form continuous sensors. The combination of continuous sensors and the biomimetic parallel processing of the SNS tremendously reduce the complexity of SHM. A wave simulation algorithm (WSA) was developed to understand the flexural wave propagation in composite structures and to utilize the code for developing the SNS. Simulation of AE responses in a plate and comparison with experimental results are shown in the paper. The SNS was recently tested by a team of researchers from the University of Cincinnati and North Carolina A&T State University during a quasi-static proof test of a 9 meter long wind turbine blade at the National Renewable Energy Laboratory (NREL) test facility in Golden, Colorado. Twelve piezoelectric sensor nodes were used to form four continuous sensors to monitor the condition of the blade during the test. The four continuous sensors are used as inputs to the SNS. There are only two analog output channels of the SNS, and these signals are digitized and analyzed in a computer to detect damage. In the test of the wind turbine blade, multiple damages were identified and later verified by sectioning of the blade. The results of damage identification using the SNS during this proof test will be shown in this paper. Overall, the SNS is very sensitive and can detect damage on complex structures with ribs, joints, and different materials, and the system is relatively inexpensive and simple to implement on large structures.

  15. Space station needs, attributes and architectural options. Volume 1: Executive summary NASA

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The uses alignment plan was implemented. The existing data bank was used to define a large number of station requirements. Ten to 20 valid mission scenarios were developed. Architectural options as they are influenced by communications operations, subsystem evolvability, and required technology growth are defined. Costing of evolutionary concepts, alternative approaches, and options, was based on minimum design details.

  16. Spectral Relative Standard Deviation: A Practical Benchmark in Metabolomics

    EPA Science Inventory

    Metabolomics datasets, by definition, comprise measurements of large numbers of metabolites. Both technical (analytical) and biological factors will induce variation within these measurements that is not consistent across all metabolites. Consequently, criteria are required to...

  17. Spatial solitons in a semiconductor microresonator

    NASA Astrophysics Data System (ADS)

    Taranenko, V. B.; Ganne, I.; Kuszelewicz, R.; Weiss, C. O.

    We show experimentally the existence of bright and dark spatial solitons in a passive quantum-well semiconductor resonator of large Fresnel number with mixed absorptive defocusing nonlinearity. Several of the solitons can exist simultaneously, as required for applications.

  18. Multi-Modal Traveler Information System - Gateway Functional Requirements

    DOT National Transportation Integrated Search

    1997-11-17

    The Multi-Modal Traveler Information System (MMTIS) project involves a large number of Intelligent Transportation System (ITS) related tasks. It involves research of all ITS initiatives in the Gary-Chicago-Milwaukee (GCM) Corridor which are currently...

  19. Multi-Modal Traveler Information System - Gateway Interface Control Requirements

    DOT National Transportation Integrated Search

    1997-10-30

    The Multi-Modal Traveler Information System (MMTIS) project involves a large number of Intelligent Transportation System (ITS) related tasks. It involves research of all ITS initiatives in the Gary-Chicago-Milwaukee (GCM) Corridor which are currently...

  20. Flow-aggregated traffic-driven label mapping in label-switching networks

    NASA Astrophysics Data System (ADS)

    Nagami, Kenichi; Katsube, Yasuhiro; Esaki, Hiroshi; Nakamura, Osamu

    1998-12-01

    Label switching technology enables high-performance, flexible layer-3 packet forwarding based on fixed-length label information mapped to the layer-3 packet stream. A Label Switching Router (LSR) forwards layer-3 packets based on their label information mapped to the layer-3 address information as well as their layer-3 address information. This paper evaluates the required number of labels under a traffic-driven label mapping policy using real backbone traffic traces. The evaluation shows that this label mapping policy requires a large number of labels. In order to reduce the required number of labels, we propose a label mapping policy which is a traffic-driven label mapping for the traffic toward the same destination network. The evaluation shows that the proposed label mapping policy requires only about one tenth as many labels compared with the traffic-driven label mapping for the host-pair packet stream, and the topology-driven label mapping for the destination network packet stream.

  1. Communication architecture for large geostationary platforms

    NASA Technical Reports Server (NTRS)

    Bond, F. E.

    1979-01-01

    Large platforms have been proposed for supporting multipurpose communication payloads to exploit economy of scale, reduce congestion in the geostationary orbit, provide interconnectivity between diverse earth stations, and obtain significant frequency reuse with large multibeam antennas. This paper addresses a specific system design, starting with traffic projections in the next two decades and discussing tradeoffs and design approaches for major components including: antennas, transponders, and switches. Other issues explored are selection of frequency bands, modulation, multiple access, switching methods, and techniques for servicing areas with nonuniform traffic demands. Three-major services are considered: a high-volume trunking system, a direct-to-user system, and a broadcast system for video distribution and similar functions. Estimates of payload weight and d.c. power requirements are presented. Other subjects treated are: considerations of equipment layout for servicing by an orbit transfer vehicle, mechanical stability requirements for the large antennas, and reliability aspects of the large number of transponders employed.

  2. Accurate measurement of transgene copy number in crop plants using droplet digital PCR.

    PubMed

    Collier, Ray; Dasgupta, Kasturi; Xing, Yan-Ping; Hernandez, Bryan Tarape; Shao, Min; Rohozinski, Dominica; Kovak, Emma; Lin, Jeanie; de Oliveira, Maria Luiza P; Stover, Ed; McCue, Kent F; Harmon, Frank G; Blechl, Ann; Thomson, James G; Thilmony, Roger

    2017-06-01

    Genetic transformation is a powerful means for the improvement of crop plants, but requires labor- and resource-intensive methods. An efficient method for identifying single-copy transgene insertion events from a population of independent transgenic lines is desirable. Currently, transgene copy number is estimated by either Southern blot hybridization analyses or quantitative polymerase chain reaction (qPCR) experiments. Southern hybridization is a convincing and reliable method, but it also is expensive, time-consuming and often requires a large amount of genomic DNA and radioactively labeled probes. Alternatively, qPCR requires less DNA and is potentially simpler to perform, but its results can lack the accuracy and precision needed to confidently distinguish between one- and two-copy events in transgenic plants with large genomes. To address this need, we developed a droplet digital PCR-based method for transgene copy number measurement in an array of crops: rice, citrus, potato, maize, tomato and wheat. The method utilizes specific primers to amplify target transgenes, and endogenous reference genes in a single duplexed reaction containing thousands of droplets. Endpoint amplicon production in the droplets is detected and quantified using sequence-specific fluorescently labeled probes. The results demonstrate that this approach can generate confident copy number measurements in independent transgenic lines in these crop species. This method and the compendium of probes and primers will be a useful resource for the plant research community, enabling the simple and accurate determination of transgene copy number in these six important crop species. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
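
    The abstract does not spell out the arithmetic; a common way droplet counts are converted into a copy number estimate is the Poisson correction sketched below (the authors' exact pipeline may differ). The droplet counts and the assumption of a two-copy reference gene are illustrative.

        # Poisson-corrected ddPCR copy number estimate from positive-droplet counts.
        import math

        def copies_per_droplet(n_positive, n_total):
            # Corrects for droplets that contain more than one template molecule.
            return -math.log(1.0 - n_positive / n_total)

        def transgene_copy_number(pos_target, pos_ref, n_droplets, ref_copies_per_genome=2):
            lam_t = copies_per_droplet(pos_target, n_droplets)
            lam_r = copies_per_droplet(pos_ref, n_droplets)
            return ref_copies_per_genome * lam_t / lam_r

        # e.g. 6000/20000 target-positive vs 10400/20000 reference-positive droplets
        # gives an estimate close to 1, consistent with a single hemizygous insertion
        # (illustrative numbers only).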

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pawlowski, Alexander; Splitter, Derek A

    It is well known that spark ignited engine performance and efficiency are closely coupled to fuel octane number. The present work combines historical and recent trends in spark ignition engines to build a database of engine design, performance, and fuel octane requirements over the past 80 years. The database consists of engine compression ratio, required fuel octane number, peak mean effective pressure, specific output, and combined unadjusted fuel economy for passenger vehicles and light trucks. Recent trends in engine performance, efficiency, and fuel octane number requirement were used to develop correlations of fuel octane number utilization, performance, and specific output. The results show that historically, engine compression ratio and specific output have been strongly coupled to fuel octane number. However, over the last 15 years the sales weighted averages of compression ratios, specific output, and fuel economy have increased, while the fuel octane number requirement has remained largely unchanged. Using the developed correlations, 10-year-out projections of engine performance, design, and fuel economy are estimated for various fuel octane numbers, both with and without turbocharging. The 10-year-out projection shows that only by keeping power neutral while using 105 RON fuel can the vehicle fleet meet CAFE targets if only the engine is relied upon to decrease fuel consumption. If 98 RON fuel is used, a power-neutral fleet will have to reduce vehicle weight by 5%.

  4. Design and test of a natural laminar flow/large Reynolds number airfoil with a high design cruise lift coefficient

    NASA Technical Reports Server (NTRS)

    Kolesar, C. E.

    1987-01-01

    Research activity on an airfoil designed for a large airplane capable of very long endurance times at a low Mach number of 0.22 is examined. Airplane mission objectives and design optimization resulted in requirements for a very high design lift coefficient and a large amount of laminar flow at high Reynolds number to increase the lift/drag ratio and reduce the loiter lift coefficient. Natural laminar flow was selected instead of distributed mechanical suction for the measurement technique. A design lift coefficient of 1.5 was identified as the highest which could be achieved with a large extent of laminar flow. A single element airfoil was designed using an inverse boundary layer solution and inverse airfoil design computer codes to create an airfoil section that would achieve performance goals. The design process and results, including airfoil shape, pressure distributions, and aerodynamic characteristics are presented. A two dimensional wind tunnel model was constructed and tested in a NASA Low Turbulence Pressure Tunnel which enabled testing at full scale design Reynolds number. A comparison is made between theoretical and measured results to establish accuracy and quality of the airfoil design technique.

  5. Metal stack optimization for low-power and high-density for N7-N5

    NASA Astrophysics Data System (ADS)

    Raghavan, P.; Firouzi, F.; Matti, L.; Debacker, P.; Baert, R.; Sherazi, S. M. Y.; Trivkovic, D.; Gerousis, V.; Dusa, M.; Ryckaert, J.; Tokei, Z.; Verkest, D.; McIntyre, G.; Ronse, K.

    2016-03-01

    One of the key challenges when scaling logic down to N7 and N5 is the requirement of self-aligned multiple patterning for the metal stack. This comes with a large backend cost, and therefore careful stack optimization is required. Various layers in the stack serve different purposes, and therefore the choice of pitch and number of layers is critical. Furthermore, at the ultra-scaled dimensions of N7 and N5, the number of patterning options is also much larger, ranging from multiple LE and EUV to SADP/SAQP. The right choice among these is also needed: patterning techniques that use a full grating of wires, such as SADP/SAQP, introduce a high level of metal dummies into the design. This implies a large capacitance penalty to the design, and therefore large performance and power penalties. This is often mitigated with extra masking strategies. This paper presents a holistic view of metal stack optimization from the standard cell level all the way to routing and the corresponding trade-offs that exist in this space.

  6. Molecular inversion probe assay for allelic quantitation

    PubMed Central

    Ji, Hanlee; Welch, Katrina

    2010-01-01

    Molecular inversion probe (MIP) technology has been demonstrated to be a robust platform for large-scale dual genotyping and copy number analysis. Applications in human genomic and genetic studies include the possibility of running dual germline genotyping and combined copy number variation ascertainment. MIPs analyze large numbers of specific genetic target sequences in parallel, relying on interrogation of a barcode tag, rather than direct hybridization of genomic DNA to an array. The MIP approach does not replace, but is complementary to many of the copy number technologies being performed today. Some specific advantages of MIP technology include: Less DNA required (37 ng vs. 250 ng), DNA quality less important, more dynamic range (amplifications detected up to copy number 60), allele specific information “cleaner” (less SNP crosstalk/contamination), and quality of markers better (fewer individual MIPs versus SNPs needed to identify copy number changes). MIPs can be considered a candidate gene (targeted whole genome) approach and can find specific areas of interest that otherwise may be missed with other methods. PMID:19488872

  7. Digital color representation

    DOEpatents

    White, James M.; Faber, Vance; Saltzman, Jeffrey S.

    1992-01-01

    An image population having a large number of attributes is processed to form a display population with a predetermined smaller number of attributes which represent the larger number of attributes. In a particular application, the color values in an image are compressed for storage in a discrete lookup table (LUT) where an 8-bit data signal is enabled to form a display of 24-bit color values. The LUT is formed in a sampling and averaging process from the image color values with no requirement to define discrete Voronoi regions for color compression. Image color values are assigned 8-bit pointers to their closest LUT value whereby data processing requires only the 8-bit pointer value to provide 24-bit color values from the LUT.
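
    A minimal sketch of the indexing step, under the assumption that the lookup table has already been reduced to 256 representative 24-bit colors: each pixel stores only the 8-bit index of its closest LUT entry, found here by a brute-force nearest search rather than by constructing Voronoi regions. The crude luminance-based averaging used to build the LUT below is a stand-in, not the patent's exact sampling-and-averaging procedure.

        # Build a 256-entry color LUT by sampling and averaging, then index pixels into it.
        import numpy as np

        def build_lut(pixels, n_entries=256, n_samples=4096, seed=0):
            """pixels: (N, 3) array of 24-bit RGB values as floats."""
            rng = np.random.default_rng(seed)
            sample = pixels[rng.choice(len(pixels), size=min(n_samples, len(pixels)), replace=False)]
            # crude grouping: sort the sample by luminance and average equal-size chunks
            order = sample[np.argsort(sample @ np.array([0.299, 0.587, 0.114]))]
            return np.array([chunk.mean(axis=0) for chunk in np.array_split(order, n_entries)])

        def index_image(pixels, lut):
            # 8-bit pointer per pixel: index of the closest LUT color.
            d = ((pixels[:, None, :] - lut[None, :, :]) ** 2).sum(axis=2)
            return d.argmin(axis=1).astype(np.uint8)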

  8. Novel Multiplexing Technique for Detector and Mixer Arrays

    NASA Technical Reports Server (NTRS)

    Karasik, Boris S.; McGrath, William R.

    2001-01-01

    Future submillimeter and far-infrared space telescopes will require large-format (many 1000's of elements) imaging detector arrays to perform state-of-the-art astronomical observations. A crucial issue related to a focal plane array is a readout scheme which is compatible with large numbers of cryogenically-cooled (typically < 1 K) detector elements. When the number of elements becomes of the order of thousands, the physical layout for individual readout amplifiers becomes nearly impossible to realize for practical systems. Another important concern is the large number of wires leading to a 0.1-0.3 K platform. In the case of superconducting transition edge sensors (TES), a scheme for time-division multiplexing of SQUID read-out amplifiers has been recently demonstrated. In this scheme the number of SQUIDs is equal to the number (N) of the detectors, but only one SQUID is turned on at a time. The SQUIDs are connected in series in each column of the array, so the number of wires leading to the amplifiers can be reduced, but it is still of the order of N. Another approach uses a frequency domain multiplexing scheme of the bolometer array. The bolometers are biased with ac currents whose frequencies are individual for each element and are much higher than the bolometer bandwidth. The output signals are connected in series in a summing loop which is coupled to a single SQUID amplifier. The total number of channels depends on the ratio between the SQUID bandwidth and the bolometer bandwidth and can be at least 100 according to the authors. An important concern about this technique is the contribution of the out-of-band Johnson noise, which is multiplied by a factor of N^(1/2) for each frequency channel. We propose a novel solution for large format arrays based on the Hadamard transform coding technique which requires only one amplifier to read out the entire array of potentially many 1000's of elements and uses approximately 10 wires between the cold stage and room temperature electronics. This can significantly reduce the complexity of the readout circuits.
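
    A minimal sketch of Hadamard-transform readout, assuming N detectors (N a power of two) whose outputs vary slowly compared with the frame rate: each frame measures one +1/-1-weighted sum of all pixels through the single amplifier, and N frames are decoded back into per-detector signals. Noise treatment and the actual cryogenic circuit are not modeled here.

        # Hadamard-coded readout: N weighted-sum measurements, then decode.
        import numpy as np
        from scipy.linalg import hadamard

        def hadamard_readout(detector_signals):
            """detector_signals: length-N vector of detector outputs, N a power of two."""
            N = len(detector_signals)
            H = hadamard(N)                 # entries are +1/-1
            return H @ detector_signals     # N sequential single-channel readings

        def hadamard_decode(encoded):
            N = len(encoded)
            H = hadamard(N)
            return (H.T @ encoded) / N      # recovers the per-detector signals (H H^T = N I)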

  9. Comparison of different estimation techniques for biomass concentration in large scale yeast fermentation.

    PubMed

    Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U

    2011-04-01

    In this study, five previously developed state estimation methods are examined and compared for the estimation of biomass concentrations in a production-scale fed-batch bioprocess. These methods are (i) estimation based on a kinetic model of overflow metabolism; (ii) estimation based on a metabolic black-box model; (iii) estimation based on an observer; (iv) estimation based on an artificial neural network; (v) estimation based on differential evolution. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, number of primary measurements required and adaptation to different working conditions. Among the various techniques, the metabolic black-box method seems to have advantages although the number of measurements required is more than that for the other methods. However, the required extra measurements are based on commonly employed instruments in an industrial environment. This method is used for developing a model based control of fed-batch yeast fermentations. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Many multicenter trials had few events per center, requiring analysis via random-effects models or GEEs.

    PubMed

    Kahan, Brennan C; Harhay, Michael O

    2015-12-01

    Adjustment for center in multicenter trials is recommended when there are between-center differences or when randomization has been stratified by center. However, common methods of analysis (such as fixed-effects, Mantel-Haenszel, or stratified Cox models) often require a large number of patients or events per center to perform well. We reviewed 206 multicenter randomized trials published in four general medical journals to assess the average number of patients and events per center and determine whether appropriate methods of analysis were used in trials with few patients or events per center. The median number of events per center/treatment arm combination for trials using a binary or survival outcome was 3 (interquartile range, 1-10). Sixteen percent of trials had less than 1 event per center/treatment combination, 50% fewer than 3, and 63% fewer than 5. Of the trials which adjusted for center using a method of analysis which requires a large number of events per center, 6% had less than 1 event per center-treatment combination, 25% fewer than 3, and 50% fewer than 5. Methods of analysis that allow for few events per center, such as random-effects models or generalized estimating equations (GEEs), were rarely used. Many multicenter trials contain few events per center. Adjustment for center using random-effects models or GEE with model-based (non-robust) standard errors may be beneficial in these scenarios. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. SWAT: Model use, calibration, and validation

    USDA-ARS?s Scientific Manuscript database

    SWAT (Soil and Water Assessment Tool) is a comprehensive, semi-distributed river basin model that requires a large number of input parameters which complicates model parameterization and calibration. Several calibration techniques have been developed for SWAT including manual calibration procedures...

  12. Coherent photonic beamformer for a Ka-band phased array antenna receiver implemented in silicon photonic integrated circuit

    NASA Astrophysics Data System (ADS)

    Duarte, V. C.; Peczek, A.; Drummond, M. V.; Nogueira, R. N.; Winzer, G.; Petousi, D.; Zimmermann, L.

    2017-09-01

    The generation of satellite communications with flexible and efficient transmission of radio signals requires a large number of low interfering beams and a maximum exploitation of the available frequency spectrum.

  13. Reduction of Bridge Deck Cracking through Alternative Material Usage

    DOT National Transportation Integrated Search

    2017-12-01

    ODOT routinely deploys a large number of continuous span structural slab bridges. Despite being designed to strictly satisfy all the relevant AASHTO and ODOT BDM requirements, many such bridge decks show transverse cracks, with widths greater than th...

  14. Rating and analysis of continuous girder bridges.

    DOT National Transportation Integrated Search

    1980-01-01

    Federal regulations prompted as a result of bridge failures require the rating of bridge structures for which federal funds will be utilized for rehabilitation and replacement. The large number of bridges in Virginia subject to being rated makes such...

  15. Review of sign overlay procedures in Virginia.

    DOT National Transportation Integrated Search

    1984-01-01

    Maintaining the large number of signs on the state's roads demands a substantial effort, especially now that many of the signs erected during construction of the interstate and urban arterial systems are deteriorating to the point of requiring replac...

  16. Multi-Modal Traveler Information System - GCM Corridor Architecture Interface Control Requirements

    DOT National Transportation Integrated Search

    1997-10-31

    The Multi-Modal Traveler Information System (MMTIS) project involves a large number of Intelligent Transportation System (ITS) related tasks. It involves research of all ITS initiatives in the Gary-Chicago-Milwaukee (GCM) Corridor which are currently...

  17. Multi-Modal Traveler Information System - GCM Corridor Architecture Functional Requirements

    DOT National Transportation Integrated Search

    1997-11-17

    The Multi-Modal Traveler Information System (MMTIS) project involves a large number of Intelligent Transportation System (ITS) related tasks. It involves research of all ITS initiatives in the Gary-Chicago-Milwaukee (GCM) Corridor which are currently...

  18. Evaluation of a rapid diagnostic field test kit for identification of Phytophthora ramorum, P. kernoviae and other Phytophthora species at the point of inspection

    Treesearch

    C.R. Lane; E. Hobden; L. Laurenson; V.C. Barton; K.J.D. Hughes; H. Swan; N. Boonham; A.J. Inman

    2008-01-01

    Plant health regulations to prevent the introduction and spread of Phytophthora ramorum and P. kernoviae require rapid, cost effective diagnostic methods for screening large numbers of plant samples at the time of inspection. Current on-site techniques require expensive equipment, considerable expertise and are not suited for plant...

  19. Defining Constellation Suit Helmet Field of View Requirements Employing a Mission Segment Based Reduction Process

    NASA Technical Reports Server (NTRS)

    McFarland, Shane

    2009-01-01

    Field of view has always been a design feature paramount to helmets, and in particular space suits, where the helmet must provide an adequate field of view for a large range of activities, environments, and body positions. For Project Constellation, a different approach to helmet requirement maturation was utilized: one that was less a direct function of body position and suit pressure and more a function of the mission segment in which the field of view will be required. Through taxonomization of various parameters that affect suited field of view, as well as consideration of possible nominal and contingency operations during that mission segment, a reduction process was employed to condense the large number of possible outcomes to only six unique field of view angle requirements that still captured all necessary variables while sacrificing minimal fidelity.

  20. Peptide arrays on cellulose support: SPOT synthesis, a time and cost efficient method for synthesis of large numbers of peptides in a parallel and addressable fashion.

    PubMed

    Hilpert, Kai; Winkler, Dirk F H; Hancock, Robert E W

    2007-01-01

    Peptide synthesis on cellulose using SPOT technology allows the parallel synthesis of large numbers of addressable peptides in small amounts. In addition, the cost per peptide is less than 1% of peptides synthesized conventionally on resin. The SPOT method follows standard fluorenyl-methoxy-carbonyl chemistry on conventional cellulose sheets, and can utilize more than 600 different building blocks. The procedure involves three phases: preparation of the cellulose membrane, stepwise coupling of the amino acids and cleavage of the side-chain protection groups. If necessary, peptides can be cleaved from the membrane for assays performed using soluble peptides. These features make this method an excellent tool for screening large numbers of peptides for many different purposes. Potential applications range from simple binding assays, to more sophisticated enzyme assays and studies with living microbes or cells. The time required to complete the protocol depends on the number and length of the peptides. For example, 400 9-mer peptides can be synthesized within 6 days.

  1. YBYRÁ facilitates comparison of large phylogenetic trees.

    PubMed

    Machado, Denis Jacob

    2015-07-01

    The number and size of tree topologies that are being compared by phylogenetic systematists are increasing due to technological advancements in high-throughput DNA sequencing. However, we still lack tools to facilitate comparison among phylogenetic trees with a large number of terminals. The "YBYRÁ" project integrates software solutions for data analysis in phylogenetics. It comprises tools for (1) topological distance calculation based on the number of shared splits or clades, (2) sensitivity analysis and automatic generation of sensitivity plots and (3) clade diagnoses based on different categories of synapomorphies. YBYRÁ also provides (4) an original framework to facilitate the search for potential rogue taxa based on how much they affect average matching split distances (using MSdist). YBYRÁ facilitates comparison of large phylogenetic trees and outperforms competing software in terms of usability and time efficiency, especially for large data sets. The programs that comprise this toolkit are written in Python, hence they do not require installation and have minimal dependencies. The entire project is available under an open-source licence at http://www.ib.usp.br/grant/anfibios/researchSoftware.html.
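    The shared-splits idea behind such topological distances can be sketched in a few lines of Python (a generic illustration only, not YBYRÁ's own code; the toy trees and helper name are hypothetical):

      # Minimal sketch of a shared-splits comparison between two unrooted trees.
      # Each tree is represented simply as a set of splits (bipartitions), where a
      # split is the frozenset of taxa on one side of an internal edge.

      def shared_splits_distance(splits_a, splits_b):
          """Return (number of shared splits, number of unshared splits)."""
          shared = splits_a & splits_b
          unshared = splits_a ^ splits_b   # symmetric difference, Robinson-Foulds style
          return len(shared), len(unshared)

      # Hypothetical 5-taxon trees that agree on one internal split.
      tree1 = {frozenset({"A", "B"}), frozenset({"D", "E"})}
      tree2 = {frozenset({"A", "B"}), frozenset({"C", "E"})}
      print(shared_splits_distance(tree1, tree2))   # -> (1, 2)

    Representing each tree by its set of bipartitions turns the comparison into cheap set operations, which is one reason split-based distances remain practical for trees with many terminals.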

  2. Self-organization in a distributed coordination game through heuristic rules

    NASA Astrophysics Data System (ADS)

    Agarwal, Shubham; Ghosh, Diptesh; Chakrabarti, Anindya S.

    2016-12-01

    In this paper, we consider a distributed coordination game played by a large number of agents with finite information sets, which characterizes emergence of a single dominant attribute out of a large number of competitors. Formally, N agents play a coordination game repeatedly, which has exactly N pure strategy Nash equilibria, and all of the equilibria are equally preferred by the agents. The problem is to select one equilibrium out of N possible equilibria in the least number of attempts. We propose a number of heuristic rules based on reinforcement learning to solve the coordination problem. We see that the agents self-organize into clusters with varying intensities depending on the heuristic rule applied, although all clusters but one are transitory in most cases. Finally, we characterize a trade-off in terms of the time requirement to achieve a degree of stability in strategies versus the efficiency of such a solution.
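    A toy version of such a reinforcement-learning selection rule can be sketched as follows (the payoff and propensity update used here are illustrative assumptions, not the specific heuristics proposed in the paper):

      import random

      # Toy sketch: N agents repeatedly choose one of N options; an option pays off in
      # proportion to how many other agents picked it, and that payoff reinforces the
      # chosen option, so the population tends to drift toward a single dominant option.
      N = 20
      steps = 2000
      propensity = [[1.0] * N for _ in range(N)]   # propensity[agent][option]

      for _ in range(steps):
          choices = [random.choices(range(N), weights=propensity[a])[0] for a in range(N)]
          counts = [choices.count(s) for s in range(N)]
          for a in range(N):
              payoff = (counts[choices[a]] - 1) / (N - 1)   # fraction of others coordinating
              propensity[a][choices[a]] += payoff

      preferred = [max(range(N), key=lambda s: propensity[a][s]) for a in range(N)]
      print("most-preferred option per agent:", preferred)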

  3. Assessing the convergence of LHS Monte Carlo simulations of wastewater treatment models.

    PubMed

    Benedetti, Lorenzo; Claeys, Filip; Nopens, Ingmar; Vanrolleghem, Peter A

    2011-01-01

    Monte Carlo (MC) simulation appears to be the only currently adopted tool to estimate global sensitivities and uncertainties in wastewater treatment modelling. Such models are highly complex, dynamic and non-linear, requiring long computation times, especially in the scope of MC simulation, due to the large number of simulations usually required. However, no stopping rule to decide on the number of simulations required to achieve a given confidence in the MC simulation results has been adopted so far in the field. In this work, a pragmatic method is proposed to minimize the computation time by using a combination of several criteria. It makes no use of prior knowledge about the model, is very simple, intuitive and can be automated: all convenient features in engineering applications. A case study is used to show an application of the method, and the results indicate that the required number of simulations strongly depends on the model output(s) selected, and on the type and desired accuracy of the analysis conducted. Hence, no prior indication is available regarding the necessary number of MC simulations, but the proposed method is capable of dealing with these variations and stopping the calculations after convergence is reached.
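    The flavour of such a stopping rule can be illustrated with a short sketch that adds batches of runs and stops once summary statistics stabilise (the batch size, tolerance, and statistics below are assumed for illustration; the paper's actual combination of criteria differs):

      import numpy as np

      rng = np.random.default_rng(0)

      def model_output(params):
          # Stand-in for one expensive wastewater-model simulation.
          return params[0] ** 2 + 0.1 * params[1]

      def run_until_converged(batch=50, rel_tol=0.01, max_runs=5000):
          """Add batches of runs until the mean and 95th percentile stop moving."""
          outputs, prev = [], None
          while len(outputs) < max_runs:
              samples = rng.uniform(0.0, 1.0, size=(batch, 2))   # an LHS design would go here
              outputs.extend(model_output(p) for p in samples)
              stats = (np.mean(outputs), np.percentile(outputs, 95))
              if prev is not None:
                  change = max(abs(s - q) / max(abs(q), 1e-12) for s, q in zip(stats, prev))
                  if change < rel_tol:
                      break
              prev = stats
          return len(outputs), stats

      print(run_until_converged())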

  4. Maximizing the Scientific Return of Low Cost Planetary Missions Using Solar Electric Propulsion(abstract)

    NASA Technical Reports Server (NTRS)

    Russell, C. T.; Metzger, A.; Pieters, C.; Elphic, R. C.; McCord, T.; Head, J.; Abshire, J.; Philips, R.; Sykes, M.; A'Hearn, M.

    1994-01-01

    After many years of development, solar electric propulsion is now a practical low cost alternative for many planetary missions. In response to the recent Discovery AO, we and a number of colleagues have examined the scientific return from a mission to map the Moon and then rendezvous with a small body. In planning this mission, we found that solar electric propulsion was quite affordable under the Discovery guidelines, that many targets could be reached more rapidly with solar electric propulsion than with chemical propulsion, that a large number of planetary bodies were accessible with modest propulsion systems, and that such missions were quite adaptable, with generous launch windows which minimized mission risks. Moreover, solar electric propulsion is ideally suited for large payloads requiring a large amount of power.

  5. Large Data at Small Universities: Astronomical processing using a computer classroom

    NASA Astrophysics Data System (ADS)

    Fuller, Nathaniel James; Clarkson, William I.; Fluharty, Bill; Belanger, Zach; Dage, Kristen

    2016-06-01

    The use of large computing clusters for astronomy research is becoming more commonplace as datasets expand, but access to these required resources is sometimes difficult for research groups working at smaller Universities. As an alternative to purchasing processing time on an off-site computing cluster, or purchasing dedicated hardware, we show how one can easily build a crude on-site cluster by utilizing idle cycles on instructional computers in computer-lab classrooms. Since these computers are maintained as part of the educational mission of the University, the resource impact on the investigator is generally low. By using open source Python routines, it is possible to have a large number of desktop computers working together via a local network to sort through large data sets. By running traditional analysis routines in an “embarrassingly parallel” manner, gains in speed are accomplished without requiring the investigator to learn how to write routines using highly specialized methodology. We demonstrate this concept here applied to 1. photometry of large-format images and 2. statistical significance-tests for X-ray lightcurve analysis. In these scenarios, we see a speed-up factor which scales almost linearly with the number of cores in the cluster. Additionally, we show that the usage of the cluster does not severely limit performance for a local user, and indeed the processing can be performed while the computers are in use for classroom purposes.
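    On a single machine, the same "embarrassingly parallel" pattern can be illustrated with the standard library (the file names and per-image function here are hypothetical stand-ins; the authors distribute work across networked classroom machines rather than local cores):

      from multiprocessing import Pool

      def process_image(path):
          # Stand-in for per-image photometry; a real task would open the FITS file
          # and measure source fluxes. Each image is independent of the others.
          return path, sum(ord(c) for c in path)   # dummy "measurement"

      if __name__ == "__main__":
          image_files = [f"frame_{i:04d}.fits" for i in range(100)]   # hypothetical inputs
          with Pool(processes=8) as pool:
              results = pool.map(process_image, image_files)   # embarrassingly parallel
          print(len(results), "images processed")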

  6. FASTPM: a new scheme for fast simulations of dark matter and haloes

    NASA Astrophysics Data System (ADS)

    Feng, Yu; Chu, Man-Yat; Seljak, Uroš; McDonald, Patrick

    2016-12-01

    We introduce FASTPM, a highly scalable approximated particle mesh (PM) N-body solver, which implements the PM scheme enforcing correct linear displacement (1LPT) evolution via modified kick and drift factors. Employing a two-dimensional domain decomposing scheme, FASTPM scales extremely well with a very large number of CPUs. In contrast to the Comoving-Lagrangian (COLA) approach, we do not need to split the force or separately track the 2LPT solution, reducing the code complexity and memory requirements. We compare FASTPM with different numbers of steps (Ns) and force resolution factor (B) against three benchmarks: halo mass function from friends-of-friends halo finder; halo and dark matter power spectrum; and cross-correlation coefficient (or stochasticity), relative to a high-resolution TREEPM simulation. We show that the modified time stepping scheme reduces the halo stochasticity when compared to COLA with the same number of steps and force resolution. While increasing Ns and B improves the transfer function and cross-correlation coefficient, for many applications FASTPM achieves sufficient accuracy at low Ns and B. For example, an Ns = 10 and B = 2 simulation provides a substantial saving (a factor of 10) of computing time relative to an Ns = 40, B = 3 simulation, yet the halo benchmarks are very similar at z = 0. We find that for abundance matched haloes the stochasticity remains low even for Ns = 5. FASTPM compares well against less expensive schemes, being only 7 (4) times more expensive than a 2LPT initial condition generator for Ns = 10 (Ns = 5). Some of the applications where FASTPM can be useful are generating a large number of mocks, producing non-linear statistics where one varies a large number of nuisance or cosmological parameters, or serving as part of an initial conditions solver.
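    The generic kick-drift structure of a particle-mesh step looks like the following sketch (plain leapfrog with made-up data; FASTPM's modified kick and drift factors that enforce 1LPT growth are not reproduced here):

      import numpy as np

      def kick(vel, acc, dt_kick):
          # Update velocities from the current force field.
          return vel + acc * dt_kick

      def drift(pos, vel, dt_drift, box=1.0):
          # Move particles with the updated velocities (periodic box assumed).
          return (pos + vel * dt_drift) % box

      # Toy force-free setup: 1000 particles in a unit box.
      rng = np.random.default_rng(1)
      pos = rng.random((1000, 3))
      vel = np.zeros_like(pos)
      acc = np.zeros_like(pos)   # a PM code would compute this from the density grid
      for step in range(10):
          vel = kick(vel, acc, dt_kick=0.05)
          pos = drift(pos, vel, dt_drift=0.05)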

  7. A three-dimensional ground-water-flow model modified to reduce computer-memory requirements and better simulate confining-bed and aquifer pinchouts

    USGS Publications Warehouse

    Leahy, P.P.

    1982-01-01

    The Trescott computer program for modeling groundwater flow in three dimensions has been modified to (1) treat aquifer and confining bed pinchouts more realistically and (2) reduce the computer memory requirements needed for the input data. Using the original program, simulation of aquifer systems with nonrectangular external boundaries may result in a large number of nodes that are not involved in the numerical solution of the problem, but require computer storage. (USGS)

  8. Masked multichannel analyzer

    DOEpatents

    Winiecki, A.L.; Kroop, D.C.; McGee, M.K.; Lenkszus, F.R.

    1984-01-01

    An analytical instrument and particularly a time-of-flight-mass spectrometer for processing a large number of analog signals irregularly spaced over a spectrum, with programmable masking of portions of the spectrum where signals are unlikely in order to reduce memory requirements and/or with a signal capturing assembly having a plurality of signal capturing devices fewer in number than the analog signals for use in repeated cycles within the data processing time period.

  9. Masked multichannel analyzer

    DOEpatents

    Winiecki, Alan L.; Kroop, David C.; McGee, Marilyn K.; Lenkszus, Frank R.

    1986-01-01

    An analytical instrument and particularly a time-of-flight-mass spectrometer for processing a large number of analog signals irregularly spaced over a spectrum, with programmable masking of portions of the spectrum where signals are unlikely in order to reduce memory requirements and/or with a signal capturing assembly having a plurality of signal capturing devices fewer in number than the analog signals for use in repeated cycles within the data processing time period.

  10. Performance of ceramic superconductors in magnetic bearings

    NASA Technical Reports Server (NTRS)

    Kirtley, James L., Jr.; Downer, James R.

    1993-01-01

    Magnetic bearings are large-scale applications of magnet technology, quite similar in certain ways to synchronous machinery. They require substantial flux density over relatively large volumes of space. Large flux density is required to have satisfactory force density. Satisfactory dynamic response requires that magnetic circuit permeances not be too large, implying large air gaps. Superconductors, which offer large magnetomotive forces and high flux density in low permeance circuits, appear to be desirable in these situations. Flux densities substantially in excess of those possible with iron can be produced, and no ferromagnetic material is required. Thus the inductance of active coils can be made low, indicating good dynamic response of the bearing system. The principal difficulty in using superconductors is, of course, the deep cryogenic temperatures at which they must operate. Because of the difficulties in working with liquid helium, the possibility of superconductors which can be operated in liquid nitrogen is thought to extend the number and range of applications of superconductivity. Critical temperatures of about 98 degrees Kelvin were demonstrated in a class of materials which are, in fact, ceramics. Quite a bit of public attention was attracted to these new materials. There is a difficulty with the ceramic superconducting materials which were developed to date. Current densities sufficient for use in large-scale applications have not been demonstrated. In order to be useful, superconductors must be capable of carrying substantial currents in the presence of large magnetic fields. The possible use of ceramic superconductors in magnetic bearings is investigated and discussed and requirements that must be achieved by superconductors operating at liquid nitrogen temperatures to make their use comparable with niobium-titanium superconductors operating at liquid helium temperatures are identified.

  11. Mass Spectrometry Strategies for Clinical Metabolomics and Lipidomics in Psychiatry, Neurology, and Neuro-Oncology

    PubMed Central

    Wood, Paul L

    2014-01-01

    Metabolomics research has the potential to provide biomarkers for the detection of disease, for subtyping complex disease populations, for monitoring disease progression and therapy, and for defining new molecular targets for therapeutic intervention. These potentials are far from being realized because of a number of technical, conceptual, financial, and bioinformatics issues. Mass spectrometry provides analytical platforms that address the technical barriers to success in metabolomics research; however, the limited commercial availability of analytical and stable isotope standards has created a bottleneck for the absolute quantitation of a number of metabolites. Conceptual and financial factors contribute to the generation of statistically under-powered clinical studies, whereas bioinformatics issues result in the publication of a large number of unidentified metabolites. The path forward in this field involves targeted metabolomics analyses of large control and patient populations to define both the normal range of a defined metabolite and the potential heterogeneity (eg, bimodal) in complex patient populations. This approach requires that metabolomics research groups, in addition to developing a number of analytical platforms, build sufficient chemistry resources to supply the analytical standards required for absolute metabolite quantitation. Examples of metabolomics evaluations of sulfur amino-acid metabolism in psychiatry, neurology, and neuro-oncology and of lipidomics in neurology will be reviewed. PMID:23842599

  12. Mass spectrometry strategies for clinical metabolomics and lipidomics in psychiatry, neurology, and neuro-oncology.

    PubMed

    Wood, Paul L

    2014-01-01

    Metabolomics research has the potential to provide biomarkers for the detection of disease, for subtyping complex disease populations, for monitoring disease progression and therapy, and for defining new molecular targets for therapeutic intervention. These potentials are far from being realized because of a number of technical, conceptual, financial, and bioinformatics issues. Mass spectrometry provides analytical platforms that address the technical barriers to success in metabolomics research; however, the limited commercial availability of analytical and stable isotope standards has created a bottleneck for the absolute quantitation of a number of metabolites. Conceptual and financial factors contribute to the generation of statistically under-powered clinical studies, whereas bioinformatics issues result in the publication of a large number of unidentified metabolites. The path forward in this field involves targeted metabolomics analyses of large control and patient populations to define both the normal range of a defined metabolite and the potential heterogeneity (eg, bimodal) in complex patient populations. This approach requires that metabolomics research groups, in addition to developing a number of analytical platforms, build sufficient chemistry resources to supply the analytical standards required for absolute metabolite quantitation. Examples of metabolomics evaluations of sulfur amino-acid metabolism in psychiatry, neurology, and neuro-oncology and of lipidomics in neurology will be reviewed.

  13. The Population Tracking Model: A Simple, Scalable Statistical Model for Neural Population Data

    PubMed Central

    O'Donnell, Cian; Gonçalves, J. Tiago; Whiteley, Nick; Portera-Cailliau, Carlos; Sejnowski, Terrence J.

    2017-01-01

    Our understanding of neural population coding has been limited by a lack of analysis methods to characterize spiking data from large populations. The biggest challenge comes from the fact that the number of possible network activity patterns scales exponentially with the number of neurons recorded (∼2^N for N neurons). Here we introduce a new statistical method for characterizing neural population activity that requires semi-independent fitting of only as many parameters as the square of the number of neurons, requiring drastically smaller data sets and minimal computation time. The model works by matching the population rate (the number of neurons synchronously active) and the probability that each individual neuron fires given the population rate. We found that this model can accurately fit synthetic data from up to 1000 neurons. We also found that the model could rapidly decode visual stimuli from neural population data from macaque primary visual cortex about 65 ms after stimulus onset. Finally, we used the model to estimate the entropy of neural population activity in developing mouse somatosensory cortex and, surprisingly, found that it first increases, and then decreases during development. This statistical model opens new options for interrogating neural population data and can bolster the use of modern large-scale in vivo Ca2+ and voltage imaging tools. PMID:27870612
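    The two quantities the model matches can be computed directly from a binary spike matrix, as in this sketch (synthetic data; the fitted model itself has additional structure not shown here):

      import numpy as np

      rng = np.random.default_rng(2)
      spikes = rng.random((5000, 50)) < 0.05          # time bins x neurons, synthetic data
      n_bins, n_neurons = spikes.shape

      # Ingredient 1: distribution of the population rate (neurons active per time bin).
      pop_rate = spikes.sum(axis=1)
      rate_prob = np.bincount(pop_rate, minlength=n_neurons + 1) / n_bins

      # Ingredient 2: probability that each neuron fires, given the population rate.
      fire_given_rate = np.zeros((n_neurons + 1, n_neurons))
      for k in range(n_neurons + 1):
          bins_at_k = spikes[pop_rate == k]
          if len(bins_at_k):
              fire_given_rate[k] = bins_at_k.mean(axis=0)

      print(rate_prob[:5], fire_given_rate.shape)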

  14. Experience with specifications applicable to certification. [of photovoltaic modules for large-scale application

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1982-01-01

    The Jet Propulsion Laboratory has developed a number of photovoltaic test and measurement specifications to guide the development of modules toward the requirements of future large-scale applications. Experience with these specifications and the extensive module measurement and testing that has accompanied their use is examined. Conclusions are drawn relative to three aspects of product certification: performance measurement, endurance testing and safety evaluation.

  15. High-Performance Reactive Particle Tracking with Adaptive Representation

    NASA Astrophysics Data System (ADS)

    Schmidt, M.; Benson, D. A.; Pankavich, S.

    2017-12-01

    Lagrangian particle tracking algorithms have been shown to be effective tools for modeling chemical reactions in imperfectly-mixed media. One disadvantage of these algorithms is the possible need to employ large numbers of particles in simulations, depending on the concentration covariance structure, and these large particle numbers can lead to long computation times. Two distinct approaches have recently arisen to overcome this. One method employs spatial kernels that are related to a specified, reduced particle number; however, over-wide kernels, dictated by a very low particle number, lead to an excess of reaction calculations and cause a reduction in performance. Another formulation involves hybrid particles that carry multiple species of reactant, wherein each particle is treated as its own well-mixed volume, obviating the need for large numbers of particles for each species but still requiring a fixed number of hybrid particles. Here, we combine these two approaches and demonstrate an improved method for simulating a given system in a computationally efficient manner. Additionally, the independent nature of transport and reaction calculations in this approach allows for significant gains via parallelization in an MPI or OpenMP context. For benchmarking, we choose a CO2 injection simulation with dissolution and precipitation of calcite and dolomite, allowing us to derive the proper treatment of interaction between solid and aqueous phases.

  16. Placement-aware decomposition of a digital standard cells library for double patterning lithography

    NASA Astrophysics Data System (ADS)

    Wassal, Amr G.; Sharaf, Heba; Hammouda, Sherif

    2012-11-01

    To continue scaling the circuit features down, Double Patterning (DP) technology is needed in 22nm technologies and lower. DP requires decomposing the layout features into two masks for pitch relaxation, such that the spacing between any two features on each mask is greater than the minimum allowed mask spacing. The relaxed pitches of each mask are then processed on two separate exposure steps. In many cases, post-layout decomposition fails to decompose the layout into two masks due to the presence of conflicts. Post-layout decomposition of a standard cells block can result in native conflicts inside the cells (internal conflict), or native conflicts on the boundary between two cells (boundary conflict). Resolving native conflicts requires a redesign and/or multiple iterations for the placement and routing phases to get a clean decomposition. Therefore, DP compliance must be considered in earlier phases, before getting the final placed cell block. The main focus of this paper is generating a library of decomposed standard cells to be used in a DP-aware placer. This library should contain all possible decompositions for each standard cell, i.e., these decompositions consider all possible combinations of boundary conditions. However, the large number of combinations of boundary conditions for each standard cell will significantly increase the processing time and effort required to obtain all possible decompositions. Therefore, an efficient methodology is required to reduce this large number of combinations. In this paper, three different reduction methodologies are proposed to reduce the number of different combinations processed to get the decomposed library. Experimental results show a significant reduction in the number of combinations and decompositions needed for the library processing. To generate and verify the proposed flow and methodologies, a prototype for a placement-aware DP-ready cell-library is developed with an optimized number of cell views.
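    At its core, checking whether a set of layout features is decomposable into two masks amounts to two-colouring the conflict graph, and a native conflict corresponds to an odd cycle. A minimal sketch (hypothetical feature indices and conflict pairs, not the paper's methodology):

      from collections import deque

      def decompose_two_masks(n_features, conflict_pairs):
          """Two-colour the conflict graph; return mask labels, or None on a native conflict."""
          adj = [[] for _ in range(n_features)]
          for a, b in conflict_pairs:
              adj[a].append(b)
              adj[b].append(a)
          mask = [None] * n_features
          for start in range(n_features):
              if mask[start] is not None:
                  continue
              mask[start] = 0
              queue = deque([start])
              while queue:
                  u = queue.popleft()
                  for v in adj[u]:
                      if mask[v] is None:
                          mask[v] = 1 - mask[u]
                          queue.append(v)
                      elif mask[v] == mask[u]:
                          return None   # odd cycle: native conflict, not decomposable
          return mask

      print(decompose_two_masks(3, [(0, 1), (1, 2), (0, 2)]))   # None (triangle)
      print(decompose_two_masks(4, [(0, 1), (1, 2), (2, 3)]))   # e.g. [0, 1, 0, 1]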

  17. Wind-turbine-performance assessment

    NASA Astrophysics Data System (ADS)

    Vachon, W. A.

    1982-06-01

    An updated summary of recent test data and experiences is reported from both federally and privately funded large wind turbine (WT) development and test programs, and from key WT programs in Europe. Progress and experiences on the cluster of three MOD-2 2.5-MW WTs, the MOD-1 2-MW WT, and other WT installations are described. An examination of recent test experiences and plans from approximately five privately funded large WT programs in the United States indicates that, during machine checkout and startup, a number of technical problems are identified which will require design changes and create program delays.

  18. Some anomalies observed in wind-tunnel tests of a blunt body at transonic and supersonic speeds

    NASA Technical Reports Server (NTRS)

    Brooks, J. D.

    1976-01-01

    An investigation of anomalies observed in wind tunnel force tests of a blunt body configuration was conducted at Mach numbers from 0.20 to 1.35 in the Langley 8-foot transonic pressure tunnel and at Mach numbers of 1.50, 1.80, and 2.16 in the Langley Unitary Plan wind tunnel. At a Mach number of 1.35, large variations occurred in axial force coefficient at a given angle of attack. At transonic and low supersonic speeds, the total drag measured in the wind tunnel was much lower than that measured during earlier ballistic range tests. Accurate measurements of total drag for blunt bodies will require the use of models smaller than those tested thus far; however, it appears that accurate forebody drag results can be obtained by using relatively large models. Shock standoff distance is presented from experimental data over the Mach number range from 1.05 to 4.34. Theory accurately predicts the shock standoff distance at Mach numbers up to 1.75.

  19. The benefits of adaptive parametrization in multi-objective Tabu Search optimization

    NASA Astrophysics Data System (ADS)

    Ghisu, Tiziano; Parks, Geoffrey T.; Jaeggi, Daniel M.; Jarrett, Jerome P.; Clarkson, P. John

    2010-10-01

    In real-world optimization problems, large design spaces and conflicting objectives are often combined with a large number of constraints, resulting in a highly multi-modal, challenging, fragmented landscape. The local search at the heart of Tabu Search, while being one of its strengths in highly constrained optimization problems, requires a large number of evaluations per optimization step. In this work, a modification of the pattern search algorithm is proposed: this modification, based on a Principal Components' Analysis of the approximation set, allows both a re-alignment of the search directions, thereby creating a more effective parametrization, and also an informed reduction of the size of the design space itself. These changes make the optimization process more computationally efficient and more effective - higher quality solutions are identified in fewer iterations. These advantages are demonstrated on a number of standard analytical test functions (from the ZDT and DTLZ families) and on a real-world problem (the optimization of an axial compressor preliminary design).
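    The re-alignment step can be illustrated by running a principal components' analysis on the current approximation set and taking the resulting axes as new search directions (a generic numpy sketch with made-up points, not the authors' implementation):

      import numpy as np

      def principal_directions(approximation_set):
          """Orthogonal directions aligned with the spread of the current approximation set."""
          X = np.asarray(approximation_set, dtype=float)
          Xc = X - X.mean(axis=0)
          # Rows of Vt are the principal axes; singular values rank their importance.
          _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
          return Vt, s

      # Hypothetical approximation set in a 4-variable design space.
      rng = np.random.default_rng(3)
      points = rng.normal(size=(30, 4)) * np.array([5.0, 2.0, 0.5, 0.1])
      axes, weights = principal_directions(points)
      print(axes.shape, np.round(weights, 2))
      # Axes with small singular values are candidates for dropping, shrinking the design space.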

  20. Methanol production from Eucalyptus wood chips. Working Document 2. Vegetative propagation of Eucalypts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fishkind, H.H.

    1982-04-01

    The feasibility of large-scale plantation establishment by various methods was examined, and the following conclusions were reached: seedling plantations are limited in potential yield due to genetic variation among the planting stock and often inadequate supplies of appropriate seed; vegetative propagation by rooted cuttings can provide good genetic uniformity of select hybrid planting stock; however, large-scale production requires establishment and maintenance of extensive cutting orchards. The collection of shoots and preparation of cuttings, although successfully implemented in the Congo and Brazil, would not be economically feasible in Florida for large-scale plantations; tissue culture propagation of select hybrid eucalypts offers the only opportunity to produce the very large number of trees required to establish the energy plantation. The cost of tissue culture propagation, although higher than seedling production, is more than offset by the increased productivity of vegetative plantations established from select hybrid Eucalyptus.

  1. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    PubMed Central

    Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.

    2015-01-01

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
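    The recursive equations referred to above have, in their plain serial form, the shape s(x, y) = i(x, y) + s(x-1, y) + s(x, y-1) - s(x-1, y-1); a minimal software version is sketched below (the paper's contribution is the row-parallel hardware decomposition of this recursion, which is not shown):

      import numpy as np

      def integral_image(img):
          """Serial integral image via the usual recursion."""
          h, w = img.shape
          ii = np.zeros((h, w), dtype=np.int64)
          for y in range(h):
              for x in range(w):
                  ii[y, x] = (int(img[y, x])
                              + (ii[y - 1, x] if y > 0 else 0)
                              + (ii[y, x - 1] if x > 0 else 0)
                              - (ii[y - 1, x - 1] if y > 0 and x > 0 else 0))
          return ii

      img = np.arange(12).reshape(3, 4)
      # Cross-check against cumulative sums along both axes.
      assert np.array_equal(integral_image(img), img.cumsum(axis=0).cumsum(axis=1))

    Once the integral image exists, the sum over any axis-aligned rectangle needs only four lookups, which is what makes box-filter features in SURF-style detectors cheap regardless of filter size.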

  2. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    PubMed

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  3. High Throughput Exposure Estimation Using NHANES Data (SOT)

    EPA Science Inventory

    In the ExpoCast project, high throughput (HT) exposure models enable rapid screening of large numbers of chemicals for exposure potential. Evaluation of these models requires empirical exposure data and due to the paucity of human metabolism/exposure data such evaluations includ...

  4. A method of hidden Markov model optimization for use with geophysical data sets

    NASA Technical Reports Server (NTRS)

    Granat, R. A.

    2003-01-01

    Geophysics research has been faced with a growing need for automated techniques with which to process large quantities of data. A successful tool must meet a number of requirements: it should be consistent, require minimal parameter tuning, and produce scientifically meaningful results in reasonable time. We introduce a hidden Markov model (HMM)-based method for analysis of geophysical data sets that attempts to address these issues.
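    One standard HMM building block that such a tool relies on is Viterbi decoding of the most likely hidden-state sequence; a compact numpy version is sketched below (toy probabilities; this is generic HMM machinery, not the optimization method of the paper):

      import numpy as np

      def viterbi(log_A, log_B, log_pi):
          """Most likely state path. log_A[i, j]: transition, log_B[t, i]: emission, log_pi[i]: initial."""
          T, N = log_B.shape
          delta = np.zeros((T, N))
          back = np.zeros((T, N), dtype=int)
          delta[0] = log_pi + log_B[0]
          for t in range(1, T):
              scores = delta[t - 1][:, None] + log_A        # scores[i, j]: arrive in j from i
              back[t] = scores.argmax(axis=0)
              delta[t] = scores.max(axis=0) + log_B[t]
          path = [int(delta[-1].argmax())]
          for t in range(T - 1, 0, -1):
              path.append(int(back[t, path[-1]]))
          return path[::-1]

      # Toy two-state example with made-up probabilities.
      log_A = np.log([[0.9, 0.1], [0.2, 0.8]])
      log_pi = np.log([0.5, 0.5])
      log_B = np.log([[0.8, 0.2], [0.7, 0.3], [0.1, 0.9], [0.2, 0.8]])
      print(viterbi(log_A, log_B, log_pi))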

  5. Concentration of Enteroviruses, Adenoviruses, and Noroviruses from Drinking Water by Use of Glass Wool Filters

    PubMed Central

    Lambertini, Elisabetta; Spencer, Susan K.; Bertz, Phillip D.; Loge, Frank J.; Kieke, Burney A.; Borchardt, Mark A.

    2008-01-01

    Available filtration methods to concentrate waterborne viruses are either too costly for studies requiring large numbers of samples, limited to small sample volumes, or not very portable for routine field applications. Sodocalcic glass wool filtration is a cost-effective and easy-to-use method to retain viruses, but its efficiency and reliability are not adequately understood. This study evaluated glass wool filter performance to concentrate the four viruses on the U.S. Environmental Protection Agency contaminant candidate list, i.e., coxsackievirus, echovirus, norovirus, and adenovirus, as well as poliovirus. Total virus numbers recovered were measured by quantitative reverse transcription-PCR (qRT-PCR); infectious polioviruses were quantified by integrated cell culture (ICC)-qRT-PCR. Recovery efficiencies averaged 70% for poliovirus, 14% for coxsackievirus B5, 19% for echovirus 18, 21% for adenovirus 41, and 29% for norovirus. Virus strain and water matrix affected recovery, with significant interaction between the two variables. Optimal recovery was obtained at pH 6.5. No evidence was found that water volume, filtration rate, and number of viruses seeded influenced recovery. The method was successful in detecting indigenous viruses in municipal wells in Wisconsin. Long-term continuous filtration retained viruses sufficiently for their detection for up to 16 days after seeding for qRT-PCR and up to 30 days for ICC-qRT-PCR. Glass wool filtration is suitable for large-volume samples (1,000 liters) collected at high filtration rates (4 liters min⁻¹), and its low cost makes it advantageous for studies requiring large numbers of samples. PMID:18359827

  6. Design of apochromatic lens with large field and high definition for machine vision.

    PubMed

    Yang, Ao; Gao, Xingyu; Li, Mingfeng

    2016-08-01

    Precise machine vision detection for a large object at a finite working distance (WD) requires that the lens has a high resolution for a large field of view (FOV). In this case, the effect of a secondary spectrum on image quality is not negligible. According to the detection requirements, a high resolution apochromatic objective is designed and analyzed. The initial optical structure (IOS) is combined with three segments. Next, the secondary spectrum of the IOS is corrected by replacing glasses using the dispersion vector analysis method based on the Buchdahl dispersion equation. Other aberrations are optimized by the commercial optical design software ZEMAX by properly choosing the optimization function operands. The optimized optical structure (OOS) has an f-number (F/#) of 3.08, a FOV of φ60  mm, a WD of 240 mm, and a modulated transfer function (MTF) of all fields of more than 0.1 at 320  cycles/mm. The design requirements for a nonfluorite material apochromatic objective lens with a large field and high definition for machine vision detection have been achieved.

  7. Scalable subsurface inverse modeling of huge data sets with an application to tracer concentration breakthrough data from magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.; Werth, Charles J.; Valocchi, Albert J.

    2016-07-01

    Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydrogeophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with "big data" processing and numerous large-scale numerical simulations. To tackle such difficulties, the principal component geostatistical approach (PCGA) has been proposed as a "Jacobian-free" inversion method that requires far fewer forward simulation runs per iteration than the number of unknown parameters and measurements needed in the traditional inversion methods. PCGA can be conveniently linked to any multiphysics simulation software with independent parallel executions. In this paper, we extend PCGA to handle a large number of measurements (e.g., 10⁶ or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sand box using about 6 million transient tracer concentration measurements obtained using magnetic resonance imaging. Since each individual observation has little information on the K distribution, the data were compressed by the zeroth temporal moment of breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Only about 2000 forward simulations in total were required to obtain the best estimate with corresponding estimation uncertainty, and the estimated K field captured key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method.
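    The moment-based compression described above reduces each voxel's breakthrough curve to one or two numbers; a minimal sketch with a synthetic curve (illustrative only, not the authors' processing pipeline):

      import numpy as np

      def temporal_moments(t, c):
          """Zeroth moment and normalized first moment (mean arrival time) of a curve c(t)."""
          dt = np.diff(t)
          m0 = np.sum(0.5 * (c[1:] + c[:-1]) * dt)                  # trapezoid rule
          m1 = np.sum(0.5 * (c[1:] * t[1:] + c[:-1] * t[:-1]) * dt)
          return m0, (m1 / m0 if m0 > 0 else float("nan"))

      # Synthetic breakthrough curve at one voxel: a Gaussian pulse centred at t = 40.
      t = np.linspace(0.0, 100.0, 501)
      c = np.exp(-0.5 * ((t - 40.0) / 8.0) ** 2)
      m0, mean_arrival = temporal_moments(t, c)
      print(round(m0, 2), round(mean_arrival, 2))   # one or two numbers replace 501 samples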

  8. Innate or Acquired? - Disentangling Number Sense and Early Number Competencies.

    PubMed

    Siemann, Julia; Petermann, Franz

    2018-01-01

    The clinical profile termed developmental dyscalculia (DD) is a fundamental disability affecting children already prior to arithmetic schooling, but the formal diagnosis is often only made during school years. The manifold associated deficits depend on age, education, developmental stage, and task requirements. Despite a large body of studies, the underlying mechanisms remain dubious. Conflicting findings have stimulated opposing theories, each presenting enough empirical support to remain a possible alternative. A so far unresolved question concerns the debate whether a putative innate number sense is required for successful arithmetic achievement as opposed to a pure reliance on domain-general cognitive factors. Here, we outline that the controversy arises due to ambiguous conceptualizations of the number sense. It is common practice to use early number competence as a proxy for innate magnitude processing, even though it requires knowledge of the number system. Therefore, such findings reflect the degree to which quantity is successfully transferred into symbols rather than informing about quantity representation per se. To solve this issue, we propose a three-factor account and incorporate it into the partly overlapping suggestions in the literature regarding the etiology of different DD profiles. The proposed view on DD is especially beneficial because it is applicable to more complex theories identifying a conglomerate of deficits as underlying cause of DD.

  9. Innate or Acquired? – Disentangling Number Sense and Early Number Competencies

    PubMed Central

    Siemann, Julia; Petermann, Franz

    2018-01-01

    The clinical profile termed developmental dyscalculia (DD) is a fundamental disability affecting children already prior to arithmetic schooling, but the formal diagnosis is often only made during school years. The manifold associated deficits depend on age, education, developmental stage, and task requirements. Despite a large body of studies, the underlying mechanisms remain dubious. Conflicting findings have stimulated opposing theories, each presenting enough empirical support to remain a possible alternative. A so far unresolved question concerns the debate whether a putative innate number sense is required for successful arithmetic achievement as opposed to a pure reliance on domain-general cognitive factors. Here, we outline that the controversy arises due to ambiguous conceptualizations of the number sense. It is common practice to use early number competence as a proxy for innate magnitude processing, even though it requires knowledge of the number system. Therefore, such findings reflect the degree to which quantity is successfully transferred into symbols rather than informing about quantity representation per se. To solve this issue, we propose a three-factor account and incorporate it into the partly overlapping suggestions in the literature regarding the etiology of different DD profiles. The proposed view on DD is especially beneficial because it is applicable to more complex theories identifying a conglomerate of deficits as underlying cause of DD. PMID:29725316

  10. Recursive Gradient Estimation Using Splines for Navigation of Autonomous Vehicles.

    DTIC Science & Technology

    1985-07-01

    Report documentation page (OCR-damaged front matter). Recoverable content: a July 1985 final report by C. N. Shen for the US Army Armament Research and Development Center, Large Caliber Weapon Systems Laboratory, on recursive gradient estimation using splines for navigation of autonomous vehicles; an adequate and efficient computer vision system is essential to such robotic vehicles.

  11. Translations on Eastern Europe, Political, Sociological, and Military Affairs, Number 1404-A

    DTIC Science & Technology

    1977-06-22

    Table-of-contents and abstract fragments (OCR-damaged). Recoverable content: an article on personality development in light of technological progress (Harry Nick; EINHEIT, Apr 77) and one on the significance of national culture in socialism; discussion of industrialization aimed at creating "labor intensive" technologies in the Hungarian regions, preferably those requiring low levels of training, with technology and a large number of well-paid personnel creating excellent conditions for implementing the ideological plans.

  12. Large polar pretilt for the liquid crystal homologous series alkylcyanobiphenyl

    NASA Astrophysics Data System (ADS)

    Huang, Zhibin; Rosenblatt, Charles

    2005-01-01

    Sufficiently strong rubbing of the polyimide alignment layer SE-1211 (Nissan Chemical Industries, Ltd.) results in a large pretilt of the liquid crystal director from the homeotropic orientation. The threshold rubbing strength required to induce nonzero pretilt is found to be a monotonic function of the number of methylene units in the homologous liquid crystal series alkylcyanobiphenyl. The results are discussed in terms of the dual easy axis model for alignment.

  13. Metabolic constraint imposes tradeoff between body size and number of brain neurons in human evolution

    PubMed Central

    Fonseca-Azevedo, Karina; Herculano-Houzel, Suzana

    2012-01-01

    Despite a general trend for larger mammals to have larger brains, humans are the primates with the largest brain and number of neurons, but not the largest body mass. Why are great apes, the largest primates, not also those endowed with the largest brains? Recently, we showed that the energetic cost of the brain is a linear function of its numbers of neurons. Here we show that metabolic limitations that result from the number of hours available for feeding and the low caloric yield of raw foods impose a tradeoff between body size and number of brain neurons, which explains the small brain size of great apes compared with their large body size. This limitation was probably overcome in Homo erectus with the shift to a cooked diet. Absent the requirement to spend most available hours of the day feeding, the combination of newly freed time and a large number of brain neurons affordable on a cooked diet may thus have been a major positive driving force to the rapid increase in brain size in human evolution. PMID:23090991

  14. Metabolic constraint imposes tradeoff between body size and number of brain neurons in human evolution.

    PubMed

    Fonseca-Azevedo, Karina; Herculano-Houzel, Suzana

    2012-11-06

    Despite a general trend for larger mammals to have larger brains, humans are the primates with the largest brain and number of neurons, but not the largest body mass. Why are great apes, the largest primates, not also those endowed with the largest brains? Recently, we showed that the energetic cost of the brain is a linear function of its numbers of neurons. Here we show that metabolic limitations that result from the number of hours available for feeding and the low caloric yield of raw foods impose a tradeoff between body size and number of brain neurons, which explains the small brain size of great apes compared with their large body size. This limitation was probably overcome in Homo erectus with the shift to a cooked diet. Absent the requirement to spend most available hours of the day feeding, the combination of newly freed time and a large number of brain neurons affordable on a cooked diet may thus have been a major positive driving force to the rapid increase in brain size in human evolution.

  15. An Adaptive QSE-reduced Nuclear Reaction Network for Silicon Burning

    NASA Astrophysics Data System (ADS)

    Parete-Koon, Suzanne; Hix, William Raphael; Thielemann, Friedrich-Karl

    2010-02-01

    The nuclei of the "iron peak" are formed late in the evolution of massive stars and during supernovae. Silicon burning during these events is responsible for the production of a wide range of nuclei with atomic mass numbers from 28 to 64. The large number of nuclei involved makes accurate modeling of silicon burning computationally expensive. Examination of the physics of silicon burning reveals that the nuclear evolution is dominated by large groups of nuclei in mutual equilibrium. We present an improvement on our hybrid equilibrium-network scheme that takes advantage of this quasi-equilibrium (QSE) to reduce the number of independent variables calculated. Because the membership and number of these groups vary as the temperature, density and electron fraction change, achieving maximal efficiency requires dynamic adjustment of group number and membership. The resultant QSE-reduced network is up to 20 times faster than the full network it replaces without significant loss of accuracy. These reductions in computational cost and the number of species evolved make QSE-reduced networks well suited for inclusion within hydrodynamic simulations, particularly in multi-dimensional applications.

  16. Who Will Teach Montana's Children?

    ERIC Educational Resources Information Center

    Nielson, Dori Burns

    Montana is experiencing three types of teacher shortages, each requiring different intervention strategies. These situations include shortages in specific subject areas, most notably in music, special education, and foreign languages, followed closely by guidance and library; many job openings, caused by rapid enrollment growth, a large number of…

  17. COSTS AND ISSUES RELATED TO REMEDIATION OF PETROLEUM-CONTAMINATED SITES

    EPA Science Inventory

    The remediation costs required at sites contaminated with petroleum-derived compounds remain a relevant issue because of the large number of existing underground storage tanks in the United States and the presence of benzene, MTBE, and TBA in some drinking water supplies. Cost inf...

  18. Parameterized examination in econometrics

    NASA Astrophysics Data System (ADS)

    Malinova, Anna; Kyurkchiev, Vesselin; Spasov, Georgi

    2018-01-01

    The paper presents a parameterization of basic types of exam questions in Econometrics. This algorithm is used to automate and facilitate the process of examination, assessment and self-preparation of a large number of students. The proposed parameterization of testing questions reduces the time required to author tests and course assignments. It enables tutors to generate a large number of different but equivalent dynamic questions (with dynamic answers) on a certain topic, which are automatically assessed. The presented methods are implemented in DisPeL (Distributed Platform for e-Learning) and provide questions in the areas of filtering and smoothing of time-series data, forecasting, building and analysis of single-equation econometric models. Questions also cover elasticity, average and marginal characteristics, product and cost functions, measurement of monopoly power, supply, demand and equilibrium price, consumer and product surplus, etc. Several approaches are used to enable the required numerical computations in DisPeL - integration of third-party mathematical libraries, developing our own procedures from scratch, and wrapping our legacy math codes in order to modernize and reuse them.
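    A parameterized question of this kind boils down to drawing random but controlled inputs and computing the reference answer alongside the question text; a small sketch (an elasticity example with hypothetical ranges, not DisPeL's actual implementation):

      import random

      def make_elasticity_question(seed):
          """Generate one parameterized question and its reference answer."""
          rng = random.Random(seed)
          p0, p1 = rng.randint(10, 20), rng.randint(21, 30)     # prices before / after
          q0 = rng.randint(100, 200)
          q1 = q0 - rng.randint(10, 50)                          # demand falls as price rises
          elasticity = ((q1 - q0) / q0) / ((p1 - p0) / p0)
          text = (f"Price rises from {p0} to {p1} and quantity demanded falls "
                  f"from {q0} to {q1}. Compute the price elasticity of demand.")
          return text, round(elasticity, 3)

      # Each student (or attempt) gets a different but equivalent variant.
      for seed in range(3):
          question, answer = make_elasticity_question(seed)
          print(question, "| answer:", answer)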

  19. Optimal processor assignment for pipeline computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath

    1991-01-01

    The availability of large scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks and their precedence constraints, along with their experimentally determined individual response times for different processor sizes, find an assignment of processors to tasks. Two objectives are of interest: minimal response given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem in which several tasks share a processor; instead, it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p processor system and a series parallel precedence graph with n constituent tasks, an O(np²) algorithm is provided that finds the optimal assignment for the response time optimization problem; the assignment optimizing the constrained throughput is found in O(np² log p) time. Special cases of linear, independent, and tree graphs are also considered.
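    For the simpler case of independent tasks, the constrained-throughput variant can be written as a small dynamic program over the processor budget (a sketch with a hypothetical timing table; the paper's O(np²) algorithms for series-parallel graphs are more involved):

      def assign_processors(times, p_total, max_stage_time):
          """times[i][k]: measured response of task i on k+1 processors.
          Minimize total response subject to every task meeting max_stage_time."""
          INF = float("inf")
          n = len(times)
          dp = [[INF] * (p_total + 1) for _ in range(n + 1)]
          choice = [[0] * (p_total + 1) for _ in range(n + 1)]
          dp[0][0] = 0.0
          for i in range(1, n + 1):
              for used in range(p_total + 1):
                  for k in range(1, min(used, len(times[i - 1])) + 1):
                      t = times[i - 1][k - 1]
                      if t <= max_stage_time and dp[i - 1][used - k] + t < dp[i][used]:
                          dp[i][used] = dp[i - 1][used - k] + t
                          choice[i][used] = k
          best = min(range(p_total + 1), key=lambda u: dp[n][u])
          if dp[n][best] == INF:
              return None   # throughput requirement cannot be met with p_total processors
          alloc, used = [], best
          for i in range(n, 0, -1):
              alloc.append(choice[i][used])
              used -= choice[i][used]
          return dp[n][best], alloc[::-1]

      # Hypothetical timing table: 3 tasks, measured on 1-4 processors each.
      times = [[8.0, 4.5, 3.2, 2.8], [6.0, 3.5, 2.6, 2.2], [9.0, 5.0, 3.6, 3.0]]
      print(assign_processors(times, p_total=6, max_stage_time=5.0))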

  20. Large constraint length high speed viterbi decoder based on a modular hierarchial decomposition of the deBruijn graph

    NASA Technical Reports Server (NTRS)

    Collins, Oliver (Inventor); Dolinar, Jr., Samuel J. (Inventor); Hus, In-Shek (Inventor); Bozzola, Fabrizio P. (Inventor); Olson, Erlend M. (Inventor); Statman, Joseph I. (Inventor); Zimmerman, George A. (Inventor)

    1991-01-01

    A method of formulating and packaging decision-making elements into a long constraint length Viterbi decoder which involves formulating the decision-making processors as individual Viterbi butterfly processors that are interconnected in a deBruijn graph configuration. A fully distributed architecture, which achieves high decoding speeds, is made feasible by novel wiring and partitioning of the state diagram. This partitioning defines universal modules, which can be used to build any size decoder, such that a large number of wires is contained inside each module, and a small number of wires is needed to connect modules. The total system is modular and hierarchical, and it implements a large proportion of the required wiring internally within modules and may include some external wiring to fully complete the deBruijn graph.

  1. Features of MCNP6 Relevant to Medical Radiation Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, H. Grady III; Goorley, John T.

    2012-08-29

    MCNP (Monte Carlo N-Particle) is a general-purpose Monte Carlo code for simulating the transport of neutrons, photons, electrons, positrons, and more recently other fundamental particles and heavy ions. Over many years MCNP has found a wide range of applications in many different fields, including medical radiation physics. In this presentation we will describe and illustrate a number of significant recently-developed features in the current version of the code, MCNP6, having particular utility for medical physics. Among these are major extensions of the ability to simulate large, complex geometries, improvement in memory requirements and speed for large lattices, introduction of mesh-based isotopic reaction tallies, advances in radiography simulation, expanded variance-reduction capabilities, especially for pulse-height tallies, and a large number of enhancements in photon/electron transport.

  2. Multiplex titration RT-PCR: rapid determination of gene expression patterns for a large number of genes

    NASA Technical Reports Server (NTRS)

    Nebenfuhr, A.; Lomax, T. L.

    1998-01-01

    We have developed an improved method for determination of gene expression levels with RT-PCR. The procedure is rapid and does not require extensive optimization or densitometric analysis. Since the detection of individual transcripts is PCR-based, small amounts of tissue samples are sufficient for the analysis of expression patterns in large gene families. Using this method, we were able to rapidly screen nine members of the Aux/IAA family of auxin-responsive genes and identify those genes which vary in message abundance in a tissue- and light-specific manner. While not offering the accuracy of conventional semi-quantitative or competitive RT-PCR, our method allows quick screening of large numbers of genes in a wide range of RNA samples with just a thermal cycler and standard gel analysis equipment.

  3. Using Protein Dimers to Maximize the Protein Hybridization Efficiency with Multisite DNA Origami Scaffolds

    PubMed Central

    Verma, Vikash; Mallik, Leena; Hariadi, Rizal F.; Sivaramakrishnan, Sivaraj; Skiniotis, Georgios; Joglekar, Ajit P.

    2015-01-01

    DNA origami provides a versatile platform for conducting ‘architecture-function’ analysis to determine how the nanoscale organization of multiple copies of a protein component within a multi-protein machine affects its overall function. Such analysis requires that the copy number of protein molecules bound to the origami scaffold exactly matches the desired number, and that it is uniform over an entire scaffold population. This requirement is challenging to satisfy for origami scaffolds with many protein hybridization sites, because it requires the successful completion of multiple, independent hybridization reactions. Here, we show that a cleavable dimerization domain on the hybridizing protein can be used to multiplex hybridization reactions on an origami scaffold. This strategy yields nearly 100% hybridization efficiency on a 6-site scaffold even when using low protein concentration and short incubation time. It can also be developed further to enable reliable patterning of a large number of molecules on DNA origami for architecture-function analysis. PMID:26348722

  4. Analytics for vaccine economics and pricing: insights and observations.

    PubMed

    Robbins, Matthew J; Jacobson, Sheldon H

    2015-04-01

    Pediatric immunization programs in the USA are a successful and cost-effective public health endeavor, profoundly reducing mortalities caused by infectious diseases. Two important issues relate to the success of the immunization programs, the selection of cost-effective vaccines and the appropriate pricing of vaccines. The recommended childhood immunization schedule, published annually by the CDC, continues to expand with respect to the number of injections required and the number of vaccines available for selection. The advent of new vaccines to meet the growing requirements of the schedule results in a large, combinatorial number of possible vaccine formularies. The expansion of the schedule and the increase in the number of available vaccines constitute a challenge for state health departments, large city immunization programs, private practices and other vaccine purchasers, as a cost-effective vaccine formulary must be selected from an increasingly large set of possible vaccine combinations to satisfy the schedule. The pediatric vaccine industry consists of a relatively small number of pharmaceutical firms engaged in the research, development, manufacture and distribution of pediatric vaccines. The number of vaccine manufacturers has dramatically decreased in the past few decades for a myriad of reasons, most notably due to low profitability. The contraction of the industry negatively impacts the reliable provision of pediatric vaccines. The determination of appropriate vaccine prices is an important issue and influences a vaccine manufacturer's decision to remain in the market. Operations research is a discipline that applies advanced analytical methods to improve decision making; analytics is the application of operations research to a particular problem using pertinent data to provide a practical result. Analytics provides a mechanism to resolve the challenges facing stakeholders in the vaccine development and delivery system, in particular, the selection of cost-effective vaccines and the appropriate pricing of vaccines. A review of applicable analytics papers is provided.

  5. Dual-Level Method for Estimating Multistructural Partition Functions with Torsional Anharmonicity.

    PubMed

    Bao, Junwei Lucas; Xing, Lili; Truhlar, Donald G

    2017-06-13

    For molecules with multiple torsions, an accurate evaluation of the molecular partition function requires consideration of multiple structures and their torsional-potential anharmonicity. We previously developed a method called MS-T for this problem, and it requires an exhaustive conformational search with frequency calculations for all the distinguishable conformers; this can become expensive for molecules with a large number of torsions (and hence a large number of structures) if it is carried out with high-level methods. In the present work, we propose a cost-effective method to approximate the MS-T partition function when there are a large number of structures, and we test it on a transition state that has eight torsions. This new method is a dual-level method that combines an exhaustive conformer search carried out by a low-level electronic structure method (for instance, AM1, which is very inexpensive) and selected calculations with a higher-level electronic structure method (for example, density functional theory with a functional that is suitable for conformational analysis and thermochemistry). To provide a severe test of the new method, we consider a transition state structure that has 8 torsional degrees of freedom; this transition state structure is formed along one of the reaction pathways of the hydrogen abstraction reaction (at carbon-1) of ketohydroperoxide (KHP; its IUPAC name is 4-hydroperoxy-2-pentanone) by OH radical. We find that our proposed dual-level method is able to significantly reduce the computational cost for computing MS-T partition functions for this test case with a large number of torsions and with a large number of conformers because we carry out high-level calculations for only a fraction of the distinguishable conformers found by the low-level method. In the example studied here, the dual-level method with 40 high-level optimizations (1.8% of the number of optimizations in a coarse-grained full search and 0.13% of the number of optimizations in a fine-grained full search) reproduces the full calculation of the high-level partition function within a factor of 1.0 to 2.0 from 200 to 1000 K. The error in the dual-level method can be further reduced to factors of 0.6 to 1.1 over the whole temperature interval from 200 to 2400 K by optimizing 128 structures (5.9% of the number of optimizations in a coarse-grained full search and 0.41% of the number of optimizations in a fine-grained full search). These factor-of-two or better errors are small compared to errors up to a factor of 1.0 × 10³ if one neglects multistructural effects for the case under study.
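    Ignoring the torsional-anharmonicity factors, the multi-structural part of such a partition function is just a Boltzmann sum over the conformers found in the search; a minimal sketch (hypothetical conformer energies, not data from the paper):

      import numpy as np

      R_KCAL = 0.0019872041   # gas constant in kcal/(mol K)

      def multistructural_Q(energies_kcal, T):
          """Boltzmann sum over conformer energies, relative to the lowest structure."""
          E = np.asarray(energies_kcal) - np.min(energies_kcal)
          return float(np.sum(np.exp(-E / (R_KCAL * T))))

      # Hypothetical conformer energies (kcal/mol) from a low-level conformer search.
      energies = [0.0, 0.4, 0.7, 1.1, 1.6, 2.3]
      for T in (200, 298, 1000):
          print(T, round(multistructural_Q(energies, T), 3))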

  6. Parameter estimation in large-scale systems biology models: a parallel and self-adaptive cooperative strategy.

    PubMed

    Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R

    2017-01-21

    The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem, but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reduction of computation times with respect to several previous state of the art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
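
    saCeSS itself combines asynchronous MPI cooperation, coarse- and fine-grained parallelism, and self-tuning; none of that is reproduced here. The toy sketch below only illustrates the underlying cooperative idea, synchronously and in a single process: several workers refine their own incumbent solutions and periodically restart from the best solution found by any of them. The Rosenbrock objective and all settings are arbitrary stand-ins, not the kinetic-model problems of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def cooperative_search(n_workers=4, dim=10, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    incumbents = rng.uniform(-2.0, 2.0, size=(n_workers, dim))   # one incumbent per "process"
    best_x, best_f = incumbents[0].copy(), rosenbrock(incumbents[0])
    for _ in range(epochs):
        for w in range(n_workers):
            # each worker refines its own incumbent with a short, budget-limited local search
            res = minimize(rosenbrock, incumbents[w], method="L-BFGS-B",
                           options={"maxiter": 30})
            incumbents[w] = res.x
            if res.fun < best_f:
                best_x, best_f = res.x.copy(), res.fun
        # cooperation step: every worker restarts from a perturbed copy of the global best
        incumbents = best_x + 0.1 * rng.standard_normal(size=incumbents.shape)
    return best_x, best_f

print(cooperative_search()[1])   # decreases toward 0, the global optimum of the Rosenbrock function
```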

  7. Convective stability in the Rayleigh-Benard and directional solidification problems - High-frequency gravity modulation

    NASA Technical Reports Server (NTRS)

    Wheeler, A. A.; Mcfadden, G. B.; Murray, B. T.; Coriell, S. R.

    1991-01-01

    The effect of vertical, sinusoidal, time-dependent gravitational acceleration on the onset of solutal convection during directional solidification is analyzed in the limit of large modulation frequency. When the unmodulated state is unstable, the modulation amplitude required to stabilize the system is determined by the method of averaging. When the unmodulated state is stable, resonant modes of instability occur at large modulation amplitude. These are analyzed using matched asymptotic expansions to elucidate the boundary-layer structure for both the Rayleigh-Benard and directional solidification configurations. Based on these analyses, a thorough examination of the dependence of the stability criteria on the unmodulated Rayleigh number, Schmidt number, and distribution coefficient, is carried out.

  8. Advanced optical sensing and processing technologies for the distributed control of large flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Williams, G. M.; Fraser, J. C.

    1991-01-01

    The objective was to examine state-of-the-art optical sensing and processing technology applied to control the motion of flexible spacecraft. Proposed large flexible space systems, such as optical telescopes and antennas, will require control over vast surfaces. Most likely, distributed control will be necessary, involving many sensors to accurately measure the surface. A similarly large number of actuators must act upon the system. The technical approach included reviewing proposed NASA missions to assess system needs and requirements. A candidate mission was chosen as a baseline study spacecraft for comparison of conventional and optical control components. Control system requirements of the baseline system were used for designing both a control system containing current off-the-shelf components and a system utilizing electro-optical devices for sensing and processing. State-of-the-art surveys of conventional sensor, actuator, and processor technologies were performed. A technology development plan is presented that lays out a logical, effective way to develop and integrate advancing technologies.

  9. The issue of FM to AM conversion on the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Browning, D F; Rothenberg, J E; Wilcox, R B

    1998-08-13

    The National Ignition Facility (NIF) baseline configuration for inertial confinement fusion requires phase modulation for two purposes. First, ~1 Å of frequency modulation (FM) bandwidth at low modulation frequency is required to suppress buildup of Stimulated Brillouin Scattering (SBS) in the large aperture laser optics. Also ~3 Å or more bandwidth at high modulation frequency is required for smoothing of the speckle pattern illuminating the target by the smoothing by spectral dispersion method (SSD). Ideally, imposition of bandwidth by pure phase modulation does not affect the beam intensity. However, as a result of a large number of effects, the FM converts to amplitude modulation (AM). In general this adversely affects the laser performance, e.g. by reducing the margin against damage to the optics. In particular, very large conversion of FM to AM has been observed in the NIF all-fiber master oscillator and distribution systems. The various mechanisms leading to AM are analyzed and approaches to minimizing their effects are discussed.
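
    The reason pure phase modulation carries no AM, and why any spectrally non-uniform transmission creates it, can be written compactly. These are standard textbook relations, not taken from the report, and H(ω) is a generic transfer function introduced here for illustration:

```latex
% Pure phase modulation: the intensity is constant, so there is no AM.
E(t) = E_0\, e^{\,i[\omega_0 t + m \sin(\omega_m t)]}, \qquad |E(t)|^2 = E_0^2 .
% The Jacobi-Anger expansion exposes the FM sidebands at multiples of \omega_m:
e^{\,i m \sin(\omega_m t)} = \sum_{n=-\infty}^{\infty} J_n(m)\, e^{\,i n \omega_m t} .
% Any element with a non-flat transfer function H(\omega) (dispersion, etalon effects,
% polarization-dependent loss, ...) weights the sidebands unequally:
E_{\mathrm{out}}(t) = E_0 \sum_{n} H(\omega_0 + n\omega_m)\, J_n(m)\, e^{\,i(\omega_0 + n\omega_m) t},
% so |E_{out}(t)|^2 acquires beat terms at \omega_m, 2\omega_m, ... : FM has become AM.
```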

  10. Drinking from the Fire Hose: Why the Flight Management System Can Be Hard to Train and Difficult to Use

    NASA Technical Reports Server (NTRS)

    Sherry, Lance; Feary, Michael; Polson, Peter; Fennell, Karl

    2003-01-01

    The Flight Management Computer (FMC) and its interface, the Multi-function Control and Display Unit (MCDU), have been identified by researchers and airlines as difficult to train and use. Specifically, airline pilots have described the "drinking from the fire-hose" effect during training. Previous research has identified memorized action sequences as a major factor in a user's ability to learn and operate complex devices. This paper discusses the use of a method to examine the quantity of memorized action sequences required to perform a sample of 102 tasks, using features of the Boeing 777 Flight Management Computer Interface. The analysis identified a large number of memorized action sequences that must be learned during training and then recalled during line operations. Seventy-five percent of the tasks examined require recall of at least one memorized action sequence. Forty-five percent of the tasks require recall of a memorized action sequence and occur infrequently. The large number of memorized action sequences may provide an explanation for the difficulties in training and usage of the automation. Based on these findings, implications for training and the design of new user-interfaces are discussed.

  11. High performance photonic ADC for space applications

    NASA Astrophysics Data System (ADS)

    Pantoja, S.; Piqueras, M. A.; Villalba, P.; Martínez, B.; Rico, E.

    2017-11-01

    The flexibility required for future telecom payloads will require more digital processing capabilities, moving from conventional analogue repeaters to more advanced and efficient analog subsystems or DSP-based solutions. Aggregate data throughputs will have to be handled onboard, creating the need for effective ADC/DSP and DSP/DAC high-speed links. Broadband payloads will have to receive, route and retransmit hundreds of channels and need to be designed so as to meet such requirements of larger bandwidth, system transparency and flexibility.[1][2] One important device in these new architectures is the analog-to-digital converter (ADC) and its counterpart, the digital-to-analog converter (DAC). These will be the in/out interface for the use of digital processing in order to provide flexible beam to beam connectivity and variable bandwidth allocation. For telecom payloads having a large number of feeds and thus a large number of converters, the mass and consumption of the mixer stage has become significant. Moreover, the inclusion of ADCs in the payload presents new trade-offs in design (jitter, quantization noise, ambiguity). This paper deals with an alternative solution to these two main problems through the exploitation of photonic techniques.

  12. The future of large old trees in urban landscapes.

    PubMed

    Le Roux, Darren S; Ikin, Karen; Lindenmayer, David B; Manning, Adrian D; Gibbons, Philip

    2014-01-01

    Large old trees are disproportionate providers of structural elements (e.g. hollows, coarse woody debris), which are crucial habitat resources for many species. The decline of large old trees in modified landscapes is of global conservation concern. Once large old trees are removed, they are difficult to replace in the short term due to typically prolonged time periods needed for trees to mature (i.e. centuries). Few studies have investigated the decline of large old trees in urban landscapes. Using a simulation model, we predicted the future availability of native hollow-bearing trees (a surrogate for large old trees) in an expanding city in southeastern Australia. In urban greenspace, we predicted that the number of hollow-bearing trees is likely to decline by 87% over 300 years under existing management practices. Under a worst case scenario, hollow-bearing trees may be completely lost within 115 years. Conversely, we predicted that the number of hollow-bearing trees will likely remain stable in semi-natural nature reserves. Sensitivity analysis revealed that the number of hollow-bearing trees perpetuated in urban greenspace over the long term is most sensitive to the: (1) maximum standing life of trees; (2) number of regenerating seedlings ha(-1); and (3) rate of hollow formation. We tested the efficacy of alternative urban management strategies and found that the only way to arrest the decline of large old trees requires a collective management strategy that ensures: (1) trees remain standing for at least 40% longer than currently tolerated lifespans; (2) the number of seedlings established is increased by at least 60%; and (3) the formation of habitat structures provided by large old trees is accelerated by at least 30% (e.g. artificial structures) to compensate for short term deficits in habitat resources. Immediate implementation of these recommendations is needed to avert long term risk to urban biodiversity.

  13. The Future of Large Old Trees in Urban Landscapes

    PubMed Central

    Le Roux, Darren S.; Ikin, Karen; Lindenmayer, David B.; Manning, Adrian D.; Gibbons, Philip

    2014-01-01

    Large old trees are disproportionate providers of structural elements (e.g. hollows, coarse woody debris), which are crucial habitat resources for many species. The decline of large old trees in modified landscapes is of global conservation concern. Once large old trees are removed, they are difficult to replace in the short term due to typically prolonged time periods needed for trees to mature (i.e. centuries). Few studies have investigated the decline of large old trees in urban landscapes. Using a simulation model, we predicted the future availability of native hollow-bearing trees (a surrogate for large old trees) in an expanding city in southeastern Australia. In urban greenspace, we predicted that the number of hollow-bearing trees is likely to decline by 87% over 300 years under existing management practices. Under a worst case scenario, hollow-bearing trees may be completely lost within 115 years. Conversely, we predicted that the number of hollow-bearing trees will likely remain stable in semi-natural nature reserves. Sensitivity analysis revealed that the number of hollow-bearing trees perpetuated in urban greenspace over the long term is most sensitive to the: (1) maximum standing life of trees; (2) number of regenerating seedlings ha−1; and (3) rate of hollow formation. We tested the efficacy of alternative urban management strategies and found that the only way to arrest the decline of large old trees requires a collective management strategy that ensures: (1) trees remain standing for at least 40% longer than currently tolerated lifespans; (2) the number of seedlings established is increased by at least 60%; and (3) the formation of habitat structures provided by large old trees is accelerated by at least 30% (e.g. artificial structures) to compensate for short term deficits in habitat resources. Immediate implementation of these recommendations is needed to avert long term risk to urban biodiversity. PMID:24941258

  14. Impact of large field angles on the requirements for deformable mirror in imaging satellites

    NASA Astrophysics Data System (ADS)

    Kim, Jae Jun; Mueller, Mark; Martinez, Ty; Agrawal, Brij

    2018-04-01

    For certain imaging satellite missions, a large aperture with wide field-of-view is needed. In order to achieve diffraction limited performance, the mirror surface Root Mean Square (RMS) error has to be less than 0.05 waves. In the case of visible light, it has to be less than 30 nm. This requirement is difficult to meet as the large aperture will need to be segmented in order to fit inside a launch vehicle shroud. To reduce this requirement and to compensate for the residual wavefront error, Micro-Electro-Mechanical System (MEMS) deformable mirrors can be considered in the aft optics of the optical system. MEMS deformable mirrors are affordable and consume low power, but are small in size. Due to the major reduction in pupil size for the deformable mirror, the effective field angle is magnified by the diameter ratio of the primary and deformable mirror. For wide field of view imaging, the required deformable mirror correction is field angle dependant, impacting the required parameters of a deformable mirror such as size, number of actuators, and actuator stroke. In this paper, a representative telescope and deformable mirror system model is developed and the deformable mirror correction is simulated to study the impact of the large field angles in correcting a wavefront error using a deformable mirror in the aft optics.
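
    A back-of-the-envelope sketch of the field-angle magnification the abstract describes; all numerical values below are illustrative assumptions, not parameters of the studied telescope:

```python
# Minimal sketch of the field-angle magnification at the deformable mirror (DM).
D_primary = 1.0        # primary aperture diameter (m), assumed for illustration
D_dm = 0.01            # MEMS DM clear aperture (m), i.e. a 100:1 pupil reduction
field_angle_deg = 0.1  # field angle at the telescope entrance (degrees), assumed

# Conservation of the optical invariant: shrinking the pupil by D_primary/D_dm
# magnifies the chief-ray angle at the DM by the same ratio.
magnification = D_primary / D_dm
dm_angle_deg = field_angle_deg * magnification
print(f"angle at DM ~ {dm_angle_deg:.1f} deg")   # 10 deg for these assumed numbers

# Surface requirement quoted in the abstract: 0.05 waves RMS; at an assumed 600 nm
# visible wavelength this corresponds to the 30 nm figure given in the text.
wavelength_nm = 600.0
print(f"allowed RMS error ~ {0.05 * wavelength_nm:.0f} nm")
```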

  15. Wind-tunnel/flight correlation study of aerodynamic characteristics of a large flexible supersonic cruise airplane (XB-70-1). 3: A comparison between characteristics predicted from wind-tunnel measurements and those measured in flight

    NASA Technical Reports Server (NTRS)

    Arnaiz, H. H.; Peterson, J. B., Jr.; Daugherty, J. C.

    1980-01-01

    A program was undertaken by NASA to evaluate the accuracy of a method for predicting the aerodynamic characteristics of large supersonic cruise airplanes. This program compared predicted and flight-measured lift, drag, angle of attack, and control surface deflection for the XB-70-1 airplane for 14 flight conditions with a Mach number range from 0.76 to 2.56. The predictions were derived from the wind-tunnel test data of a 0.03-scale model of the XB-70-1 airplane fabricated to represent the aeroelastically deformed shape at a 2.5 Mach number cruise condition. Corrections for shape variations at the other Mach numbers were included in the prediction. For most cases, differences between predicted and measured values were within the accuracy of the comparison. However, there were significant differences at transonic Mach numbers. At a Mach number of 1.06 differences were as large as 27 percent in the drag coefficients and 20 deg in the elevator deflections. A brief analysis indicated that a significant part of the difference between drag coefficients was due to the incorrect prediction of the control surface deflection required to trim the airplane.

  16. Cloud computing for genomic data analysis and collaboration.

    PubMed

    Langmead, Ben; Nellore, Abhinav

    2018-04-01

    Next-generation sequencing has made major strides in the past decade. Studies based on large sequencing data sets are growing in number, and public archives for raw sequencing data have been doubling in size every 18 months. Leveraging these data requires researchers to use large-scale computational resources. Cloud computing, a model whereby users rent computers and storage from large data centres, is a solution that is gaining traction in genomics research. Here, we describe how cloud computing is used in genomics for research and large-scale collaborations, and argue that its elasticity, reproducibility and privacy features make it ideally suited for the large-scale reanalysis of publicly available archived data, including privacy-protected data.

  17. Large Deployable Reflector Technologies for Future European Telecom and Earth Observation Missions

    NASA Astrophysics Data System (ADS)

    Ihle, A.; Breunig, E.; Dadashvili, L.; Migliorelli, M.; Scialino, L.; van't Klosters, K.; Santiago-Prowald, J.

    2012-07-01

    This paper presents requirements, analysis and design results for European large deployable reflectors (LDR) for space applications. For telecommunications, the foreseeable use of large reflectors is associated with the continuous demand for improved performance of mobile services. On the other hand, several earth observation (EO) missions can be identified carrying either active or passive remote sensing instruments (or both), in which a large effective aperture is needed, e.g. BIOMASS. From the European point of view there is a total dependence on US industry, as such LDRs are not available from European suppliers. The RESTEO study is part of a number of ESA-led activities to facilitate European LDR development. This paper is focused on the structural-mechanical aspects of this study. We identify the general requirements for LDRs with special emphasis on launcher accommodation for EO missions. In the next step, optimal concepts for the LDR structure and the RF surface are reviewed. Regarding the RF surface, both a knitted metal mesh and a shell membrane based on carbon fibre reinforced silicon (CFRS) are considered. In terms of the backing structure, the peripheral ring concept is identified as most promising and a large number of options for the deployment kinematics are discussed. Of those, pantographic kinematics and a conical peripheral ring are selected. A preliminary design for these two most promising LDR concepts is performed which includes static, modal and kinematic simulation and also techniques to generate the reflector nets.

  18. Integration of chemical-specific exposure and pharmacokinetic considerations with the chemical-agnostic adverse outcome pathway framework

    EPA Science Inventory

    Traditional toxicity testing provides insight into the mechanisms underlying toxicological responses but requires a high investment in large numbers of resources. The new paradigm of testing approaches involves rapid screening of thousands of chemicals across hundreds of biologic...

  19. IN VITRO ASSESSMENT OF DEVELOPMENTAL NEUROTOXICITY: USE OF MICROELECTRODE ARRAYS TO MEASURE FUNCTIONAL CHANGES IN NEURONAL NETWORK ONTOGENY

    EPA Science Inventory

    Because the Developmental Neurotoxicity Testing Battery requires large numbers of animals and is expensive, development of in vitro approaches to screen chemicals for potential developmental neurotoxicity is a high priority. Many proposed approaches for screening are biochemical,...

  20. In Vitro Assessment of Developmental Neurotoxicity: Use of Microelectrode Arrays to Measure Functional Changes in Neuronal Network Ontogeny*

    EPA Science Inventory

    Because the Developmental Neurotoxicity Testing Guidelines require large numbers of animals and is expensive, development of in vitro approaches to screen chemicals for potential developmental neurotoxicity is a high priority. Many proposed approaches for screening are biochemica...

  1. Interspecies Correlation Estimation (ICE) models predict supplemental toxicity data for SSDs

    EPA Science Inventory

    Species sensitivity distributions (SSD) require a large number of toxicity values for a diversity of taxa to define a hazard level protective of multiple species. For most chemicals, measured toxicity data are limited to a few standard test species that are unlikely to adequately...

  2. ENDOCRINE DISRUPTING CHEMICAL EMISSIONS FROM COMBUSTION SOURCES: DIESEL PARTICULATE EMISSIONS AND DOMESTIC WASTE OPEN BURN EMISSIONS

    EPA Science Inventory

    Emissions of endocrine disrupting chemicals (EDCs) from combustion sources are poorly characterized due to the large number of compounds present in the emissions, the complexity of the analytical separations required, and the uncertainty regarding identification of chemicals with...

  3. Functional Network Architecture of Reading-Related Regions across Development

    ERIC Educational Resources Information Center

    Vogel, Alecia C.; Church, Jessica A.; Power, Jonathan D.; Miezin, Fran M.; Petersen, Steven E.; Schlaggar, Bradley L.

    2013-01-01

    Reading requires coordinated neural processing across a large number of brain regions. Studying relationships between reading-related regions informs the specificity of information processing performed in each region. Here, regions of interest were defined from a meta-analysis of reading studies, including a developmental study. Relationships…

  4. Identifying Metabolically Active Chemicals Using a Consensus Quantitative Structure Activity Relationship Model for Estrogen Receptor Binding

    EPA Science Inventory

    Traditional toxicity testing provides insight into the mechanisms underlying toxicological responses but requires a high investment in a large number of resources. The new paradigm of testing approaches involves rapid screening studies able to evaluate thousands of chemicals acro...

  5. Comparison of Species Sensitivity Distributions Derived from Interspecies Correlation Models to Distributions used to Derive Water Quality Criteria

    EPA Science Inventory

    Species sensitivity distributions (SSD) require a large number of measured toxicity values to define a chemical’s toxicity to multiple species. This investigation comprehensively evaluated the accuracy of SSDs generated from toxicity values predicted from interspecies correlation...

  6. The developing one door licensing service system based on RESTful oriented services and MVC framework

    NASA Astrophysics Data System (ADS)

    Widiyanto, Sigit; Setyawan, Aris Budi; Tarigan, Avinanta; Sussanto, Herry

    2016-02-01

    The increase in the number of businesses drives growing service requirements for companies and Small and Medium Enterprises (SMEs) submitting their license requests. The service system that is needed must be able to accommodate a large number of documents, various institutions, and the time limitations of applicants. In addition, distributed applications that can be integrated with each other are required. A service-oriented application fits well alongside the client-server application that the Government has already developed to digitize submitted data. RESTful architecture and the MVC framework are employed in developing the application. As a result, the application proves its capability in solving security, transaction speed, and data accuracy issues.

  7. Solving the corner-turning problem for large interferometers

    NASA Astrophysics Data System (ADS)

    Lutomirski, Andrew; Tegmark, Max; Sanchez, Nevada J.; Stein, Leo C.; Urry, W. Lynn; Zaldarriaga, Matias

    2011-01-01

    The so-called corner-turning problem is a major bottleneck for radio telescopes with large numbers of antennas. The problem is essentially that of rapidly transposing a matrix that is too large to store on one single device; in radio interferometry, it occurs because data from each antenna need to be routed to an array of processors, each of which will handle a limited portion of the data (say, a frequency range) but requires input from each antenna. We present a low-cost solution allowing the correlator to transpose its data in real time, without contending for bandwidth, via a butterfly network requiring neither additional RAM nor expensive general-purpose switching hardware. We discuss possible implementations of this using FPGA, CMOS, analog logic and optical technology, and conclude that the corner-turner cost can be small even for upcoming massive radio arrays.
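
    A toy software model may help make the butterfly routing pattern concrete: each of the log2(N) exchange stages fixes one bit of an item's destination address, so no node ever needs to buffer more than its own share of the data. This sketch only models the data movement, not the FPGA, CMOS, analog or optical implementations discussed in the paper:

```python
def corner_turn_butterfly(n_nodes=8):
    """Toy corner-turn: n_nodes processors each start with every frequency channel of
    their own antenna and end up each holding one frequency channel from every antenna
    after log2(n_nodes) pairwise exchange stages."""
    buffers = {i: [(i, f) for f in range(n_nodes)] for i in range(n_nodes)}  # (antenna, freq)
    for s in range(n_nodes.bit_length() - 1):     # one stage per address bit
        bit = 1 << s
        new = {i: [] for i in range(n_nodes)}
        for i, items in buffers.items():
            partner = i ^ bit
            for ant, f in items:
                # destination of (ant, f) is node f; each stage fixes one bit of that address
                target = partner if (f & bit) != (i & bit) else i
                new[target].append((ant, f))
        buffers = new
    return buffers

out = corner_turn_butterfly()
assert all(f == node for node, items in out.items() for _, f in items)
print({node: len(items) for node, items in out.items()})  # each node holds one item per antenna
```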

  8. A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.

    PubMed

    De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc

    2010-09-01

    In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several giga voxels), this computational burden prevents their actual breakthrough. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphical processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.

  9. Optimizing Cluster Heads for Energy Efficiency in Large-Scale Heterogeneous Wireless Sensor Networks

    DOE PAGES

    Gu, Yi; Wu, Qishi; Rao, Nageswara S. V.

    2010-01-01

    Many complex sensor network applications require deploying a large number of inexpensive and small sensors in a vast geographical region to achieve quality through quantity. Hierarchical clustering is generally considered as an efficient and scalable way to facilitate the management and operation of such large-scale networks and minimize the total energy consumption for prolonged lifetime. Judicious selection of cluster heads for data integration and communication is critical to the success of applications based on hierarchical sensor networks organized as layered clusters. We investigate the problem of selecting sensor nodes in a predeployed sensor network to be the cluster heads to minimize the total energy needed for data gathering. We rigorously derive an analytical formula to optimize the number of cluster heads in sensor networks under uniform node distribution, and propose a Distance-based Crowdedness Clustering algorithm to determine the cluster heads in sensor networks under general node distribution. The results from an extensive set of experiments on a large number of simulated sensor networks illustrate the performance superiority of the proposed solution over the clustering schemes based on the k-means algorithm.

  10. Front End for a neutrino factory or muon collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neuffer, David; Snopok, Pavel; Alexahin, Yuri

    A neutrino factory or muon collider requires the capture and cooling of a large number of muons. Scenarios for capture, bunching, phase-energy rotation and initial cooling of μ’s produced from a proton source target have been developed, initially for neutrino factory scenarios. They require a drift section from the target, a bunching section and a Φ-δE rotation section leading into the cooling channel. Important concerns are rf limitations within the focusing magnetic fields and large losses in the transport. The currently preferred cooling channel design is an “HFOFO Snake” configuration that cools both μ+ and μ− transversely and longitudinally. Finally, the status of the design is presented and variations are discussed.

  11. Front End for a neutrino factory or muon collider

    NASA Astrophysics Data System (ADS)

    Neuffer, D.; Snopok, P.; Alexahin, Y.

    2017-11-01

    A neutrino factory or muon collider requires the capture and cooling of a large number of muons. Scenarios for capture, bunching, phase-energy rotation and initial cooling of μ's produced from a proton source target have been developed, initially for neutrino factory scenarios. They require a drift section from the target, a bunching section and a φ-δE rotation section leading into the cooling channel. Important concerns are rf limitations within the focusing magnetic fields and large losses in the transport. The currently preferred cooling channel design is an "HFOFO Snake" configuration that cools both μ+ and μ− transversely and longitudinally. The status of the design is presented and variations are discussed.

  12. Design and Fabrication of Double-Focused Ultrasound Transducers to Achieve Tight Focusing.

    PubMed

    Jang, Jihun; Chang, Jin Ho

    2016-08-06

    Beauty treatment for skin requires a high-intensity focused ultrasound (HIFU) transducer to generate coagulative necrosis in a small focal volume (e.g., 1 mm³) placed at a shallow depth (3-4.5 mm from the skin surface). For this, it is desirable to make the F-number as small as possible under the largest possible aperture in order to generate ultrasound energy high enough to induce tissue coagulation in such a small focal volume. However, satisfying both conditions at the same time is demanding. To meet the requirements, this paper, therefore, proposes a double-focusing technique, in which the aperture of an ultrasound transducer is spherically shaped for initial focusing and an acoustic lens is used to finally focus ultrasound on a target depth of treatment; it is possible to achieve the F-number of unity or less while keeping the aperture of a transducer as large as possible. In accordance with the proposed method, we designed and fabricated a 7-MHz double-focused ultrasound transducer. The experimental results demonstrated that the fabricated double-focused transducer had a focal length of 10.2 mm reduced from an initial focal length of 15.2 mm and, thus, the F-number changed from 1.52 to 1.02. Based on the results, we concluded that the proposed double-focusing method is suitable to decrease F-number while maintaining a large aperture size.
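
    The quoted focal lengths and F-numbers are mutually consistent, which is easy to verify from the definition F-number = focal length / aperture diameter:

```python
# Consistency check of the numbers quoted in the abstract.
focal_initial_mm = 15.2
f_number_initial = 1.52
aperture_mm = focal_initial_mm / f_number_initial       # 10.0 mm, fixed by the transducer design

focal_after_lens_mm = 10.2                               # focal length after the acoustic lens
f_number_after = focal_after_lens_mm / aperture_mm
print(round(aperture_mm, 2), round(f_number_after, 2))   # 10.0, 1.02 -- matches the reported value
```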

  13. Stabilizing canonical-ensemble calculations in the auxiliary-field Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Gilbreth, C. N.; Alhassid, Y.

    2015-03-01

    Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
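
    The stabilization the abstract refers to is typically some variant of re-factorizing the accumulated matrix product so that widely separated scales are kept in a diagonal factor rather than mixed in one matrix. A generic QDR-style sketch of that idea (not the specific canonical-ensemble scheme of the paper) looks like this:

```python
import numpy as np

def stabilized_product(matrices):
    """Accumulate prod(A_k ... A_1) as Q @ diag(d) @ T, re-factorizing after every
    multiplication so that very large and very small scales never mix in one matrix."""
    Q, R = np.linalg.qr(matrices[0])
    d = np.abs(np.diag(R)); d[d == 0] = 1.0
    T = R / d[:, None]
    for A in matrices[1:]:
        M = (A @ Q) * d[None, :]          # absorb the accumulated Q and column scales
        Q, R = np.linalg.qr(M)
        d = np.abs(np.diag(R)); d[d == 0] = 1.0
        T = (R / d[:, None]) @ T
    return Q, d, T

# quick check on a product of matrices with wildly different scales
rng = np.random.default_rng(0)
mats = [rng.standard_normal((6, 6)) * 10.0 ** rng.uniform(-3, 3) for _ in range(40)]
Q, d, T = stabilized_product(mats)
direct = np.linalg.multi_dot(mats[::-1])
err = np.linalg.norm(Q * d @ T - direct) / np.linalg.norm(direct)
print(f"relative difference vs. direct product: {err:.1e}")
print(np.log10(d))   # the disparate scales stay separated in the diagonal factor
```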

  14. Assessment of fish assemblages and minimum sampling effort required to determine biotic integrity of large rivers in southern Idaho, 2002

    USGS Publications Warehouse

    Maret, Terry R.; Ott, D.S.

    2004-01-01

    width was determined to be sufficient for collecting an adequate number of fish to estimate species richness and evaluate biotic integrity. At most sites, about 250 fish were needed to effectively represent 95 percent of the species present. Fifty-three percent of the sites assessed, using an IBI developed specifically for large Idaho rivers, received scores of less than 50, indicating poor biotic integrity.

  15. Developing eThread pipeline using SAGA-pilot abstraction for large-scale structural bioinformatics.

    PubMed

    Ragothaman, Anjani; Boddu, Sairam Chowdary; Kim, Nayong; Feinstein, Wei; Brylinski, Michal; Jha, Shantenu; Kim, Joohyun

    2014-01-01

    While most of computational annotation approaches are sequence-based, threading methods are becoming increasingly attractive because of predicted structural information that could uncover the underlying function. However, threading tools are generally compute-intensive and the number of protein sequences from even small genomes such as prokaryotes is large typically containing many thousands, prohibiting their application as a genome-wide structural systems biology tool. To leverage its utility, we have developed a pipeline for eThread--a meta-threading protein structure modeling tool, that can use computational resources efficiently and effectively. We employ a pilot-based approach that supports seamless data and task-level parallelism and manages large variation in workload and computational requirements. Our scalable pipeline is deployed on Amazon EC2 and can efficiently select resources based upon task requirements. We present runtime analysis to characterize computational complexity of eThread and EC2 infrastructure. Based on results, we suggest a pathway to an optimized solution with respect to metrics such as time-to-solution or cost-to-solution. Our eThread pipeline can scale to support a large number of sequences and is expected to be a viable solution for genome-scale structural bioinformatics and structure-based annotation, particularly, amenable for small genomes such as prokaryotes. The developed pipeline is easily extensible to other types of distributed cyberinfrastructure.

  16. Developing eThread Pipeline Using SAGA-Pilot Abstraction for Large-Scale Structural Bioinformatics

    PubMed Central

    Ragothaman, Anjani; Feinstein, Wei; Jha, Shantenu; Kim, Joohyun

    2014-01-01

    While most of computational annotation approaches are sequence-based, threading methods are becoming increasingly attractive because of predicted structural information that could uncover the underlying function. However, threading tools are generally compute-intensive and the number of protein sequences from even small genomes such as prokaryotes is large typically containing many thousands, prohibiting their application as a genome-wide structural systems biology tool. To leverage its utility, we have developed a pipeline for eThread—a meta-threading protein structure modeling tool, that can use computational resources efficiently and effectively. We employ a pilot-based approach that supports seamless data and task-level parallelism and manages large variation in workload and computational requirements. Our scalable pipeline is deployed on Amazon EC2 and can efficiently select resources based upon task requirements. We present runtime analysis to characterize computational complexity of eThread and EC2 infrastructure. Based on results, we suggest a pathway to an optimized solution with respect to metrics such as time-to-solution or cost-to-solution. Our eThread pipeline can scale to support a large number of sequences and is expected to be a viable solution for genome-scale structural bioinformatics and structure-based annotation, particularly, amenable for small genomes such as prokaryotes. The developed pipeline is easily extensible to other types of distributed cyberinfrastructure. PMID:24995285

  17. Novel 3D Compression Methods for Geometry, Connectivity and Texture

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2016-06-01

    A large number of applications in medical visualization, games, engineering design, entertainment, heritage, e-commerce and so on require the transmission of 3D models over the Internet or over local networks. 3D data compression is an important requirement for fast data storage, access and transmission within bandwidth limitations. The Wavefront OBJ (object) file format is commonly used to share models due to its clear simple design. Normally each OBJ file contains a large amount of data (e.g. vertices and triangulated faces, normals, texture coordinates and other parameters) describing the mesh surface. In this paper we introduce a new method to compress geometry, connectivity and texture coordinates by a novel Geometry Minimization Algorithm (GM-Algorithm) in connection with arithmetic coding. First, each vertex's (x, y, z) coordinates are encoded to a single value by the GM-Algorithm. Second, triangle faces are encoded by computing the differences between two adjacent vertex locations, which are compressed by arithmetic coding together with texture coordinates. We demonstrate the method on large data sets achieving compression ratios between 87 and 99% without reduction in the number of reconstructed vertices and triangle faces. The decompression step is based on a Parallel Fast Matching Search Algorithm (Parallel-FMS) to recover the structure of the 3D mesh. A comparative analysis of compression ratios is provided with a number of commonly used 3D file formats such as VRML, OpenCTM and STL, highlighting the performance and effectiveness of the proposed method.
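
    The following sketch illustrates only the general idea of delta-encoding quantized mesh data and handing the small residuals to an entropy coder; it is not the GM-Algorithm itself (the single-value vertex encoding is not reproduced), the mesh data are synthetic, and zlib stands in for the arithmetic coder:

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
vertices = np.cumsum(rng.normal(0.0, 0.01, size=(10000, 3)), axis=0)  # smooth synthetic surface
faces = np.sort(rng.integers(0, 10000, size=(20000, 3)), axis=1)      # synthetic connectivity

def delta_zlib(int_arr):
    """Difference adjacent rows, then entropy-code the small residuals (zlib as a stand-in)."""
    zero = np.zeros((1, int_arr.shape[1]), dtype=int_arr.dtype)
    deltas = np.diff(int_arr, axis=0, prepend=zero)
    return zlib.compress(deltas.astype(np.int32).tobytes(), level=9)

q_vertices = np.round(vertices * 1e4).astype(np.int32)   # quantize coordinates before encoding
raw = q_vertices.nbytes + faces.astype(np.int32).nbytes
packed = len(delta_zlib(q_vertices)) + len(delta_zlib(faces.astype(np.int32)))
print(f"about {100 * (1 - packed / raw):.0f}% of the raw size removed on this synthetic mesh")
```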

  18. Access Control Management for SCADA Systems

    NASA Astrophysics Data System (ADS)

    Hong, Seng-Phil; Ahn, Gail-Joon; Xu, Wenjuan

    The information technology revolution has transformed all aspects of our society including critical infrastructures and led to a significant shift from their old and disparate business models based on proprietary and legacy environments to more open and consolidated ones. Supervisory Control and Data Acquisition (SCADA) systems have been widely used not only for industrial processes but also for some experimental facilities. Due to the nature of open environments, managing SCADA systems should meet various security requirements since system administrators need to deal with a large number of entities and functions involved in critical infrastructures. In this paper, we identify necessary access control requirements in SCADA systems and articulate access control policies for the simulated SCADA systems. We also attempt to analyze and realize those requirements and policies in the context of role-based access control that is suitable for simplifying administrative tasks in large scale enterprises.
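
    A minimal sketch of the role-based check that such policies build on, with role, user, and permission names invented purely for illustration:

```python
# Minimal role-based access control (RBAC) sketch for SCADA-style operations.
ROLE_PERMISSIONS = {
    "operator": {("read", "sensor"), ("write", "setpoint")},
    "engineer": {("read", "sensor"), ("write", "setpoint"), ("write", "control_logic")},
    "auditor":  {("read", "sensor"), ("read", "audit_log")},
}
USER_ROLES = {"alice": {"operator"}, "bob": {"auditor"}}

def check_access(user: str, action: str, resource: str) -> bool:
    """Grant access if any of the user's roles carries the (action, resource) permission."""
    return any((action, resource) in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(check_access("alice", "write", "setpoint"))   # True
print(check_access("bob", "write", "setpoint"))     # False
```

    Administration then reduces to maintaining the role-permission and user-role tables, which is what makes the role-based model attractive for large numbers of entities.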

  19. Doubly robust matching estimators for high dimensional confounding adjustment.

    PubMed

    Antonelli, Joseph; Cefalu, Matthew; Palmer, Nathan; Agniel, Denis

    2018-05-11

    Valid estimation of treatment effects from observational data requires proper control of confounding. If the number of covariates is large relative to the number of observations, then controlling for all available covariates is infeasible. In cases where a sparsity condition holds, variable selection or penalization can reduce the dimension of the covariate space in a manner that allows for valid estimation of treatment effects. In this article, we propose matching on both the estimated propensity score and the estimated prognostic scores when the number of covariates is large relative to the number of observations. We derive asymptotic results for the matching estimator and show that it is doubly robust in the sense that only one of the two score models need be correct to obtain a consistent estimator. We show via simulation its effectiveness in controlling for confounding and highlight its potential to address nonlinear confounding. Finally, we apply the proposed procedure to analyze the effect of gender on prescription opioid use using insurance claims data. © 2018, The International Biometric Society.
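
    A simplified sketch of the proposed approach on synthetic data: penalized regressions estimate the propensity and prognostic scores in a high-dimensional covariate space, and treated units are matched to controls on the two scores jointly. This is a bare nearest-neighbor version without the paper's asymptotic corrections; all model choices and data are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV, LassoCV
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n, p = 500, 200
X = rng.standard_normal((n, p))
treat = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.5 * X[:, 1]))))
y = 2.0 * treat + X[:, 0] + 0.5 * X[:, 2] + rng.standard_normal(n)      # true effect = 2

# Propensity score P(treat | X), via L1-penalized logistic regression (variable selection).
ps = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=5).fit(X, treat).predict_proba(X)[:, 1]
# Prognostic score E[y | X, control], fitted on controls only with the lasso.
prog = LassoCV(cv=5).fit(X[treat == 0], y[treat == 0]).predict(X)

scores = np.column_stack([ps, prog])
scores = (scores - scores.mean(0)) / scores.std(0)       # put both scores on a common scale
nn = NearestNeighbors(n_neighbors=1).fit(scores[treat == 0])
_, idx = nn.kneighbors(scores[treat == 1])
att = np.mean(y[treat == 1] - y[treat == 0][idx.ravel()])
print(round(att, 2))   # should land near the true effect of 2
```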

  20. Numerical and analytical approaches to an advection-diffusion problem at small Reynolds number and large Péclet number

    NASA Astrophysics Data System (ADS)

    Fuller, Nathaniel J.; Licata, Nicholas A.

    2018-05-01

    Obtaining a detailed understanding of the physical interactions between a cell and its environment often requires information about the flow of fluid surrounding the cell. Cells must be able to effectively absorb and discard material in order to survive. Strategies for nutrient acquisition and toxin disposal, which have been evolutionarily selected for their efficacy, should reflect knowledge of the physics underlying this mass transport problem. Motivated by these considerations, in this paper we discuss the results from an undergraduate research project on the advection-diffusion equation at small Reynolds number and large Péclet number. In particular, we consider the problem of mass transport for a Stokesian spherical swimmer. We approach the problem numerically and analytically through a rescaling of the concentration boundary layer. A biophysically motivated first-passage problem for the absorption of material by the swimming cell demonstrates quantitative agreement between the numerical and analytical approaches. We conclude by discussing the connections between our results and the design of smart toxin disposal systems.
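
    The governing problem can be stated in two lines. These are standard definitions; the boundary-layer exponent is quoted only as an example, since it depends on the surface boundary condition:

```latex
% Steady transport of a concentration c around a cell of radius a moving at speed U
% through a fluid of kinematic viscosity \nu, with molecular diffusivity D:
\mathbf{u}\cdot\nabla c \;=\; D\,\nabla^{2} c ,
\qquad
\mathrm{Re} \;=\; \frac{U a}{\nu} \;\ll\; 1 ,
\qquad
\mathrm{Pe} \;=\; \frac{U a}{D} \;\gg\; 1 .
% At large Pe the concentration adjusts only in a thin layer of thickness \delta \ll a
% next to the surface.  Rescaling the radial coordinate as
r \;=\; a\,\bigl(1 + \mathrm{Pe}^{-\alpha}\rho\bigr)
% makes advection and diffusion balance inside the layer; the exponent \alpha depends on
% the surface flow (for example, \alpha = 1/3 for a rigid no-slip sphere).
```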

  1. Weighted Iterative Bayesian Compressive Sensing (WIBCS) for High Dimensional Polynomial Surrogate Construction

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.

    2016-12-01

    Surrogate construction has become a routine procedure when facing computationally intensive studies requiring multiple evaluations of complex models. In particular, surrogate models, otherwise called emulators or response surfaces, replace complex models in uncertainty quantification (UQ) studies, including uncertainty propagation (forward UQ) and parameter estimation (inverse UQ). Further, surrogates based on Polynomial Chaos (PC) expansions are especially convenient for forward UQ and global sensitivity analysis, also known as variance-based decomposition. However, the PC surrogate construction strongly suffers from the curse of dimensionality. With a large number of input parameters, the number of model simulations required for accurate surrogate construction is prohibitively large. Relatedly, non-adaptive PC expansions typically include infeasibly large number of basis terms far exceeding the number of available model evaluations. We develop Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth and PC surrogate construction leading to a sparse, high-dimensional PC surrogate with a very few model evaluations. The surrogate is then readily employed for global sensitivity analysis leading to further dimensionality reduction. Besides numerical tests, we demonstrate the construction on the example of Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
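
    The sketch below shows the generic sparse-regression step that makes such surrogates affordable, using an L1-penalized fit as a stand-in for the Bayesian compressive sensing algorithm of the abstract; the toy model, dimensions, and polynomial order are arbitrary assumptions:

```python
import itertools
import numpy as np
from numpy.polynomial import legendre
from sklearn.linear_model import LassoCV

def pc_basis(X, order):
    """Evaluate a total-order Legendre polynomial-chaos basis on inputs X in [-1, 1]^d."""
    n, d = X.shape
    multi_indices = [m for m in itertools.product(range(order + 1), repeat=d)
                     if sum(m) <= order]
    cols = []
    for m in multi_indices:
        col = np.ones(n)
        for j, deg in enumerate(m):
            coeffs = np.zeros(deg + 1); coeffs[deg] = 1.0
            col *= legendre.legval(X[:, j], coeffs)
        cols.append(col)
    return np.column_stack(cols), multi_indices

# toy model with a sparse dependence on 10 inputs, far fewer samples than a dense fit needs
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 10))
y = 1.5 * X[:, 0] + 0.8 * X[:, 2] * X[:, 3] + 0.05 * rng.standard_normal(200)

Phi, idx = pc_basis(X, order=2)                  # 66 basis terms for d=10, order 2
fit = LassoCV(cv=5).fit(Phi, y)
active = [(idx[k], round(c, 3)) for k, c in enumerate(fit.coef_) if abs(c) > 1e-2]
print(active)   # recovers the few active terms; their squared coefficients feed Sobol indices
```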

  2. Optimized distributed systems achieve significant performance improvement on sorted merging of massive VCF files.

    PubMed

    Sun, Xiaobo; Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng; Qin, Zhaohui S

    2018-06-01

    Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and Input/Output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrating examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, which are benchmarked with the traditional single/parallel multiway-merge methods, message passing interface (MPI)-based high-performance computing (HPC) implementation, and the popular VCFTools. Our experiments suggest all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalabilities over traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems.
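
    As a single-machine baseline for the operation being distributed, a k-way sorted merge keyed by genomic location can be written in a few lines. The field handling below is simplified (it only interleaves records and does not merge per-sample genotype columns), and the file names are hypothetical:

```python
import heapq

CHROM_ORDER = {f"chr{i}": i for i in range(1, 23)} | {"chrX": 23, "chrY": 24}

def read_variants(path):
    """Yield (chrom_rank, pos, line) from a position-sorted, tab-separated VCF body."""
    with open(path) as fh:
        for line in fh:
            if line.startswith("#"):
                continue                                  # skip header lines
            chrom, pos = line.split("\t", 2)[:2]
            yield (CHROM_ORDER[chrom], int(pos), line)

def merge_vcfs(paths, out_path):
    streams = [read_variants(p) for p in paths]
    with open(out_path, "w") as out:
        for _, _, line in heapq.merge(*streams):          # inputs are sorted, so merge is linear
            out.write(line)

# merge_vcfs(["sample1.vcf", "sample2.vcf"], "merged_body.vcf")   # hypothetical file names
```

    The distributed schemas in the paper essentially split this job by genomic region so many such merges run in parallel without a single-machine Input/Output bottleneck.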

  3. Experimental study of a constrained vapor bubble fin heat exchanger in the absence of external natural convection.

    PubMed

    Basu, Sumita; Plawsky, Joel L; Wayner, Peter C

    2004-11-01

    In preparation for a microgravity flight experiment on the International Space Station, a constrained vapor bubble fin heat exchanger (CVB) was operated both in a vacuum chamber and in air on Earth to evaluate the effect of the absence of external natural convection. The long-term objective is a general study of a high heat flux, low capillary pressure system with small viscous effects due to the relatively large 3 x 3 x 40 mm dimensions. The current CVB can be viewed as a large-scale version of a micro heat pipe with a large Bond number in the Earth environment but a small Bond number in microgravity. The walls of the CVB are quartz, to allow for image analysis of naturally occurring interference fringes that give the pressure field for liquid flow. The research is synergistic in that the study requires a microgravity environment to obtain a low Bond number and the space program needs thermal control systems, like the CVB, with a large characteristic dimension. In the absence of natural convection, operation of the CVB may be dominated by external radiative losses from its quartz surface. Therefore, an understanding of radiation from the quartz cell is required. All radiative exchange with the surroundings occurs from the outer surface of the CVB when the temperature range renders the quartz walls of the CVB optically thick (lambda > 4 microns). However, for electromagnetic radiation where lambda < 2 microns, the walls are transparent. Experimental results obtained for a cell charged with pentane are compared with those obtained for a dry cell. A numerical model was developed that successfully simulated the behavior and performance of the device observed experimentally.

  4. Accounting for Parameter Uncertainty in Complex Atmospheric Models, With an Application to Greenhouse Gas Emissions Evaluation

    NASA Astrophysics Data System (ADS)

    Swallow, B.; Rigby, M. L.; Rougier, J.; Manning, A.; Thomson, D.; Webster, H. N.; Lunt, M. F.; O'Doherty, S.

    2016-12-01

    In order to understand underlying processes governing environmental and physical phenomena, a complex mathematical model is usually required. However, there is an inherent uncertainty related to the parameterisation of unresolved processes in these simulators. Here, we focus on the specific problem of accounting for uncertainty in parameter values in an atmospheric chemical transport model. Systematic errors introduced by failing to account for these uncertainties have the potential to have a large effect on resulting estimates of unknown quantities of interest. One approach that is being increasingly used to address this issue is known as emulation, in which a large number of forward runs of the simulator are carried out, in order to approximate the response of the output to changes in parameters. However, due to the complexity of some models, it is often unfeasible to run the large number of training runs usually required for full statistical emulators of the environmental processes. We therefore present a simplified model reduction method for approximating uncertainties in complex environmental simulators without the need for very large numbers of training runs. We illustrate the method through an application to the Met Office's atmospheric transport model NAME. We show how our parameter estimation framework can be incorporated into a hierarchical Bayesian inversion, and demonstrate the impact on estimates of UK methane emissions, using atmospheric mole fraction data. We conclude that accounting for uncertainties in the parameterisation of complex atmospheric models is vital if systematic errors are to be minimized and all relevant uncertainties accounted for. We also note that investigations of this nature can prove extremely useful in highlighting deficiencies in the simulator that might otherwise be missed.

  5. Optimized distributed systems achieve significant performance improvement on sorted merging of massive VCF files

    PubMed Central

    Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng

    2018-01-01

    Background: Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and Input/Output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. Findings: In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrating examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, which are benchmarked with the traditional single/parallel multiway-merge methods, message passing interface (MPI)–based high-performance computing (HPC) implementation, and the popular VCFTools. Conclusions: Our experiments suggest all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalabilities over traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems. PMID:29762754

  6. Design of a practical model-observer-based image quality assessment method for x-ray computed tomography imaging systems

    PubMed Central

    Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.

    2016-01-01

    The use of a channelization mechanism on model observers not only makes mimicking human visual behavior possible, but also reduces the amount of image data needed to estimate the model observer parameters. The channelized Hotelling observer (CHO) and channelized scanning linear observer (CSLO) have recently been used to assess CT image quality for detection tasks and combined detection/estimation tasks, respectively. Although the use of channels substantially reduces the amount of data required to compute image quality, the number of scans required for CT imaging is still not practical for routine use. It is our desire to further reduce the number of scans required to make CHO or CSLO an image quality tool for routine and frequent system validations and evaluations. This work explores different data-reduction schemes and designs an approach that requires only a few CT scans. Three different kinds of approaches are included in this study: a conventional CHO/CSLO technique with a large sample size, a conventional CHO/CSLO technique with fewer samples, and an approach that we will show requires fewer samples to mimic conventional performance with a large sample size. The mean value and standard deviation of the areas under the ROC/EROC curves were estimated using the well-validated shuffle approach. The results indicate that an 80% data reduction can be achieved without loss of accuracy. This substantial data reduction is a step toward a practical tool for routine-task-based QA/QC CT system assessment. PMID:27493982
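
    A minimal numerical sketch of the CHO itself; the channels here are random stand-ins for the Gabor or Laguerre-Gauss channels used in practice, the images are synthetic, and the shuffle-based ROC/EROC estimation from the paper is not reproduced:

```python
import numpy as np

def cho_snr(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer: project images onto channels, form the Hotelling
    template from channelized means and covariances, and return the detectability SNR."""
    vs, vn = signal_imgs @ channels, noise_imgs @ channels           # channel outputs
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))  # pooled channel covariance
    w = np.linalg.solve(S, vs.mean(0) - vn.mean(0))                  # Hotelling template
    ts, tn = vs @ w, vn @ w                                          # scalar test statistics
    return (ts.mean() - tn.mean()) / np.sqrt(0.5 * (ts.var() + tn.var()))

rng = np.random.default_rng(0)
npix, nchan, nimg = 64 * 64, 10, 200
channels = rng.standard_normal((npix, nchan))        # stand-in for Gabor/Laguerre-Gauss channels
signal = np.zeros(npix); signal[:200] = 0.3          # a faint signal confined to a small region
noise_imgs = rng.standard_normal((nimg, npix))
signal_imgs = rng.standard_normal((nimg, npix)) + signal
print(round(cho_snr(signal_imgs, noise_imgs, channels), 2))
```

    Working in the low-dimensional channel space is what keeps the covariance estimate, and hence the number of required scans, manageable.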

  7. Improve California trap programs for detection of fruit flies

    USDA-ARS?s Scientific Manuscript database

    There are >160,000 federal and state fruit fly detection traps deployed in southern and western U.S. States and Puerto Rico. In California alone, >100,000 traps are deployed and maintained just for exotic fruit flies detection. Fruit fly detection and eradication requires deployment of large numbers...

  8. Editorial: The Advent of a Molecular Genetics of General Intelligence.

    ERIC Educational Resources Information Center

    Weiss, Volkmar

    1995-01-01

    Raw IQ scores do not demonstrate the bell curve created by normalized scores, and even a bell-shaped distribution does not require large numbers of underlying genes. Family data support a major gene locus of IQ. The correlation between glutathione peroxidase and IQ should be investigated through molecular genetics. (SLD)

  9. Evaluating Comparative Judgment as an Approach to Essay Scoring

    ERIC Educational Resources Information Center

    Steedle, Jeffrey T.; Ferrara, Steve

    2016-01-01

    As an alternative to rubric scoring, comparative judgment generates essay scores by aggregating decisions about the relative quality of the essays. Comparative judgment eliminates certain scorer biases and potentially reduces training requirements, thereby allowing a large number of judges, including teachers, to participate in essay evaluation.…

  10. Information Model for Reusability in Clinical Trial Documentation

    ERIC Educational Resources Information Center

    Bahl, Bhanu

    2013-01-01

    In clinical research, New Drug Application (NDA) to health agencies requires generation of a large number of documents throughout the clinical development life cycle, many of which are also submitted to public databases and external partners. Current processes to assemble the information, author, review and approve the clinical research documents,…

  11. Unified Approximations: A New Approach for Monoprotic Weak Acid-Base Equilibria

    ERIC Educational Resources Information Center

    Pardue, Harry; Odeh, Ihab N.; Tesfai, Teweldemedhin M.

    2004-01-01

    The unified approximations reduce the conceptual complexity by combining solutions for a relatively large number of different situations into just two similar sets of processes. Processes used to solve problems by either the unified or classical approximations require similar degrees of understanding of the underlying chemical processes.

  12. Evaluation of PLS, LS-SVM, and LWR for quantitative spectroscopic analysis of soils

    USDA-ARS?s Scientific Manuscript database

    Soil testing requires the analysis of large numbers of samples in the laboratory, which is often time-consuming and expensive. Mid-infrared spectroscopy (mid-IR) and near-infrared spectroscopy (NIRS) are fast, non-destructive, and inexpensive analytical methods that have been used for soil analysis, in l...

  13. Computational embryology as an integrative platform for predictive DART (45th Conf of Europ Teratology Society)

    EPA Science Inventory

    Chemical regulation is challenged by the large number of chemicals requiring assessment for potential human health and environmental impacts. For example, the USEPA lists more than 85,000 chemicals on its inventory of substances that fall under the Toxic Substances Control Act (T...

  14. SURFACE WATER FLOW IN LANDSCAPE MODELS: 1. EVERGLADES CASE STUDY. (R824766)

    EPA Science Inventory

    Many landscape models require extensive computational effort using a large array of grid cells that represent the landscape. The number of spatial cells may be in the thousands or millions, while the ecological component runs in each of the cells to account for landscape dynamics...

  15. Applying Adverse Outcome Pathways (AOPs) to support Integrated Approaches to Testing and Assessment (IATA workshop report)

    EPA Science Inventory

    Chemical regulation is challenged by the large number of chemicals requiring assessment for potential human health and environmental impacts. Current approaches are too resource intensive in terms of time, money and animal use to evaluate all chemicals under development or alread...

  16. Behavioral Problems in the Classroom and Underlying Language Difficulties

    ERIC Educational Resources Information Center

    Tommerdahl, Jodi; Semingson, Peggy

    2013-01-01

    Dealing with the behavioral problems of students is one of many dimensions of most educators' and schools' requirements. While research has repeatedly shown that a large number of children with behavior problems have underlying, unrecognized language difficulties, few schools have implemented programs where children with problem behavior are…

  17. Maxi CAI with a Micro.

    ERIC Educational Resources Information Center

    Gerhold, George; And Others

    This paper describes an effective microprocessor-based CAI system which has been repeatedly tested by a large number of students and edited accordingly. Tasks not suitable for microprocessor-based systems (authoring, testing, and debugging) were handled on larger multi-terminal systems. This approach requires that the CAI language used on the…

  18. A Coherent VLSI Design Environment.

    DTIC Science & Technology

    1985-09-30

    deviation were only a few percent. If the number of paths with a delay close to 9 ns were large, even more statistical accuracy would be required to... Zippel, "Capsules," SIGPLAN Bulletin, vol. 18, no. 6, pp. 164-169, 1983. ... waveforms. In the bottom window, the currents into the depletion transistors are

  19. Predicting Contextual Informativeness for Vocabulary Learning

    ERIC Educational Resources Information Center

    Kapelner, Adam; Soterwood, Jeanine; Nessaiver, Shalev; Adlof, Suzanne

    2018-01-01

    Vocabulary knowledge is essential to educational progress. High quality vocabulary instruction requires supportive contextual examples to teach word meaning and proper usage. Identifying such contexts by hand for a large number of words can be difficult. In this work, we take a statistical learning approach to engineer a system that predicts…

  20. Improving crop condition monitoring at field scale by using optimal Landsat and MODIS images

    USDA-ARS?s Scientific Manuscript database

    Satellite remote sensing data at coarse resolution (kilometers) have been widely used in monitoring crop condition for decades. However, crop condition monitoring at field scale requires high resolution data in both time and space. Although a large number of remote sensing instruments with different...

  1. COSTS AND ISSUES RELATED TO REMEDIATION OF PETROLEUM-CONTAMINATED SITES (NEW ORLEANS, LA)

    EPA Science Inventory

    The remediation costs required at sites contaminated with petroleum-derived compounds remain a relevant issue because of the large number of existing underground storage tanks in the United States and the presence of benzene, MTBE, and TBA in some drinking water supplies. Cost inf...

  2. Anxiety in Language Testing: The APTIS Case

    ERIC Educational Resources Information Center

    Valencia Robles, Jeannette de Fátima

    2017-01-01

    The requirement of holding a diploma which certifies proficiency level in a foreign language is constantly increasing in academic and working environments. Computer-based testing has become a prevailing tendency for these and other educational purposes. Each year large numbers of students take online language tests everywhere in the world. In…

  3. Optimal control of large space structures via generalized inverse matrix

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Fang, Xiaowen

    1987-01-01

    Independent Modal Space Control (IMSC) is a control scheme that decouples the space structure into n independent second-order subsystems according to n controlled modes and controls each mode independently. It is well-known that the IMSC eliminates control and observation spillover caused when the conventional coupled modal control scheme is employed. The independent control of each mode requires that the number of actuators be equal to the number of modelled modes, which is very high for a faithful modeling of large space structures. A control scheme is proposed that allows one to use a reduced number of actuators to control all modeled modes suboptimally. In particular, the method of generalized inverse matrices is employed to implement the actuators such that the eigenvalues of the closed-loop system are as close as possible to those specified by the optimal IMSC. Computer simulation of the proposed control scheme on a simply supported beam is given.
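
    The generalized-inverse step described above can be sketched numerically: with more modelled modes than actuators, the desired modal forces from the optimal modal design are mapped onto actuator commands through the Moore-Penrose pseudo-inverse of the modal influence matrix, and the achieved modal forces are then the least-squares approximation to the desired ones. The sketch below is a minimal illustration with hypothetical matrices and dimensions, not the paper's simulation.

      import numpy as np

      rng = np.random.default_rng(1)

      n_modes, n_act = 8, 3                    # assumed: more modelled modes than actuators
      B = rng.normal(size=(n_modes, n_act))    # modal influence matrix (hypothetical)
      f_opt = rng.normal(size=n_modes)         # desired modal forces from the optimal modal design

      # Actuator commands via the Moore-Penrose generalized inverse
      u = np.linalg.pinv(B) @ f_opt

      # Achieved modal forces are the least-squares fit to the desired ones
      f_achieved = B @ u
      print("residual norm:", np.linalg.norm(f_achieved - f_opt))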

  4. HIGH-EFFICIENCY AUTONOMOUS LASER ADAPTIVE OPTICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baranec, Christoph; Riddle, Reed; Tendulkar, Shriharsh

    2014-07-20

    As new large-scale astronomical surveys greatly increase the number of objects targeted and discoveries made, the requirement for efficient follow-up observations is crucial. Adaptive optics imaging, which compensates for the image-blurring effects of Earth's turbulent atmosphere, is essential for these surveys, but the scarcity, complexity and high demand of current systems limit their availability for following up large numbers of targets. To address this need, we have engineered and implemented Robo-AO, a fully autonomous laser adaptive optics and imaging system that routinely images over 200 objects per night with an acuity 10 times sharper at visible wavelengths than typically possible from the ground. By greatly improving the angular resolution, sensitivity, and efficiency of 1-3 m class telescopes, we have eliminated a major obstacle in the follow-up of the discoveries from current and future large astronomical surveys.

  5. A stochastic perturbation method to generate inflow turbulence in large-eddy simulation models: Application to neutrally stratified atmospheric boundary layers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muñoz-Esparza, D.; Kosović, B.; Beeck, J. van

    2015-03-15

    Despite the variety of existing methods, efficient generation of turbulent inflow conditions for large-eddy simulation (LES) models remains a challenging and active research area. Herein, we extend our previous research on the cell perturbation method, which uses a novel stochastic approach based upon finite amplitude perturbations of the potential temperature field applied within a region near the inflow boundaries of the LES domain [Muñoz-Esparza et al., “Bridging the transition from mesoscale to microscale turbulence in numerical weather prediction models,” Boundary-Layer Meteorol., 153, 409–440 (2014)]. The objective was twofold: (i) to identify the governing parameters of the method and their optimum values and (ii) to generalize the results over a broad range of atmospheric large-scale forcing conditions, U_g = 5−25 m s^−1, where U_g is the geostrophic wind. We identified the perturbation Eckert number, Ec = U_g^2/(ρ c_p θ̃_pm), to be the parameter governing the flow transition to turbulence in neutrally stratified boundary layers. Here, θ̃_pm is the maximum perturbation amplitude applied, c_p is the specific heat capacity at constant pressure, and ρ is the density. The optimal Eckert number was found for nonlinear perturbations allowed by Ec ≈ 0.16, which instigate formation of hairpin-like vortices that most rapidly transition to a developed turbulent state. Larger Ec numbers (linear small-amplitude perturbations) result in streaky structures requiring larger fetches to reach the quasi-equilibrium solution, while smaller Ec numbers lead to buoyancy-dominated perturbations exhibiting difficulties for hairpin-like vortices to emerge. Cell perturbations with wavelengths within the inertial range of three-dimensional turbulence achieved identical quasi-equilibrium values of resolved turbulent kinetic energy, q, and Reynolds shear stress. In contrast, large-scale perturbations acting at the production range exhibited reduced levels of Reynolds shear stress, due to the formation of coherent streamwise structures, while q was maintained, requiring larger fetches for the turbulent solution to stabilize. Additionally, the cell perturbation method was compared to a synthetic turbulence generator. The proposed stochastic approach provided at least the same efficiency in developing realistic turbulence, while accelerating the formation of large scales associated with production of turbulent kinetic energy. Also, it is computationally inexpensive and does not require any turbulent information.
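
    As a quick check on the Eckert-number expression quoted above, the arithmetic is straightforward; the values below are illustrative assumptions (not the study's settings), chosen only to show the order of magnitude.

      # Perturbation Eckert number, Ec = U_g^2 / (rho * c_p * theta_pm), evaluated
      # for illustrative, assumed values (not taken from the paper).
      U_g = 10.0        # geostrophic wind, m/s
      rho = 1.2         # air density, kg/m^3
      c_p = 1005.0      # specific heat at constant pressure, J/(kg K)
      theta_pm = 0.5    # maximum potential-temperature perturbation amplitude, K

      Ec = U_g**2 / (rho * c_p * theta_pm)
      print(f"Ec = {Ec:.2f}")   # about 0.17 for these assumed values; the abstract's optimum is Ec ~ 0.16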

  6. Antenna Electronics Concept for the Next-Generation Very Large Array

    NASA Astrophysics Data System (ADS)

    Beasley, Anthony J.; Jackson, Jim; Selina, Robert

    2017-01-01

    The National Radio Astronomy Observatory (NRAO), in collaboration with its international partners, completed two major projects over the past decade: the sensitivity upgrade for the Karl Jansky Very Large Array (VLA) and the construction of the Atacama Large Millimeter/Sub-Millimeter Array (ALMA). The NRAO is now considering the scientific potential and technical feasibility of a next-generation VLA (ngVLA) with an emphasis on thermal imaging at milli-arcsecond resolution. The preliminary goals for the ngVLA are to increase both the system sensitivity and angular resolution of the VLA tenfold and to cover a frequency range of 1.2-116 GHz. A number of key technical challenges have been identified for the project. These include cost-effective antenna manufacturing (in the hundreds), suitable wide-band feed and receiver designs, broad-band data transmission, and large-N correlators. Minimizing the overall operations cost is also a fundamental design requirement. The designs of the antenna electronics, reference distribution system, and data transmission system are anticipated to be major construction and operations cost drivers for the facility. The electronics must achieve a high level of performance, while maintaining low operation and maintenance costs and a high level of reliability. Additionally, due to the uncertainty in the feasibility of wideband receivers, advancements in digitizer technology, and budget constraints, the hardware system architecture should be scalable to the number of receiver bands and the speed and resolution of available digitizers. Here, we present the projected performance requirements of the ngVLA, a proposed block diagram for the instrument’s electronics systems, parameter tradeoffs within the system specifications, and areas of technical risk where technical advances may be required for successful production and installation.

  7. Scalable subsurface inverse modeling of huge data sets with an application to tracer concentration breakthrough data from magnetic resonance imaging

    DOE PAGES

    Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.; ...

    2016-06-09

    Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydro-geophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with “big data” processing and numerous large-scale numerical simulations. To tackle such difficulties, the Principal Component Geostatistical Approach (PCGA) has been proposed as a “Jacobian-free” inversion method that requires far fewer forward simulation runs for each iteration than the number of unknown parameters and measurements needed in the traditional inversion methods. PCGA can be conveniently linked to any multi-physics simulation software with independent parallel executions. In our paper, we extend PCGA to handle a large number of measurements (e.g., 10^6 or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sand box using about 6 million transient tracer concentration measurements obtained using magnetic resonance imaging. Since each individual observation has little information on the K distribution, the data was compressed by the zero-th temporal moment of breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Moreover, only about 2,000 forward simulations in total were required to obtain the best estimate with corresponding estimation uncertainty, and the estimated K field captured key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method. This article is protected by copyright. All rights reserved.
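
    The moment-based compression mentioned above is simple to illustrate: each voxel's breakthrough curve is reduced to a temporal moment, so one number per voxel replaces the full time series. The sketch below is a hypothetical illustration of that reduction on synthetic curves (array sizes, noise level, and peak shapes are assumptions, not the study's data); it computes the zero-th moment and, for reference, the normalized first moment, which corresponds to a mean arrival time.

      import numpy as np

      rng = np.random.default_rng(2)

      # Hypothetical breakthrough data: n_vox voxels observed at n_t time points
      n_vox, n_t = 1000, 120
      t = np.linspace(0.0, 60.0, n_t)                  # time axis, minutes (illustrative)
      dt = t[1] - t[0]
      arrival = rng.uniform(10.0, 40.0, size=n_vox)    # synthetic mean arrival times
      curves = np.exp(-0.5 * ((t[None, :] - arrival[:, None]) / 3.0) ** 2)
      curves += 0.01 * rng.normal(size=curves.shape)   # measurement noise

      # Compression: one or two numbers per voxel instead of n_t samples
      m0 = curves.sum(axis=1) * dt                          # zero-th temporal moment
      m1 = (curves * t[None, :]).sum(axis=1) * dt / m0      # normalized first moment

      print(curves.size, "raw samples ->", m0.size, "compressed values")
      print("first voxel: m0 = %.2f, mean arrival ~ %.1f (true %.1f)" % (m0[0], m1[0], arrival[0]))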

  8. Scalable subsurface inverse modeling of huge data sets with an application to tracer concentration breakthrough data from magnetic resonance imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.

    Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydro-geophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with “big data” processing and numerous large-scale numerical simulations. To tackle such difficulties, the Principal Component Geostatistical Approach (PCGA) has been proposed as a “Jacobian-free” inversion method that requires far fewer forward simulation runs for each iteration than the number of unknown parameters and measurements needed in the traditional inversion methods. PCGA can be conveniently linked to any multi-physics simulation software with independent parallel executions. In our paper, we extend PCGA to handle a large number of measurements (e.g., 10^6 or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sand box using about 6 million transient tracer concentration measurements obtained using magnetic resonance imaging. Since each individual observation has little information on the K distribution, the data was compressed by the zero-th temporal moment of breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Moreover, only about 2,000 forward simulations in total were required to obtain the best estimate with corresponding estimation uncertainty, and the estimated K field captured key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method. This article is protected by copyright. All rights reserved.

  9. Parallel group independent component analysis for massive fMRI data sets.

    PubMed

    Chen, Shaojie; Huang, Lei; Qiu, Huitong; Nebel, Mary Beth; Mostofsky, Stewart H; Pekar, James J; Lindquist, Martin A; Eloyan, Ani; Caffo, Brian S

    2017-01-01

    Independent component analysis (ICA) is widely used in the field of functional neuroimaging to decompose data into spatio-temporal patterns of co-activation. In particular, ICA has found wide usage in the analysis of resting state fMRI (rs-fMRI) data. Recently, a number of large-scale data sets have become publicly available that consist of rs-fMRI scans from thousands of subjects. As a result, efficient ICA algorithms that scale well to the increased number of subjects are required. To address this problem, we propose a two-stage likelihood-based algorithm for performing group ICA, which we denote Parallel Group Independent Component Analysis (PGICA). By utilizing the sequential nature of the algorithm and parallel computing techniques, we are able to efficiently analyze data sets from large numbers of subjects. We illustrate the efficacy of PGICA, which has been implemented in R and is freely available through the Comprehensive R Archive Network, through simulation studies and application to rs-fMRI data from two large multi-subject data sets, consisting of 301 and 779 subjects respectively.
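
    The two-stage structure described above can be sketched schematically: a per-subject dimension-reduction stage that runs independently (and therefore in parallel) across subjects, followed by a group-level decomposition of the stacked per-subject summaries. The snippet below is a structural sketch only, using SVD/PCA as a stand-in for the likelihood-based ICA stages; the data sizes, component count, and thread-based parallelism are assumptions, not the PGICA implementation (which the abstract notes is in R).

      import numpy as np
      from concurrent.futures import ThreadPoolExecutor

      rng = np.random.default_rng(3)

      # Hypothetical rs-fMRI-like data: 6 subjects, 200 time points, 500 voxels
      subjects = [rng.normal(size=(200, 500)) for _ in range(6)]
      k = 10  # components retained per subject

      def reduce_subject(x):
          # Stage 1: per-subject reduction; each call is independent, so it parallelizes
          u, s, vt = np.linalg.svd(x - x.mean(0), full_matrices=False)
          return vt[:k]

      with ThreadPoolExecutor() as pool:
          reduced = list(pool.map(reduce_subject, subjects))

      # Stage 2: group-level decomposition on the stacked per-subject summaries
      stacked = np.vstack(reduced)                     # (n_subjects * k) x voxels
      _, _, group_vt = np.linalg.svd(stacked, full_matrices=False)
      group_maps = group_vt[:k]                        # group-level spatial components
      print("group component matrix:", group_maps.shape)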

  10. Accelerating root system phenotyping of seedlings through a computer-assisted processing pipeline.

    PubMed

    Dupuy, Lionel X; Wright, Gladys; Thompson, Jacqueline A; Taylor, Anna; Dekeyser, Sebastien; White, Christopher P; Thomas, William T B; Nightingale, Mark; Hammond, John P; Graham, Neil S; Thomas, Catherine L; Broadley, Martin R; White, Philip J

    2017-01-01

    There are numerous systems and techniques to measure the growth of plant roots. However, phenotyping large numbers of plant roots for breeding and genetic analyses remains challenging. One major difficulty is to achieve high throughput and resolution at a reasonable cost per plant sample. Here we describe a cost-effective root phenotyping pipeline, on which we perform time and accuracy benchmarking to identify bottlenecks in such pipelines and strategies for their acceleration. Our root phenotyping pipeline was assembled with custom software and low-cost materials and equipment. Results show that sample preparation and handling of samples during screening are the most time-consuming tasks in root phenotyping. Algorithms can be used to speed up the extraction of root traits from image data, but when applied to large numbers of images, there is a trade-off between the time needed to process the data and the errors contained in the database. Scaling up root phenotyping to large numbers of genotypes will require not only automation of sample preparation and sample handling, but also efficient algorithms for error detection for more reliable replacement of manual interventions.

  11. A continuum theory for multicomponent chromatography modeling.

    PubMed

    Pfister, David; Morbidelli, Massimo; Nicoud, Roger-Marc

    2016-05-13

    A continuum theory is proposed for modeling multicomponent chromatographic systems under linear conditions. The model is based on the description of complex mixtures, possibly involving tens or hundreds of solutes, by a continuum. The present approach is shown to be very efficient when dealing with a large number of similar components presenting close elution behaviors and whose individual analytical characterization is impossible. Moreover, approximating complex mixtures by continuous distributions of solutes reduces the required number of model parameters to the few specific to the characterization of the selected continuous distributions. Therefore, within the framework of the continuum theory, the simulation of large multicomponent systems is simplified and the computational effectiveness of the chromatographic model is thus dramatically improved. Copyright © 2016 Elsevier B.V. All rights reserved.
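
    The parameter reduction described above can be sketched with a toy calculation: under linear conditions the detector signal is a superposition of solute peaks, so a mixture of many similar solutes can be represented by a continuous distribution of retention times characterized by only a mean and a spread. The snippet below is an illustrative sketch with assumed numbers, not the model of the paper.

      import numpy as np

      t = np.linspace(0.0, 30.0, 600)           # elution time axis, min (illustrative)
      mu_R, sigma_R = 12.0, 2.5                 # parameters of the continuous solute distribution
      peak_width = 0.4                          # common peak standard deviation, min

      # Discretize the continuum into many "pseudo-solutes" weighted by the distribution
      t_R = np.linspace(mu_R - 4 * sigma_R, mu_R + 4 * sigma_R, 200)
      weights = np.exp(-0.5 * ((t_R - mu_R) / sigma_R) ** 2)
      weights /= weights.sum()

      # Under linear chromatography the signal is the superposition of the individual peaks
      signal = (weights[:, None] * np.exp(-0.5 * ((t[None, :] - t_R[:, None]) / peak_width) ** 2)).sum(0)
      print(f"simulated chromatogram peaks near t = {t[signal.argmax()]:.1f} min")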

  12. CTRI – Clicking to greater transparency and accountability

    PubMed Central

    George, Bobby

    2012-01-01

    A clinical trial registry (CTR) is an official platform for registering a clinical trial (CT) with an objective of providing increased transparency and access to CTs to the public at large. Clinical Trials Registry - India (CTRI) is a free online public record system for registration of CTs being conducted in India. The vision of the CTRI is to ensure that every CT conducted in the region is prospectively registered with full disclosure of the trial data set items. With a growing number of CTs being conducted in the country, a large number of them global multicentre trials, it is binding on the industry/investigators/sponsor to comply with the requirements laid down. While there are pros and cons, there is enough scope for improvement of CTRI. PMID:23293758

  13. Search for dark matter and other new phenomena in events with an energetic jet and large missing transverse momentum using the ATLAS detector

    NASA Astrophysics Data System (ADS)

    Aaboud, M.; Aad, G.; Abbott, B.; Abdinov, O.; Abeloos, B.; Abidi, S. H.; AbouZeid, O. S.; Abraham, N. L.; Abramowicz, H.; Abreu, H.; Abreu, R.; Abulaiti, Y.; Acharya, B. S.; Adachi, S.; Adamczyk, L.; Adelman, J.; Adersberger, M.; Adye, T.; Affolder, A. A.; Afik, Y.; Agatonovic-Jovin, T.; Agheorghiesei, C.; Aguilar-Saavedra, J. A.; Ahlen, S. P.; Ahmadov, F.; Aielli, G.; Akatsuka, S.; Akerstedt, H.; Åkesson, T. P. A.; Akilli, E.; Akimov, A. V.; Alberghi, G. L.; Albert, J.; Albicocco, P.; Alconada Verzini, M. J.; Alderweireldt, S. C.; Aleksa, M.; Aleksandrov, I. N.; Alexa, C.; Alexander, G.; Alexopoulos, T.; Alhroob, M.; Ali, B.; Aliev, M.; Alimonti, G.; Alison, J.; Alkire, S. P.; Allbrooke, B. M. M.; Allen, B. W.; Allport, P. P.; Aloisio, A.; Alonso, A.; Alonso, F.; Alpigiani, C.; Alshehri, A. A.; Alstaty, M. I.; Alvarez Gonzalez, B.; Álvarez Piqueras, D.; Alviggi, M. G.; Amadio, B. T.; Amaral Coutinho, Y.; Amelung, C.; Amidei, D.; Amor Dos Santos, S. P.; Amoroso, S.; Amundsen, G.; Anastopoulos, C.; Ancu, L. S.; Andari, N.; Andeen, T.; Anders, C. F.; Anders, J. K.; Anderson, K. J.; Andreazza, A.; Andrei, V.; Angelidakis, S.; Angelozzi, I.; Angerami, A.; Anisenkov, A. V.; Anjos, N.; Annovi, A.; Antel, C.; Antonelli, M.; Antonov, A.; Antrim, D. J.; Anulli, F.; Aoki, M.; Aperio Bella, L.; Arabidze, G.; Arai, Y.; Araque, J. P.; Araujo Ferraz, V.; Arce, A. T. H.; Ardell, R. E.; Arduh, F. A.; Arguin, J.-F.; Argyropoulos, S.; Arik, M.; Armbruster, A. J.; Armitage, L. J.; Arnaez, O.; Arnold, H.; Arratia, M.; Arslan, O.; Artamonov, A.; Artoni, G.; Artz, S.; Asai, S.; Asbah, N.; Ashkenazi, A.; Asquith, L.; Assamagan, K.; Astalos, R.; Atkinson, M.; Atlay, N. B.; Augsten, K.; Avolio, G.; Axen, B.; Ayoub, M. K.; Azuelos, G.; Baas, A. E.; Baca, M. J.; Bachacou, H.; Bachas, K.; Backes, M.; Bagnaia, P.; Bahmani, M.; Bahrasemani, H.; Baines, J. T.; Bajic, M.; Baker, O. K.; Bakker, P. J.; Baldin, E. M.; Balek, P.; Balli, F.; Balunas, W. K.; Banas, E.; Bandyopadhyay, A.; Banerjee, Sw.; Bannoura, A. A. E.; Barak, L.; Barberio, E. L.; Barberis, D.; Barbero, M.; Barillari, T.; Barisits, M.-S.; Barkeloo, J. T.; Barklow, T.; Barlow, N.; Barnes, S. L.; Barnett, B. M.; Barnett, R. M.; Barnovska-Blenessy, Z.; Baroncelli, A.; Barone, G.; Barr, A. J.; Barranco Navarro, L.; Barreiro, F.; Barreiro Guimarães da Costa, J.; Bartoldus, R.; Barton, A. E.; Bartos, P.; Basalaev, A.; Bassalat, A.; Bates, R. L.; Batista, S. J.; Batley, J. R.; Battaglia, M.; Bauce, M.; Bauer, F.; Bawa, H. S.; Beacham, J. B.; Beattie, M. D.; Beau, T.; Beauchemin, P. H.; Bechtle, P.; Beck, H. P.; Beck, H. C.; Becker, K.; Becker, M.; Becot, C.; Beddall, A. J.; Beddall, A.; Bednyakov, V. A.; Bedognetti, M.; Bee, C. P.; Beermann, T. A.; Begalli, M.; Begel, M.; Behr, J. K.; Bell, A. S.; Bella, G.; Bellagamba, L.; Bellerive, A.; Bellomo, M.; Belotskiy, K.; Beltramello, O.; Belyaev, N. L.; Benary, O.; Benchekroun, D.; Bender, M.; Benekos, N.; Benhammou, Y.; Benhar Noccioli, E.; Benitez, J.; Benjamin, D. P.; Benoit, M.; Bensinger, J. R.; Bentvelsen, S.; Beresford, L.; Beretta, M.; Berge, D.; Bergeaas Kuutmann, E.; Berger, N.; Bergsten, L. J.; Beringer, J.; Berlendis, S.; Bernard, N. R.; Bernardi, G.; Bernius, C.; Bernlochner, F. U.; Berry, T.; Berta, P.; Bertella, C.; Bertoli, G.; Bertram, I. A.; Bertsche, C.; Besjes, G. J.; Bessidskaia Bylund, O.; Bessner, M.; Besson, N.; Bethani, A.; Bethke, S.; Betti, A.; Bevan, A. J.; Beyer, J.; Bianchi, R. M.; Biebel, O.; Biedermann, D.; Bielski, R.; Bierwagen, K.; Biesuz, N. V.; Biglietti, M.; Billoud, T. R. 
V.; Bilokon, H.; Bindi, M.; Bingul, A.; Bini, C.; Biondi, S.; Bisanz, T.; Bittrich, C.; Bjergaard, D. M.; Black, J. E.; Black, K. M.; Blair, R. E.; Blazek, T.; Bloch, I.; Blocker, C.; Blue, A.; Blumenschein, U.; Blunier, S.; Bobbink, G. J.; Bobrovnikov, V. S.; Bocchetta, S. S.; Bocci, A.; Bock, C.; Boehler, M.; Boerner, D.; Bogavac, D.; Bogdanchikov, A. G.; Bohm, C.; Boisvert, V.; Bokan, P.; Bold, T.; Boldyrev, A. S.; Bolz, A. E.; Bomben, M.; Bona, M.; Boonekamp, M.; Borisov, A.; Borissov, G.; Bortfeldt, J.; Bortoletto, D.; Bortolotto, V.; Boscherini, D.; Bosman, M.; Bossio Sola, J. D.; Boudreau, J.; Bouhova-Thacker, E. V.; Boumediene, D.; Bourdarios, C.; Boutle, S. K.; Boveia, A.; Boyd, J.; Boyko, I. R.; Bozson, A. J.; Bracinik, J.; Brandt, A.; Brandt, G.; Brandt, O.; Braren, F.; Bratzler, U.; Brau, B.; Brau, J. E.; Breaden Madden, W. D.; Brendlinger, K.; Brennan, A. J.; Brenner, L.; Brenner, R.; Bressler, S.; Briglin, D. L.; Bristow, T. M.; Britton, D.; Britzger, D.; Brochu, F. M.; Brock, I.; Brock, R.; Brooijmans, G.; Brooks, T.; Brooks, W. K.; Brosamer, J.; Brost, E.; Broughton, J. H.; Bruckman de Renstrom, P. A.; Bruncko, D.; Bruni, A.; Bruni, G.; Bruni, L. S.; Bruno, S.; Brunt, BH; Bruschi, M.; Bruscino, N.; Bryant, P.; Bryngemark, L.; Buanes, T.; Buat, Q.; Buchholz, P.; Buckley, A. G.; Budagov, I. A.; Buehrer, F.; Bugge, M. K.; Bulekov, O.; Bullock, D.; Burch, T. J.; Burdin, S.; Burgard, C. D.; Burger, A. M.; Burghgrave, B.; Burka, K.; Burke, S.; Burmeister, I.; Burr, J. T. P.; Büscher, D.; Büscher, V.; Bussey, P.; Butler, J. M.; Buttar, C. M.; Butterworth, J. M.; Butti, P.; Buttinger, W.; Buzatu, A.; Buzykaev, A. R.; Changqiao, C.-Q.; Cabrera Urbán, S.; Caforio, D.; Cai, H.; Cairo, V. M.; Cakir, O.; Calace, N.; Calafiura, P.; Calandri, A.; Calderini, G.; Calfayan, P.; Callea, G.; Caloba, L. P.; Calvente Lopez, S.; Calvet, D.; Calvet, S.; Calvet, T. P.; Camacho Toro, R.; Camarda, S.; Camarri, P.; Cameron, D.; Caminal Armadans, R.; Camincher, C.; Campana, S.; Campanelli, M.; Camplani, A.; Campoverde, A.; Canale, V.; Cano Bret, M.; Cantero, J.; Cao, T.; Capeans Garrido, M. D. M.; Caprini, I.; Caprini, M.; Capua, M.; Carbone, R. M.; Cardarelli, R.; Cardillo, F.; Carli, I.; Carli, T.; Carlino, G.; Carlson, B. T.; Carminati, L.; Carney, R. M. D.; Caron, S.; Carquin, E.; Carrá, S.; Carrillo-Montoya, G. D.; Casadei, D.; Casado, M. P.; Casha, A. F.; Casolino, M.; Casper, D. W.; Castelijn, R.; Castillo Gimenez, V.; Castro, N. F.; Catinaccio, A.; Catmore, J. R.; Cattai, A.; Caudron, J.; Cavaliere, V.; Cavallaro, E.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Celebi, E.; Ceradini, F.; Cerda Alberich, L.; Cerqueira, A. S.; Cerri, A.; Cerrito, L.; Cerutti, F.; Cervelli, A.; Cetin, S. A.; Chafaq, A.; Chakraborty, D.; Chan, S. K.; Chan, W. S.; Chan, Y. L.; Chang, P.; Chapman, J. D.; Charlton, D. G.; Chau, C. C.; Chavez Barajas, C. A.; Che, S.; Cheatham, S.; Chegwidden, A.; Chekanov, S.; Chekulaev, S. V.; Chelkov, G. A.; Chelstowska, M. A.; Chen, C.; Chen, C.; Chen, H.; Chen, J.; Chen, S.; Chen, S.; Chen, X.; Chen, Y.; Cheng, H. C.; Cheng, H. J.; Cheplakov, A.; Cheremushkina, E.; Cherkaoui El Moursli, R.; Cheu, E.; Cheung, K.; Chevalier, L.; Chiarella, V.; Chiarelli, G.; Chiodini, G.; Chisholm, A. S.; Chitan, A.; Chiu, Y. H.; Chizhov, M. V.; Choi, K.; Chomont, A. R.; Chouridou, S.; Chow, Y. S.; Christodoulou, V.; Chu, M. C.; Chudoba, J.; Chuinard, A. J.; Chwastowski, J. J.; Chytka, L.; Ciftci, A. K.; Cinca, D.; Cindro, V.; Cioara, I. A.; Ciocio, A.; Cirotto, F.; Citron, Z. 
H.; Citterio, M.; Ciubancan, M.; Clark, A.; Clark, B. L.; Clark, M. R.; Clark, P. J.; Clarke, R. N.; Clement, C.; Coadou, Y.; Cobal, M.; Coccaro, A.; Cochran, J.; Colasurdo, L.; Cole, B.; Colijn, A. P.; Collot, J.; Colombo, T.; Conde Muiño, P.; Coniavitis, E.; Connell, S. H.; Connelly, I. A.; Constantinescu, S.; Conti, G.; Conventi, F.; Cooke, M.; Cooper-Sarkar, A. M.; Cormier, F.; Cormier, K. J. R.; Corradi, M.; Corriveau, F.; Cortes-Gonzalez, A.; Costa, G.; Costa, M. J.; Costanzo, D.; Cottin, G.; Cowan, G.; Cox, B. E.; Cranmer, K.; Crawley, S. J.; Creager, R. A.; Cree, G.; Crépé-Renaudin, S.; Crescioli, F.; Cribbs, W. A.; Cristinziani, M.; Croft, V.; Crosetti, G.; Cueto, A.; Cuhadar Donszelmann, T.; Cukierman, A. R.; Cummings, J.; Curatolo, M.; Cúth, J.; Czekierda, S.; Czodrowski, P.; D'amen, G.; D'Auria, S.; D'eramo, L.; D'Onofrio, M.; Da Cunha Sargedas De Sousa, M. J.; Da Via, C.; Dabrowski, W.; Dado, T.; Dai, T.; Dale, O.; Dallaire, F.; Dallapiccola, C.; Dam, M.; Dandoy, J. R.; Daneri, M. F.; Dang, N. P.; Daniells, A. C.; Dann, N. S.; Danninger, M.; Dano Hoffmann, M.; Dao, V.; Darbo, G.; Darmora, S.; Dassoulas, J.; Dattagupta, A.; Daubney, T.; Davey, W.; David, C.; Davidek, T.; Davis, D. R.; Davison, P.; Dawe, E.; Dawson, I.; De, K.; de Asmundis, R.; De Benedetti, A.; De Castro, S.; De Cecco, S.; De Groot, N.; de Jong, P.; De la Torre, H.; De Lorenzi, F.; De Maria, A.; De Pedis, D.; De Salvo, A.; De Sanctis, U.; De Santo, A.; De Vasconcelos Corga, K.; De Vivie De Regie, J. B.; Debbe, R.; Debenedetti, C.; Dedovich, D. V.; Dehghanian, N.; Deigaard, I.; Del Gaudio, M.; Del Peso, J.; Delgove, D.; Deliot, F.; Delitzsch, C. M.; Dell'Acqua, A.; Dell'Asta, L.; Dell'Orso, M.; Della Pietra, M.; della Volpe, D.; Delmastro, M.; Delporte, C.; Delsart, P. A.; DeMarco, D. A.; Demers, S.; Demichev, M.; Demilly, A.; Denisov, S. P.; Denysiuk, D.; Derendarz, D.; Derkaoui, J. E.; Derue, F.; Dervan, P.; Desch, K.; Deterre, C.; Dette, K.; Devesa, M. R.; Deviveiros, P. O.; Dewhurst, A.; Dhaliwal, S.; Di Bello, F. A.; Di Ciaccio, A.; Di Ciaccio, L.; Di Clemente, W. K.; Di Donato, C.; Di Girolamo, A.; Di Girolamo, B.; Di Micco, B.; Di Nardo, R.; Di Petrillo, K. F.; Di Simone, A.; Di Sipio, R.; Di Valentino, D.; Diaconu, C.; Diamond, M.; Dias, F. A.; Diaz, M. A.; Dickinson, J.; Diehl, E. B.; Dietrich, J.; Díez Cornell, S.; Dimitrievska, A.; Dingfelder, J.; Dita, P.; Dita, S.; Dittus, F.; Djama, F.; Djobava, T.; Djuvsland, J. I.; do Vale, M. A. B.; Dobos, D.; Dobre, M.; Dodsworth, D.; Doglioni, C.; Dolejsi, J.; Dolezal, Z.; Donadelli, M.; Donati, S.; Dondero, P.; Donini, J.; Dopke, J.; Doria, A.; Dova, M. T.; Doyle, A. T.; Drechsler, E.; Dris, M.; Du, Y.; Duarte-Campderros, J.; Dubinin, F.; Dubreuil, A.; Duchovni, E.; Duckeck, G.; Ducourthial, A.; Ducu, O. A.; Duda, D.; Dudarev, A.; Dudder, A. Chr.; Duffield, E. M.; Duflot, L.; Dührssen, M.; Dulsen, C.; Dumancic, M.; Dumitriu, A. E.; Duncan, A. K.; Dunford, M.; Duperrin, A.; Duran Yildiz, H.; Düren, M.; Durglishvili, A.; Duschinger, D.; Dutta, B.; Duvnjak, D.; Dyndal, M.; Dziedzic, B. S.; Eckardt, C.; Ecker, K. M.; Edgar, R. C.; Eifert, T.; Eigen, G.; Einsweiler, K.; Ekelof, T.; El Kacimi, M.; El Kosseifi, R.; Ellajosyula, V.; Ellert, M.; Elles, S.; Ellinghaus, F.; Elliot, A. A.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Emeliyanov, D.; Enari, Y.; Ennis, J. S.; Epland, M. B.; Erdmann, J.; Ereditato, A.; Ernst, M.; Errede, S.; Escalier, M.; Escobar, C.; Esposito, B.; Estrada Pastor, O.; Etienvre, A. 
I.; Etzion, E.; Evans, H.; Ezhilov, A.; Ezzi, M.; Fabbri, F.; Fabbri, L.; Fabiani, V.; Facini, G.; Fakhrutdinov, R. M.; Falciano, S.; Falla, R. J.; Faltova, J.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farina, C.; Farina, E. M.; Farooque, T.; Farrell, S.; Farrington, S. M.; Farthouat, P.; Fassi, F.; Fassnacht, P.; Fassouliotis, D.; Faucci Giannelli, M.; Favareto, A.; Fawcett, W. J.; Fayard, L.; Fedin, O. L.; Fedorko, W.; Feigl, S.; Feligioni, L.; Feng, C.; Feng, E. J.; Fenton, M. J.; Fenyuk, A. B.; Feremenga, L.; Fernandez Martinez, P.; Ferrando, J.; Ferrari, A.; Ferrari, P.; Ferrari, R.; Ferreira de Lima, D. E.; Ferrer, A.; Ferrere, D.; Ferretti, C.; Fiedler, F.; Filipčič, A.; Filipuzzi, M.; Filthaut, F.; Fincke-Keeler, M.; Finelli, K. D.; Fiolhais, M. C. N.; Fiorini, L.; Fischer, A.; Fischer, C.; Fischer, J.; Fisher, W. C.; Flaschel, N.; Fleck, I.; Fleischmann, P.; Fletcher, R. R. M.; Flick, T.; Flierl, B. M.; Flores Castillo, L. R.; Flowerdew, M. J.; Forcolin, G. T.; Formica, A.; Förster, F. A.; Forti, A.; Foster, A. G.; Fournier, D.; Fox, H.; Fracchia, S.; Francavilla, P.; Franchini, M.; Franchino, S.; Francis, D.; Franconi, L.; Franklin, M.; Frate, M.; Fraternali, M.; Freeborn, D.; Fressard-Batraneanu, S. M.; Freund, B.; Froidevaux, D.; Frost, J. A.; Fukunaga, C.; Fusayasu, T.; Fuster, J.; Gabizon, O.; Gabrielli, A.; Gabrielli, A.; Gach, G. P.; Gadatsch, S.; Gadomski, S.; Gagliardi, G.; Gagnon, L. G.; Galea, C.; Galhardo, B.; Gallas, E. J.; Gallop, B. J.; Gallus, P.; Galster, G.; Gan, K. K.; Ganguly, S.; Gao, Y.; Gao, Y. S.; Garay Walls, F. M.; García, C.; García Navarro, J. E.; García Pascual, J. A.; Garcia-Sciveres, M.; Gardner, R. W.; Garelli, N.; Garonne, V.; Gascon Bravo, A.; Gasnikova, K.; Gatti, C.; Gaudiello, A.; Gaudio, G.; Gavrilenko, I. L.; Gay, C.; Gaycken, G.; Gazis, E. N.; Gee, C. N. P.; Geisen, J.; Geisen, M.; Geisler, M. P.; Gellerstedt, K.; Gemme, C.; Genest, M. H.; Geng, C.; Gentile, S.; Gentsos, C.; George, S.; Gerbaudo, D.; Geßner, G.; Ghasemi, S.; Ghneimat, M.; Giacobbe, B.; Giagu, S.; Giangiacomi, N.; Giannetti, P.; Gibson, S. M.; Gignac, M.; Gilchriese, M.; Gillberg, D.; Gilles, G.; Gingrich, D. M.; Giordani, M. P.; Giorgi, F. M.; Giraud, P. F.; Giromini, P.; Giugliarelli, G.; Giugni, D.; Giuli, F.; Giuliani, C.; Giulini, M.; Gjelsten, B. K.; Gkaitatzis, S.; Gkialas, I.; Gkougkousis, E. L.; Gkountoumis, P.; Gladilin, L. K.; Glasman, C.; Glatzer, J.; Glaysher, P. C. F.; Glazov, A.; Goblirsch-Kolb, M.; Godlewski, J.; Goldfarb, S.; Golling, T.; Golubkov, D.; Gomes, A.; Gonçalo, R.; Goncalves Gama, R.; Goncalves Pinto Firmino Da Costa, J.; Gonella, G.; Gonella, L.; Gongadze, A.; Gonski, J. L.; González de la Hoz, S.; Gonzalez-Sevilla, S.; Goossens, L.; Gorbounov, P. A.; Gordon, H. A.; Gorelov, I.; Gorini, B.; Gorini, E.; Gorišek, A.; Goshaw, A. T.; Gössling, C.; Gostkin, M. I.; Gottardo, C. A.; Goudet, C. R.; Goujdami, D.; Goussiou, A. G.; Govender, N.; Gozani, E.; Grabowska-Bold, I.; Gradin, P. O. J.; Gramling, J.; Gramstad, E.; Grancagnolo, S.; Gratchev, V.; Gravila, P. M.; Gray, C.; Gray, H. M.; Greenwood, Z. D.; Grefe, C.; Gregersen, K.; Gregor, I. M.; Grenier, P.; Grevtsov, K.; Griffiths, J.; Grillo, A. A.; Grimm, K.; Grinstein, S.; Gris, Ph.; Grivaz, J.-F.; Groh, S.; Gross, E.; Grosse-Knetter, J.; Grossi, G. C.; Grout, Z. J.; Grummer, A.; Guan, L.; Guan, W.; Guenther, J.; Guescini, F.; Guest, D.; Gueta, O.; Gui, B.; Guido, E.; Guillemin, T.; Guindon, S.; Gul, U.; Gumpert, C.; Guo, J.; Guo, W.; Guo, Y.; Gupta, R.; Gurbuz, S.; Gustavino, G.; Gutelman, B. 
J.; Gutierrez, P.; Gutierrez Ortiz, N. G.; Gutschow, C.; Guyot, C.; Guzik, M. P.; Gwenlan, C.; Gwilliam, C. B.; Haas, A.; Haber, C.; Hadavand, H. K.; Haddad, N.; Hadef, A.; Hageböck, S.; Hagihara, M.; Hakobyan, H.; Haleem, M.; Haley, J.; Halladjian, G.; Hallewell, G. D.; Hamacher, K.; Hamal, P.; Hamano, K.; Hamilton, A.; Hamity, G. N.; Hamnett, P. G.; Han, L.; Han, S.; Hanagaki, K.; Hanawa, K.; Hance, M.; Handl, D. M.; Haney, B.; Hanke, P.; Hansen, J. B.; Hansen, J. D.; Hansen, M. C.; Hansen, P. H.; Hara, K.; Hard, A. S.; Harenberg, T.; Hariri, F.; Harkusha, S.; Harrison, P. F.; Hartmann, N. M.; Hasegawa, Y.; Hasib, A.; Hassani, S.; Haug, S.; Hauser, R.; Hauswald, L.; Havener, L. B.; Havranek, M.; Hawkes, C. M.; Hawkings, R. J.; Hayakawa, D.; Hayden, D.; Hays, C. P.; Hays, J. M.; Hayward, H. S.; Haywood, S. J.; Head, S. J.; Heck, T.; Hedberg, V.; Heelan, L.; Heer, S.; Heidegger, K. K.; Heim, S.; Heim, T.; Heinemann, B.; Heinrich, J. J.; Heinrich, L.; Heinz, C.; Hejbal, J.; Helary, L.; Held, A.; Hellman, S.; Helsens, C.; Henderson, R. C. W.; Heng, Y.; Henkelmann, S.; Henriques Correia, A. M.; Henrot-Versille, S.; Herbert, G. H.; Herde, H.; Herget, V.; Hernández Jiménez, Y.; Herr, H.; Herten, G.; Hertenberger, R.; Hervas, L.; Herwig, T. C.; Hesketh, G. G.; Hessey, N. P.; Hetherly, J. W.; Higashino, S.; Higón-Rodriguez, E.; Hildebrand, K.; Hill, E.; Hill, J. C.; Hiller, K. H.; Hillier, S. J.; Hils, M.; Hinchliffe, I.; Hirose, M.; Hirschbuehl, D.; Hiti, B.; Hladik, O.; Hlaluku, D. R.; Hoad, X.; Hobbs, J.; Hod, N.; Hodgkinson, M. C.; Hodgson, P.; Hoecker, A.; Hoeferkamp, M. R.; Hoenig, F.; Hohn, D.; Holmes, T. R.; Holzbock, M.; Homann, M.; Honda, S.; Honda, T.; Hong, T. M.; Hooberman, B. H.; Hopkins, W. H.; Horii, Y.; Horton, A. J.; Hostachy, J.-Y.; Hostiuc, A.; Hou, S.; Hoummada, A.; Howarth, J.; Hoya, J.; Hrabovsky, M.; Hrdinka, J.; Hristova, I.; Hrivnac, J.; Hryn'ova, T.; Hrynevich, A.; Hsu, P. J.; Hsu, S.-C.; Hu, Q.; Hu, S.; Huang, Y.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Huffman, T. B.; Hughes, E. W.; Huhtinen, M.; Hunter, R. F. H.; Huo, P.; Huseynov, N.; Huston, J.; Huth, J.; Hyneman, R.; Iacobucci, G.; Iakovidis, G.; Ibragimov, I.; Iconomidou-Fayard, L.; Idrissi, Z.; Iengo, P.; Igonkina, O.; Iizawa, T.; Ikegami, Y.; Ikeno, M.; Ilchenko, Y.; Iliadis, D.; Ilic, N.; Iltzsche, F.; Introzzi, G.; Ioannou, P.; Iodice, M.; Iordanidou, K.; Ippolito, V.; Isacson, M. F.; Ishijima, N.; Ishino, M.; Ishitsuka, M.; Issever, C.; Istin, S.; Ito, F.; Iturbe Ponce, J. M.; Iuppa, R.; Iwasaki, H.; Izen, J. M.; Izzo, V.; Jabbar, S.; Jackson, P.; Jacobs, R. M.; Jain, V.; Jakobi, K. B.; Jakobs, K.; Jakobsen, S.; Jakoubek, T.; Jamin, D. O.; Jana, D. K.; Jansky, R.; Janssen, J.; Janus, M.; Janus, P. A.; Jarlskog, G.; Javadov, N.; Javůrek, T.; Javurkova, M.; Jeanneau, F.; Jeanty, L.; Jejelava, J.; Jelinskas, A.; Jenni, P.; Jeske, C.; Jézéquel, S.; Ji, H.; Jia, J.; Jiang, H.; Jiang, Y.; Jiang, Z.; Jiggins, S.; Jimenez Pena, J.; Jin, S.; Jinaru, A.; Jinnouchi, O.; Jivan, H.; Johansson, P.; Johns, K. A.; Johnson, C. A.; Johnson, W. J.; Jon-And, K.; Jones, R. W. L.; Jones, S. D.; Jones, S.; Jones, T. J.; Jongmanns, J.; Jorge, P. M.; Jovicevic, J.; Ju, X.; Juste Rozas, A.; Köhler, M. K.; Kaczmarska, A.; Kado, M.; Kagan, H.; Kagan, M.; Kahn, S. J.; Kaji, T.; Kajomovitz, E.; Kalderon, C. W.; Kaluza, A.; Kama, S.; Kamenshchikov, A.; Kanaya, N.; Kanjir, L.; Kantserov, V. A.; Kanzaki, J.; Kaplan, B.; Kaplan, L. S.; Kar, D.; Karakostas, K.; Karastathis, N.; Kareem, M. J.; Karentzos, E.; Karpov, S. N.; Karpova, Z. 
M.; Karthik, K.; Kartvelishvili, V.; Karyukhin, A. N.; Kasahara, K.; Kashif, L.; Kass, R. D.; Kastanas, A.; Kataoka, Y.; Kato, C.; Katre, A.; Katzy, J.; Kawade, K.; Kawagoe, K.; Kawamoto, T.; Kawamura, G.; Kay, E. F.; Kazanin, V. F.; Keeler, R.; Kehoe, R.; Keller, J. S.; Kellermann, E.; Kempster, J. J.; Kendrick, J.; Keoshkerian, H.; Kepka, O.; Kerševan, B. P.; Kersten, S.; Keyes, R. A.; Khader, M.; Khalil-zada, F.; Khanov, A.; Kharlamov, A. G.; Kharlamova, T.; Khodinov, A.; Khoo, T. J.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kido, S.; Kilby, C. R.; Kim, H. Y.; Kim, S. H.; Kim, Y. K.; Kimura, N.; Kind, O. M.; King, B. T.; Kirchmeier, D.; Kirk, J.; Kiryunin, A. E.; Kishimoto, T.; Kisielewska, D.; Kitali, V.; Kivernyk, O.; Kladiva, E.; Klapdor-Kleingrothaus, T.; Klein, M. H.; Klein, M.; Klein, U.; Kleinknecht, K.; Klimek, P.; Klimentov, A.; Klingenberg, R.; Klingl, T.; Klioutchnikova, T.; Klitzner, F. F.; Kluge, E.-E.; Kluit, P.; Kluth, S.; Kneringer, E.; Knoops, E. B. F. G.; Knue, A.; Kobayashi, A.; Kobayashi, D.; Kobayashi, T.; Kobel, M.; Kocian, M.; Kodys, P.; Koffas, T.; Koffeman, E.; Köhler, N. M.; Koi, T.; Kolb, M.; Koletsou, I.; Kondo, T.; Kondrashova, N.; Köneke, K.; König, A. C.; Kono, T.; Konoplich, R.; Konstantinidis, N.; Konya, B.; Kopeliansky, R.; Koperny, S.; Kopp, A. K.; Korcyl, K.; Kordas, K.; Korn, A.; Korol, A. A.; Korolkov, I.; Korolkova, E. V.; Kortner, O.; Kortner, S.; Kosek, T.; Kostyukhin, V. V.; Kotwal, A.; Koulouris, A.; Kourkoumeli-Charalampidi, A.; Kourkoumelis, C.; Kourlitis, E.; Kouskoura, V.; Kowalewska, A. B.; Kowalewski, R.; Kowalski, T. Z.; Kozakai, C.; Kozanecki, W.; Kozhin, A. S.; Kramarenko, V. A.; Kramberger, G.; Krasnopevtsev, D.; Krasny, M. W.; Krasznahorkay, A.; Krauss, D.; Kremer, J. A.; Kretzschmar, J.; Kreutzfeldt, K.; Krieger, P.; Krizka, K.; Kroeninger, K.; Kroha, H.; Kroll, J.; Kroll, J.; Kroseberg, J.; Krstic, J.; Kruchonak, U.; Krüger, H.; Krumnack, N.; Kruse, M. C.; Kubota, T.; Kucuk, H.; Kuday, S.; Kuechler, J. T.; Kuehn, S.; Kugel, A.; Kuger, F.; Kuhl, T.; Kukhtin, V.; Kukla, R.; Kulchitsky, Y.; Kuleshov, S.; Kulinich, Y. P.; Kuna, M.; Kunigo, T.; Kupco, A.; Kupfer, T.; Kuprash, O.; Kurashige, H.; Kurchaninov, L. L.; Kurochkin, Y. A.; Kurth, M. G.; Kuwertz, E. S.; Kuze, M.; Kvita, J.; Kwan, T.; Kyriazopoulos, D.; La Rosa, A.; La Rosa Navarro, J. L.; La Rotonda, L.; La Ruffa, F.; Lacasta, C.; Lacava, F.; Lacey, J.; Lack, D. P. J.; Lacker, H.; Lacour, D.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lagouri, T.; Lai, S.; Lammers, S.; Lampl, W.; Lançon, E.; Landgraf, U.; Landon, M. P. J.; Lanfermann, M. C.; Lang, V. S.; Lange, J. C.; Langenberg, R. J.; Lankford, A. J.; Lanni, F.; Lantzsch, K.; Lanza, A.; Lapertosa, A.; Laplace, S.; Laporte, J. F.; Lari, T.; Lasagni Manghi, F.; Lassnig, M.; Lau, T. S.; Laurelli, P.; Lavrijsen, W.; Law, A. T.; Laycock, P.; Lazovich, T.; Lazzaroni, M.; Le, B.; Le Dortz, O.; Le Guirriec, E.; Le Quilleuc, E. P.; LeBlanc, M.; LeCompte, T.; Ledroit-Guillon, F.; Lee, C. A.; Lee, G. R.; Lee, S. C.; Lee, L.; Lefebvre, B.; Lefebvre, G.; Lefebvre, M.; Legger, F.; Leggett, C.; Lehmann Miotto, G.; Lei, X.; Leight, W. A.; Leite, M. A. L.; Leitner, R.; Lellouch, D.; Lemmer, B.; Leney, K. J. C.; Lenz, T.; Lenzi, B.; Leone, R.; Leone, S.; Leonidopoulos, C.; Lerner, G.; Leroy, C.; Les, R.; Lesage, A. A. J.; Lester, C. G.; Levchenko, M.; Levêque, J.; Levin, D.; Levinson, L. 
J.; Levy, M.; Lewis, D.; Li, B.; Li, H.; Li, L.; Li, Q.; Li, Q.; Li, S.; Li, X.; Li, Y.; Liang, Z.; Liberti, B.; Liblong, A.; Lie, K.; Liebal, J.; Liebig, W.; Limosani, A.; Lin, C. Y.; Lin, K.; Lin, S. C.; Lin, T. H.; Linck, R. A.; Lindquist, B. E.; Lionti, A. E.; Lipeles, E.; Lipniacka, A.; Lisovyi, M.; Liss, T. M.; Lister, A.; Litke, A. M.; Liu, B.; Liu, H.; Liu, H.; Liu, J. K. K.; Liu, J.; Liu, J. B.; Liu, K.; Liu, L.; Liu, M.; Liu, Y. L.; Liu, Y.; Livan, M.; Lleres, A.; Llorente Merino, J.; Lloyd, S. L.; Lo, C. Y.; Lo Sterzo, F.; Lobodzinska, E. M.; Loch, P.; Loebinger, F. K.; Loesle, A.; Loew, K. M.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Long, B. A.; Long, J. D.; Long, R. E.; Longo, L.; Looper, K. A.; Lopez, J. A.; Lopez Paz, I.; Lopez Solis, A.; Lorenz, J.; Lorenzo Martinez, N.; Losada, M.; Lösel, P. J.; Lou, X.; Lounis, A.; Love, J.; Love, P. A.; Lu, H.; Lu, N.; Lu, Y. J.; Lubatti, H. J.; Luci, C.; Lucotte, A.; Luedtke, C.; Luehring, F.; Lukas, W.; Luminari, L.; Lundberg, O.; Lund-Jensen, B.; Lutz, M. S.; Luzi, P. M.; Lynn, D.; Lysak, R.; Lytken, E.; Lyu, F.; Lyubushkin, V.; Ma, H.; Ma, L. L.; Ma, Y.; Maccarrone, G.; Macchiolo, A.; Macdonald, C. M.; Maček, B.; Machado Miguens, J.; Madaffari, D.; Madar, R.; Mader, W. F.; Madsen, A.; Madysa, N.; Maeda, J.; Maeland, S.; Maeno, T.; Maevskiy, A. S.; Magerl, V.; Maiani, C.; Maidantchik, C.; Maier, T.; Maio, A.; Majersky, O.; Majewski, S.; Makida, Y.; Makovec, N.; Malaescu, B.; Malecki, Pa.; Maleev, V. P.; Malek, F.; Mallik, U.; Malon, D.; Malone, C.; Maltezos, S.; Malyukov, S.; Mamuzic, J.; Mancini, G.; Mandić, I.; Maneira, J.; Manhaes de Andrade Filho, L.; Manjarres Ramos, J.; Mankinen, K. H.; Mann, A.; Manousos, A.; Mansoulie, B.; Mansour, J. D.; Mantifel, R.; Mantoani, M.; Manzoni, S.; Mapelli, L.; Marceca, G.; March, L.; Marchese, L.; Marchiori, G.; Marcisovsky, M.; Marin Tobon, C. A.; Marjanovic, M.; Marley, D. E.; Marroquim, F.; Marsden, S. P.; Marshall, Z.; Martensson, M. U. F.; Marti-Garcia, S.; Martin, C. B.; Martin, T. A.; Martin, V. J.; Martin dit Latour, B.; Martinez, M.; Martinez Outschoorn, V. I.; Martin-Haugh, S.; Martoiu, V. S.; Martyniuk, A. C.; Marzin, A.; Masetti, L.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A. L.; Mason, L. H.; Massa, L.; Mastrandrea, P.; Mastroberardino, A.; Masubuchi, T.; Mättig, P.; Maurer, J.; Maxfield, S. J.; Maximov, D. A.; Mazini, R.; Maznas, I.; Mazza, S. M.; Mc Fadden, N. C.; Mc Goldrick, G.; Mc Kee, S. P.; McCarn, A.; McCarthy, R. L.; McCarthy, T. G.; McClymont, L. I.; McDonald, E. F.; Mcfayden, J. A.; Mchedlidze, G.; McMahon, S. J.; McNamara, P. C.; McNicol, C. J.; McPherson, R. A.; Meehan, S.; Megy, T. J.; Mehlhase, S.; Mehta, A.; Meideck, T.; Meier, K.; Meirose, B.; Melini, D.; Mellado Garcia, B. R.; Mellenthin, J. D.; Melo, M.; Meloni, F.; Melzer, A.; Menary, S. B.; Meng, L.; Meng, X. T.; Mengarelli, A.; Menke, S.; Meoni, E.; Mergelmeyer, S.; Merlassino, C.; Mermod, P.; Merola, L.; Meroni, C.; Merritt, F. S.; Messina, A.; Metcalfe, J.; Mete, A. S.; Meyer, C.; Meyer, J.-P.; Meyer, J.; Meyer Zu Theenhausen, H.; Miano, F.; Middleton, R. P.; Miglioranzi, S.; Mijović, L.; Mikenberg, G.; Mikestikova, M.; Mikuž, M.; Milesi, M.; Milic, A.; Millar, D. A.; Miller, D. W.; Mills, C.; Milov, A.; Milstead, D. A.; Minaenko, A. A.; Minami, Y.; Minashvili, I. A.; Mincer, A. I.; Mindur, B.; Mineev, M.; Minegishi, Y.; Ming, Y.; Mir, L. M.; Mirto, A.; Mistry, K. P.; Mitani, T.; Mitrevski, J.; Mitsou, V. A.; Miucci, A.; Miyagawa, P. S.; Mizukami, A.; Mjörnmark, J. 
U.; Mkrtchyan, T.; Mlynarikova, M.; Moa, T.; Mochizuki, K.; Mogg, P.; Mohapatra, S.; Molander, S.; Moles-Valls, R.; Mondragon, M. C.; Mönig, K.; Monk, J.; Monnier, E.; Montalbano, A.; Montejo Berlingen, J.; Monticelli, F.; Monzani, S.; Moore, R. W.; Morange, N.; Moreno, D.; Moreno Llácer, M.; Morettini, P.; Morgenstern, S.; Mori, D.; Mori, T.; Morii, M.; Morinaga, M.; Morisbak, V.; Morley, A. K.; Mornacchi, G.; Morris, J. D.; Morvaj, L.; Moschovakos, P.; Mosidze, M.; Moss, H. J.; Moss, J.; Motohashi, K.; Mount, R.; Mountricha, E.; Moyse, E. J. W.; Muanza, S.; Mueller, F.; Mueller, J.; Mueller, R. S. P.; Muenstermann, D.; Mullen, P.; Mullier, G. A.; Munoz Sanchez, F. J.; Murray, W. J.; Musheghyan, H.; Muškinja, M.; Myagkov, A. G.; Myska, M.; Nachman, B. P.; Nackenhorst, O.; Nagai, K.; Nagai, R.; Nagano, K.; Nagasaka, Y.; Nagata, K.; Nagel, M.; Nagy, E.; Nairz, A. M.; Nakahama, Y.; Nakamura, K.; Nakamura, T.; Nakano, I.; Naranjo Garcia, R. F.; Narayan, R.; Narrias Villar, D. I.; Naryshkin, I.; Naumann, T.; Navarro, G.; Nayyar, R.; Neal, H. A.; Nechaeva, P. Yu.; Neep, T. J.; Negri, A.; Negrini, M.; Nektarijevic, S.; Nellist, C.; Nelson, A.; Nelson, M. E.; Nemecek, S.; Nemethy, P.; Nessi, M.; Neubauer, M. S.; Neumann, M.; Newman, P. R.; Ng, T. Y.; Ng, Y. S.; Nguyen Manh, T.; Nickerson, R. B.; Nicolaidou, R.; Nielsen, J.; Nikiforou, N.; Nikolaenko, V.; Nikolic-Audit, I.; Nikolopoulos, K.; Nilsson, P.; Ninomiya, Y.; Nisati, A.; Nishu, N.; Nisius, R.; Nitsche, I.; Nitta, T.; Nobe, T.; Noguchi, Y.; Nomachi, M.; Nomidis, I.; Nomura, M. A.; Nooney, T.; Nordberg, M.; Norjoharuddeen, N.; Novgorodova, O.; Nozaki, M.; Nozka, L.; Ntekas, K.; Nurse, E.; Nuti, F.; O'connor, K.; O'Neil, D. C.; O'Rourke, A. A.; O'Shea, V.; Oakham, F. G.; Oberlack, H.; Obermann, T.; Ocariz, J.; Ochi, A.; Ochoa, I.; Ochoa-Ricoux, J. P.; Oda, S.; Odaka, S.; Oh, A.; Oh, S. H.; Ohm, C. C.; Ohman, H.; Oide, H.; Okawa, H.; Okumura, Y.; Okuyama, T.; Olariu, A.; Oleiro Seabra, L. F.; Olivares Pino, S. A.; Oliveira Damazio, D.; Olsson, M. J. R.; Olszewski, A.; Olszowska, J.; Onofre, A.; Onogi, K.; Onyisi, P. U. E.; Oppen, H.; Oreglia, M. J.; Oren, Y.; Orestano, D.; Orlando, N.; Orr, R. S.; Osculati, B.; Ospanov, R.; Otero y Garzon, G.; Otono, H.; Ouchrif, M.; Ould-Saada, F.; Ouraou, A.; Oussoren, K. P.; Ouyang, Q.; Owen, M.; Owen, R. E.; Ozcan, V. E.; Ozturk, N.; Pachal, K.; Pacheco Pages, A.; Pacheco Rodriguez, L.; Padilla Aranda, C.; Pagan Griso, S.; Paganini, M.; Paige, F.; Palacino, G.; Palazzo, S.; Palestini, S.; Palka, M.; Pallin, D.; Panagiotopoulou, E. St.; Panagoulias, I.; Pandini, C. E.; Panduro Vazquez, J. G.; Pani, P.; Panitkin, S.; Pantea, D.; Paolozzi, L.; Papadopoulou, Th. D.; Papageorgiou, K.; Paramonov, A.; Paredes Hernandez, D.; Parker, A. J.; Parker, M. A.; Parker, K. A.; Parodi, F.; Parsons, J. A.; Parzefall, U.; Pascuzzi, V. R.; Pasner, J. M.; Pasqualucci, E.; Passaggio, S.; Pastore, Fr.; Pataraia, S.; Pater, J. R.; Pauly, T.; Pearson, B.; Pedraza Lopez, S.; Pedro, R.; Peleganchuk, S. V.; Penc, O.; Peng, C.; Peng, H.; Penwell, J.; Peralva, B. S.; Perego, M. M.; Perepelitsa, D. V.; Peri, F.; Perini, L.; Pernegger, H.; Perrella, S.; Peschke, R.; Peshekhonov, V. D.; Peters, K.; Peters, R. F. Y.; Petersen, B. A.; Petersen, T. C.; Petit, E.; Petridis, A.; Petridou, C.; Petroff, P.; Petrolo, E.; Petrov, M.; Petrucci, F.; Pettersson, N. E.; Peyaud, A.; Pezoa, R.; Phillips, F. H.; Phillips, P. W.; Piacquadio, G.; Pianori, E.; Picazio, A.; Pickering, M. A.; Piegaia, R.; Pilcher, J. E.; Pilkington, A. 
D.; Pinamonti, M.; Pinfold, J. L.; Pirumov, H.; Pitt, M.; Plazak, L.; Pleier, M.-A.; Pleskot, V.; Plotnikova, E.; Pluth, D.; Podberezko, P.; Poettgen, R.; Poggi, R.; Poggioli, L.; Pogrebnyak, I.; Pohl, D.; Pokharel, I.; Polesello, G.; Poley, A.; Policicchio, A.; Polifka, R.; Polini, A.; Pollard, C. S.; Polychronakos, V.; Pommès, K.; Ponomarenko, D.; Pontecorvo, L.; Popeneciu, G. A.; Portillo Quintero, D. M.; Pospisil, S.; Potamianos, K.; Potrap, I. N.; Potter, C. J.; Potti, H.; Poulsen, T.; Poveda, J.; Pozo Astigarraga, M. E.; Pralavorio, P.; Pranko, A.; Prell, S.; Price, D.; Primavera, M.; Prince, S.; Proklova, N.; Prokofiev, K.; Prokoshin, F.; Protopopescu, S.; Proudfoot, J.; Przybycien, M.; Puri, A.; Puzo, P.; Qian, J.; Qin, G.; Qin, Y.; Quadt, A.; Queitsch-Maitland, M.; Quilty, D.; Raddum, S.; Radeka, V.; Radescu, V.; Radhakrishnan, S. K.; Radloff, P.; Rados, P.; Ragusa, F.; Rahal, G.; Raine, J. A.; Rajagopalan, S.; Rangel-Smith, C.; Rashid, T.; Raspopov, S.; Ratti, M. G.; Rauch, D. M.; Rauscher, F.; Rave, S.; Ravinovich, I.; Rawling, J. H.; Raymond, M.; Read, A. L.; Readioff, N. P.; Reale, M.; Rebuzzi, D. M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reed, R. G.; Reeves, K.; Rehnisch, L.; Reichert, J.; Reiss, A.; Rembser, C.; Ren, H.; Rescigno, M.; Resconi, S.; Resseguie, E. D.; Rettie, S.; Reynolds, E.; Rezanova, O. L.; Reznicek, P.; Rezvani, R.; Richter, R.; Richter, S.; Richter-Was, E.; Ricken, O.; Ridel, M.; Rieck, P.; Riegel, C. J.; Rieger, J.; Rifki, O.; Rijssenbeek, M.; Rimoldi, A.; Rimoldi, M.; Rinaldi, L.; Ripellino, G.; Ristić, B.; Ritsch, E.; Riu, I.; Rizatdinova, F.; Rizvi, E.; Rizzi, C.; Roberts, R. T.; Robertson, S. H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, J. E. M.; Robson, A.; Rocco, E.; Roda, C.; Rodina, Y.; Rodriguez Bosca, S.; Rodriguez Perez, A.; Rodriguez Rodriguez, D.; Roe, S.; Rogan, C. S.; Røhne, O.; Roloff, J.; Romaniouk, A.; Romano, M.; Romano Saez, S. M.; Romero Adam, E.; Rompotis, N.; Ronzani, M.; Roos, L.; Rosati, S.; Rosbach, K.; Rose, P.; Rosien, N.-A.; Rossi, E.; Rossi, L. P.; Rosten, J. H. N.; Rosten, R.; Rotaru, M.; Rothberg, J.; Rousseau, D.; Roy, D.; Rozanov, A.; Rozen, Y.; Ruan, X.; Rubbo, F.; Rühr, F.; Ruiz-Martinez, A.; Rurikova, Z.; Rusakovich, N. A.; Russell, H. L.; Rutherfoord, J. P.; Ruthmann, N.; Rüttinger, E. M.; Ryabov, Y. F.; Rybar, M.; Rybkin, G.; Ryu, S.; Ryzhov, A.; Rzehorz, G. F.; Saavedra, A. F.; Sabato, G.; Sacerdoti, S.; Sadrozinski, H. F.-W.; Sadykov, R.; Safai Tehrani, F.; Saha, P.; Sahinsoy, M.; Saimpert, M.; Saito, M.; Saito, T.; Sakamoto, H.; Sakurai, Y.; Salamanna, G.; Salazar Loyola, J. E.; Salek, D.; Sales De Bruin, P. H.; Salihagic, D.; Salnikov, A.; Salt, J.; Salvatore, D.; Salvatore, F.; Salvucci, A.; Salzburger, A.; Sammel, D.; Sampsonidis, D.; Sampsonidou, D.; Sánchez, J.; Sanchez Martinez, V.; Sanchez Pineda, A.; Sandaker, H.; Sandbach, R. L.; Sander, C. O.; Sandhoff, M.; Sandoval, C.; Sankey, D. P. C.; Sannino, M.; Sano, Y.; Sansoni, A.; Santoni, C.; Santos, H.; Santoyo Castillo, I.; Sapronov, A.; Saraiva, J. G.; Sarrazin, B.; Sasaki, O.; Sato, K.; Sauvan, E.; Savage, G.; Savard, P.; Savic, N.; Sawyer, C.; Sawyer, L.; Saxon, J.; Sbarra, C.; Sbrizzi, A.; Scanlon, T.; Scannicchio, D. A.; Schaarschmidt, J.; Schacht, P.; Schachtner, B. M.; Schaefer, D.; Schaefer, L.; Schaefer, R.; Schaeffer, J.; Schaepe, S.; Schaetzel, S.; Schäfer, U.; Schaffer, A. C.; Schaile, D.; Schamberger, R. D.; Schegelsky, V. A.; Scheirich, D.; Schenck, F.; Schernau, M.; Schiavi, C.; Schier, S.; Schildgen, L. 
K.; Schillo, C.; Schioppa, M.; Schlenker, S.; Schmidt-Sommerfeld, K. R.; Schmieden, K.; Schmitt, C.; Schmitt, S.; Schmitz, S.; Schnoor, U.; Schoeffel, L.; Schoening, A.; Schoenrock, B. D.; Schopf, E.; Schott, M.; Schouwenberg, J. F. P.; Schovancova, J.; Schramm, S.; Schuh, N.; Schulte, A.; Schultens, M. J.; Schultz-Coulon, H.-C.; Schulz, H.; Schumacher, M.; Schumm, B. A.; Schune, Ph.; Schwartzman, A.; Schwarz, T. A.; Schweiger, H.; Schwemling, Ph.; Schwienhorst, R.; Schwindling, J.; Sciandra, A.; Sciolla, G.; Scornajenghi, M.; Scuri, F.; Scutti, F.; Searcy, J.; Seema, P.; Seidel, S. C.; Seiden, A.; Seixas, J. M.; Sekhniaidze, G.; Sekhon, K.; Sekula, S. J.; Semprini-Cesari, N.; Senkin, S.; Serfon, C.; Serin, L.; Serkin, L.; Sessa, M.; Seuster, R.; Severini, H.; Šfiligoj, T.; Sforza, F.; Sfyrla, A.; Shabalina, E.; Shaikh, N. W.; Shan, L. Y.; Shang, R.; Shank, J. T.; Shapiro, M.; Shatalov, P. B.; Shaw, K.; Shaw, S. M.; Shcherbakova, A.; Shehu, C. Y.; Shen, Y.; Sherafati, N.; Sherman, A. D.; Sherwood, P.; Shi, L.; Shimizu, S.; Shimmin, C. O.; Shimojima, M.; Shipsey, I. P. J.; Shirabe, S.; Shiyakova, M.; Shlomi, J.; Shmeleva, A.; Shoaleh Saadi, D.; Shochet, M. J.; Shojaii, S.; Shope, D. R.; Shrestha, S.; Shulga, E.; Shupe, M. A.; Sicho, P.; Sickles, A. M.; Sidebo, P. E.; Sideras Haddad, E.; Sidiropoulou, O.; Sidoti, A.; Siegert, F.; Sijacki, Dj.; Silva, J.; Silverstein, S. B.; Simak, V.; Simic, L.; Simion, S.; Simioni, E.; Simmons, B.; Simon, M.; Sinervo, P.; Sinev, N. B.; Sioli, M.; Siragusa, G.; Siral, I.; Sivoklokov, S. Yu.; Sjölin, J.; Skinner, M. B.; Skubic, P.; Slater, M.; Slavicek, T.; Slawinska, M.; Sliwa, K.; Slovak, R.; Smakhtin, V.; Smart, B. H.; Smiesko, J.; Smirnov, N.; Smirnov, S. Yu.; Smirnov, Y.; Smirnova, L. N.; Smirnova, O.; Smith, J. W.; Smith, M. N. K.; Smith, R. W.; Smizanska, M.; Smolek, K.; Snesarev, A. A.; Snyder, I. M.; Snyder, S.; Sobie, R.; Socher, F.; Soffer, A.; Søgaard, A.; Soh, D. A.; Sokhrannyi, G.; Solans Sanchez, C. A.; Solar, M.; Soldatov, E. Yu.; Soldevila, U.; Solodkov, A. A.; Soloshenko, A.; Solovyanov, O. V.; Solovyev, V.; Sommer, P.; Son, H.; Sopczak, A.; Sosa, D.; Sotiropoulou, C. L.; Sottocornola, S.; Soualah, R.; Soukharev, A. M.; South, D.; Sowden, B. C.; Spagnolo, S.; Spalla, M.; Spangenberg, M.; Spanò, F.; Sperlich, D.; Spettel, F.; Spieker, T. M.; Spighi, R.; Spigo, G.; Spiller, L. A.; Spousta, M.; Denis, R. D. St.; Stabile, A.; Stamen, R.; Stamm, S.; Stanecka, E.; Stanek, R. W.; Stanescu, C.; Stanitzki, M. M.; Stapf, B. S.; Stapnes, S.; Starchenko, E. A.; Stark, G. H.; Stark, J.; Stark, S. H.; Staroba, P.; Starovoitov, P.; Stärz, S.; Staszewski, R.; Stegler, M.; Steinberg, P.; Stelzer, B.; Stelzer, H. J.; Stelzer-Chilton, O.; Stenzel, H.; Stevenson, T. J.; Stewart, G. A.; Stockton, M. C.; Stoebe, M.; Stoicea, G.; Stolte, P.; Stonjek, S.; Stradling, A. R.; Straessner, A.; Stramaglia, M. E.; Strandberg, J.; Strandberg, S.; Strauss, M.; Strizenec, P.; Ströhmer, R.; Strom, D. M.; Stroynowski, R.; Strubig, A.; Stucci, S. A.; Stugu, B.; Styles, N. A.; Su, D.; Su, J.; Suchek, S.; Sugaya, Y.; Suk, M.; Sulin, V. V.; Sultan, DMS; Sultansoy, S.; Sumida, T.; Sun, S.; Sun, X.; Suruliz, K.; Suster, C. J. E.; Sutton, M. R.; Suzuki, S.; Svatos, M.; Swiatlowski, M.; Swift, S. P.; Sykora, I.; Sykora, T.; Ta, D.; Tackmann, K.; Taenzer, J.; Taffard, A.; Tafirout, R.; Tahirovic, E.; Taiblum, N.; Takai, H.; Takashima, R.; Takasugi, E. H.; Takeda, K.; Takeshita, T.; Takubo, Y.; Talby, M.; Talyshev, A. 
A.; Tanaka, J.; Tanaka, M.; Tanaka, R.; Tanaka, S.; Tanioka, R.; Tannenwald, B. B.; Tapia Araya, S.; Tapprogge, S.; Tarem, S.; Tartarelli, G. F.; Tas, P.; Tasevsky, M.; Tashiro, T.; Tassi, E.; Tavares Delgado, A.; Tayalati, Y.; Taylor, A. C.; Taylor, A. J.; Taylor, G. N.; Taylor, P. T. E.; Taylor, W.; Teixeira-Dias, P.; Temple, D.; Ten Kate, H.; Teng, P. K.; Teoh, J. J.; Tepel, F.; Terada, S.; Terashi, K.; Terron, J.; Terzo, S.; Testa, M.; Teuscher, R. J.; Thais, S. J.; Theveneaux-Pelzer, T.; Thiele, F.; Thomas, J. P.; Thomas-Wilsker, J.; Thompson, P. D.; Thompson, A. S.; Thomsen, L. A.; Thomson, E.; Tian, Y.; Tibbetts, M. J.; Ticse Torres, R. E.; Tikhomirov, V. O.; Tikhonov, Yu. A.; Timoshenko, S.; Tipton, P.; Tisserant, S.; Todome, K.; Todorova-Nova, S.; Todt, S.; Tojo, J.; Tokár, S.; Tokushuku, K.; Tolley, E.; Tomlinson, L.; Tomoto, M.; Tompkins, L.; Toms, K.; Tong, B.; Tornambe, P.; Torrence, E.; Torres, H.; Torró Pastor, E.; Toth, J.; Touchard, F.; Tovey, D. R.; Treado, C. J.; Trefzger, T.; Tresoldi, F.; Tricoli, A.; Trigger, I. M.; Trincaz-Duvoid, S.; Tripiana, M. F.; Trischuk, W.; Trocmé, B.; Trofymov, A.; Troncon, C.; Trottier-McDonald, M.; Trovatelli, M.; Truong, L.; Trzebinski, M.; Trzupek, A.; Tsang, K. W.; Tseng, J. C.-L.; Tsiareshka, P. V.; Tsirintanis, N.; Tsiskaridze, S.; Tsiskaridze, V.; Tskhadadze, E. G.; Tsukerman, I. I.; Tsulaia, V.; Tsuno, S.; Tsybychev, D.; Tu, Y.; Tudorache, A.; Tudorache, V.; Tulbure, T. T.; Tuna, A. N.; Turchikhin, S.; Turgeman, D.; Turk Cakir, I.; Turra, R.; Tuts, P. M.; Ucchielli, G.; Ueda, I.; Ughetto, M.; Ukegawa, F.; Unal, G.; Undrus, A.; Unel, G.; Ungaro, F. C.; Unno, Y.; Uno, K.; Unverdorben, C.; Urban, J.; Urquijo, P.; Urrejola, P.; Usai, G.; Usui, J.; Vacavant, L.; Vacek, V.; Vachon, B.; Vadla, K. O. H.; Vaidya, A.; Valderanis, C.; Valdes Santurio, E.; Valente, M.; Valentinetti, S.; Valero, A.; Valéry, L.; Valkar, S.; Vallier, A.; Valls Ferrer, J. A.; Van Den Wollenberg, W.; van der Graaf, H.; van Gemmeren, P.; Van Nieuwkoop, J.; van Vulpen, I.; van Woerden, M. C.; Vanadia, M.; Vandelli, W.; Vaniachine, A.; Vankov, P.; Vardanyan, G.; Vari, R.; Varnes, E. W.; Varni, C.; Varol, T.; Varouchas, D.; Vartapetian, A.; Varvell, K. E.; Vasquez, J. G.; Vasquez, G. A.; Vazeille, F.; Vazquez Furelos, D.; Vazquez Schroeder, T.; Veatch, J.; Veeraraghavan, V.; Veloce, L. M.; Veloso, F.; Veneziano, S.; Ventura, A.; Venturi, M.; Venturi, N.; Venturini, A.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, A. T.; Vermeulen, J. C.; Vetterli, M. C.; Viaux Maira, N.; Viazlo, O.; Vichou, I.; Vickey, T.; Vickey Boeriu, O. E.; Viehhauser, G. H. A.; Viel, S.; Vigani, L.; Villa, M.; Villaplana Perez, M.; Vilucchi, E.; Vincter, M. G.; Vinogradov, V. B.; Vishwakarma, A.; Vittori, C.; Vivarelli, I.; Vlachos, S.; Vogel, M.; Vokac, P.; Volpi, G.; von der Schmitt, H.; von Toerne, E.; Vorobel, V.; Vorobev, K.; Vos, M.; Voss, R.; Vossebeld, J. H.; Vranjes, N.; Vranjes Milosavljevic, M.; Vrba, V.; Vreeswijk, M.; Vuillermet, R.; Vukotic, I.; Wagner, P.; Wagner, W.; Wagner-Kuhr, J.; Wahlberg, H.; Wahrmund, S.; Wakamiya, K.; Walder, J.; Walker, R.; Walkowiak, W.; Wallangen, V.; Wang, C.; Wang, C.; Wang, F.; Wang, H.; Wang, H.; Wang, J.; Wang, J.; Wang, Q.; Wang, R.-J.; Wang, R.; Wang, S. M.; Wang, T.; Wang, W.; Wang, W.; Wang, Z.; Wanotayaroj, C.; Warburton, A.; Ward, C. P.; Wardrope, D. R.; Washbrook, A.; Watkins, P. M.; Watson, A. T.; Watson, M. F.; Watts, G.; Watts, S.; Waugh, B. M.; Webb, A. F.; Webb, S.; Weber, M. S.; Weber, S. M.; Weber, S. W.; Weber, S. A.; Webster, J. 
S.; Weidberg, A. R.; Weinert, B.; Weingarten, J.; Weirich, M.; Weiser, C.; Weits, H.; Wells, P. S.; Wenaus, T.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M. D.; Werner, P.; Wessels, M.; Weston, T. D.; Whalen, K.; Whallon, N. L.; Wharton, A. M.; White, A. S.; White, A.; White, M. J.; White, R.; Whiteson, D.; Whitmore, B. W.; Wickens, F. J.; Wiedenmann, W.; Wielers, M.; Wiglesworth, C.; Wiik-Fuchs, L. A. M.; Wildauer, A.; Wilk, F.; Wilkens, H. G.; Williams, H. H.; Williams, S.; Willis, C.; Willocq, S.; Wilson, J. A.; Wingerter-Seez, I.; Winkels, E.; Winklmeier, F.; Winston, O. J.; Winter, B. T.; Wittgen, M.; Wobisch, M.; Wolf, A.; Wolf, T. M. H.; Wolff, R.; Wolter, M. W.; Wolters, H.; Wong, V. W. S.; Woods, N. L.; Worm, S. D.; Wosiek, B. K.; Wotschack, J.; Wozniak, K. W.; Wu, M.; Wu, S. L.; Wu, X.; Wu, Y.; Wyatt, T. R.; Wynne, B. M.; Xella, S.; Xi, Z.; Xia, L.; Xu, D.; Xu, L.; Xu, T.; Xu, W.; Yabsley, B.; Yacoob, S.; Yamaguchi, D.; Yamaguchi, Y.; Yamamoto, A.; Yamamoto, S.; Yamanaka, T.; Yamane, F.; Yamatani, M.; Yamazaki, T.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, H.; Yang, Y.; Yang, Z.; Yao, W.-M.; Yap, Y. C.; Yasu, Y.; Yatsenko, E.; Yau Wong, K. H.; Ye, J.; Ye, S.; Yeletskikh, I.; Yigitbasi, E.; Yildirim, E.; Yorita, K.; Yoshihara, K.; Young, C.; Young, C. J. S.; Yu, J.; Yu, J.; Yuen, S. P. Y.; Yusuff, I.; Zabinski, B.; Zacharis, G.; Zaidan, R.; Zaitsev, A. M.; Zakharchuk, N.; Zalieckas, J.; Zaman, A.; Zambito, S.; Zanzi, D.; Zeitnitz, C.; Zemaityte, G.; Zemla, A.; Zeng, J. C.; Zeng, Q.; Zenin, O.; Ženiš, T.; Zerwas, D.; Zhang, D.; Zhang, D.; Zhang, F.; Zhang, G.; Zhang, H.; Zhang, J.; Zhang, L.; Zhang, L.; Zhang, M.; Zhang, P.; Zhang, R.; Zhang, R.; Zhang, X.; Zhang, Y.; Zhang, Z.; Zhao, X.; Zhao, Y.; Zhao, Z.; Zhemchugov, A.; Zhou, B.; Zhou, C.; Zhou, L.; Zhou, M.; Zhou, M.; Zhou, N.; Zhou, Y.; Zhu, C. G.; Zhu, H.; Zhu, J.; Zhu, Y.; Zhuang, X.; Zhukov, K.; Zibell, A.; Zieminska, D.; Zimine, N. I.; Zimmermann, C.; Zimmermann, S.; Zinonos, Z.; Zinser, M.; Ziolkowski, M.; Živković, L.; Zobernig, G.; Zoccoli, A.; Zou, R.; zur Nedden, M.; Zwalinski, L.

    2018-01-01

    Results of a search for new phenomena in final states with an energetic jet and large missing transverse momentum are reported. The search uses proton-proton collision data corresponding to an integrated luminosity of 36.1 fb-1 at a centre-of-mass energy of 13 TeV collected in 2015 and 2016 with the ATLAS detector at the Large Hadron Collider. Events are required to have at least one jet with a transverse momentum above 250 GeV and no leptons (e or μ). Several signal regions are considered with increasing requirements on the missing transverse momentum above 250 GeV. Good agreement is observed between the number of events in data and Standard Model predictions. The results are translated into exclusion limits in models with pair-produced weakly interacting dark-matter candidates, large extra spatial dimensions, and supersymmetric particles in several compressed scenarios.

  14. Search for dark matter and other new phenomena in events with an energetic jet and large missing transverse momentum using the ATLAS detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aaboud, M.; Aad, G.; Abbott, B.

    Results of a search for new phenomena in final states with an energetic jet and large missing transverse momentum are reported. The search uses proton-proton collision data corresponding to an integrated luminosity of 36.1 fb-1 at a centre-of-mass energy of 13 TeV collected in 2015 and 2016 with the ATLAS detector at the Large Hadron Collider. Events are required to have at least one jet with a transverse momentum above 250 GeV and no leptons (e or μ). Several signal regions are considered with increasing requirements on the missing transverse momentum above 250 GeV. Good agreement is observed between the number of events in data and Standard Model predictions. In conclusion, the results are translated into exclusion limits in models with pair-produced weakly interacting dark-matter candidates, large extra spatial dimensions, and supersymmetric particles in several compressed scenarios.

  15. Search for dark matter and other new phenomena in events with an energetic jet and large missing transverse momentum using the ATLAS detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aaboud, M.; Aad, G.; Abbott, B.

    Results of a search for new phenomena in final states with an energetic jet and large missing transverse momentum are reported. The search uses proton-proton collision data corresponding to an integrated luminosity of 36.1 fb-1 at a centre-of-mass energy of 13 TeV collected in 2015 and 2016 with the ATLAS detector at the Large Hadron Collider. Events are required to have at least one jet with a transverse momentum above 250 GeV and no leptons (e or μ). Several signal regions are considered with increasing requirements on the missing transverse momentum above 250 GeV. Good agreement is observed between the number of events in data and Standard Model predictions. The results are translated into exclusion limits in models with pair-produced weakly interacting dark-matter candidates, large extra spatial dimensions, and supersymmetric particles in several compressed scenarios.

  16. Search for dark matter and other new phenomena in events with an energetic jet and large missing transverse momentum using the ATLAS detector

    DOE PAGES

    Aaboud, M.; Aad, G.; Abbott, B.; ...

    2018-01-25

    Results of a search for new phenomena in final states with an energetic jet and large missing transverse momentum are reported. The search uses proton-proton collision data corresponding to an integrated luminosity of 36.1 fb-1 at a centre-of-mass energy of 13 TeV collected in 2015 and 2016 with the ATLAS detector at the Large Hadron Collider. Events are required to have at least one jet with a transverse momentum above 250 GeV and no leptons (e or μ). Several signal regions are considered with increasing requirements on the missing transverse momentum above 250 GeV. Good agreement is observed between the number of events in data and Standard Model predictions. The results are translated into exclusion limits in models with pair-produced weakly interacting dark-matter candidates, large extra spatial dimensions, and supersymmetric particles in several compressed scenarios.

  17. Search for dark matter and other new phenomena in events with an energetic jet and large missing transverse momentum using the ATLAS detector

    DOE PAGES

    Aaboud, M.; Aad, G.; Abbott, B.; ...

    2018-01-25

    Results of a search for new phenomena in final states with an energetic jet and large missing transverse momentum are reported. The search uses proton-proton collision data corresponding to an integrated luminosity of 36.1 fb-1 at a centre-of-mass energy of 13 TeV collected in 2015 and 2016 with the ATLAS detector at the Large Hadron Collider. Events are required to have at least one jet with a transverse momentum above 250 GeV and no leptons (e or μ). Several signal regions are considered with increasing requirements on the missing transverse momentum above 250 GeV. Good agreement is observed between the number of events in data and Standard Model predictions. In conclusion, the results are translated into exclusion limits in models with pair-produced weakly interacting dark-matter candidates, large extra spatial dimensions, and supersymmetric particles in several compressed scenarios.

  18. Square Kilometre Array Science Data Processing

    NASA Astrophysics Data System (ADS)

    Nikolic, Bojan; SDP Consortium, SKA

    2014-04-01

    The Square Kilometre Array (SKA) is planned to be, by a large factor, the largest and most sensitive radio telescope ever constructed. The first phase of the telescope (SKA1), now in the design phase, will in itself represent a major leap in capabilities compared to current facilities. These advances are to a large extent being made possible by advances in available computer processing power, so that larger numbers of smaller, simpler and cheaper receptors can be used. As a result of greater reliance and demands on computing, ICT is becoming an ever more integral part of the telescope. The Science Data Processor (SDP) is the part of the SKA system responsible for imaging, calibration, pulsar timing, confirmation of pulsar candidates, derivation of further data products, archiving and providing the data to the users. It will accept visibilities at data rates of several TB/s and require processing power for imaging in the range of 100 petaFLOPS to ~1 exaFLOPS, putting SKA1 into the regime of exascale radio astronomy. In my talk I will present the overall SKA system requirements and how they drive these high data throughput and processing requirements. Some of the key challenges for the design of the SDP are: (1) identifying sufficient parallelism to utilise the very large numbers of separate compute cores that will be required to provide exascale computing throughput; (2) efficiently managing the high internal data flow rates; (3) a conceptual architecture and software engineering approach that will allow adaptation of the algorithms as we learn about the telescope and the atmosphere during the commissioning and operational phases; and (4) system management that will deal gracefully with (inevitably frequent) failures of individual units of the processing system. I will also present possible initial architectures for the SDP system that attempt to address these and other challenges.

  19. GA-optimization for rapid prototype system demonstration

    NASA Technical Reports Server (NTRS)

    Kim, Jinwoo; Zeigler, Bernard P.

    1994-01-01

    An application of the Genetic Algorithm (GA) is discussed. A novel scheme of Hierarchical GA was developed to solve complicated engineering problems which require optimization of a large number of parameters with high precision. High level GAs search for few parameters which are much more sensitive to the system performance. Low level GAs search in more detail and employ a greater number of parameters for further optimization. Therefore, the complexity of the search is decreased and the computing resources are used more efficiently.
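
    The hierarchical scheme described above can be illustrated with a minimal two-level sketch: a high-level GA first searches the few sensitive parameters coarsely, and a low-level GA then refines a larger set of detail parameters with the coarse result held fixed. The fitness function, the parameter split and all settings below are illustrative assumptions, not the configuration used in the paper.

      # Minimal two-level (hierarchical) GA sketch; objective and parameter split are hypothetical.
      import random

      def fitness(params):
          # hypothetical objective to maximize (peak when every parameter equals 0.5)
          return -sum((p - 0.5) ** 2 for p in params)

      def run_ga(n_params, generations, pop_size, fixed=()):
          pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
          for _ in range(generations):
              ranked = sorted(pop, key=lambda ind: fitness(list(fixed) + ind), reverse=True)
              parents = ranked[: pop_size // 2]              # truncation selection
              children = []
              while len(parents) + len(children) < pop_size:
                  a, b = random.sample(parents, 2)
                  cut = random.randrange(1, n_params)        # one-point crossover
                  child = a[:cut] + b[cut:]
                  i = random.randrange(n_params)             # point mutation, clipped to [0, 1]
                  child[i] = min(1.0, max(0.0, child[i] + random.gauss(0.0, 0.05)))
                  children.append(child)
              pop = parents + children
          return max(pop, key=lambda ind: fitness(list(fixed) + ind))

      # High-level GA: few sensitive parameters, coarse search.
      coarse = run_ga(n_params=2, generations=30, pop_size=20)
      # Low-level GA: more detail parameters, refined with the coarse result held fixed.
      fine = run_ga(n_params=6, generations=100, pop_size=40, fixed=coarse)
      print("sensitive parameters:", coarse, "detail parameters:", fine)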

  20. The Bionomics and Vector Competence of Anopheles Albimanus and Anopheles Vestitipennis in Southern Belize, Central America

    DTIC Science & Technology

    2000-11-20

    can be found in large numbers throughout the Yucatan, southern Mexico and Guatemala (Kumm et al. 1943, Loyola et al. 1991, Arredondo-Jimenez et al. ...material or detritus as a nutritional source as well as plant cover for shade. Anopheles vestitipennis exhibits its highest numbers during the rainy...species appears to require plant material or detritus as a nutritional source as well as plant cover for shade. Both species also have a clear seasonal

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petach, Trevor A.; Reich, Konstantin V.; Zhang, Xiao

    Ionic liquid gating has a number of advantages over solid-state gating, especially for flexible or transparent devices and for applications requiring high carrier densities. However, the large number of charged ions near the channel inevitably results in Coulomb scattering, which limits the carrier mobility in otherwise clean systems. We develop a model for this Coulomb scattering. We then validate our model experimentally using ionic liquid gating of graphene across varying thicknesses of hexagonal boron nitride, demonstrating that disorder in the bulk ionic liquid often dominates the scattering.

  2. Monocrystalline silicon and the meta-shell approach to building x-ray astronomical optics

    NASA Astrophysics Data System (ADS)

    Zhang, William W.; Allgood, Kim D.; Biskach, Michael P.; Chan, Kai-Wing; Hlinka, Michal; Kearney, John D.; Mazzarella, James R.; McClelland, Ryan S.; Numata, Ai; Olsen, Lawrence G.; Riveros, Raul E.; Saha, Timo T.; Solly, Peter M.

    2017-08-01

    Angular resolution and photon-collecting area are the two most important factors that determine the power of an X-ray astronomical telescope. The grazing incidence nature of X-ray optics means that even a modest photon-collecting area requires an extraordinarily large mirror area. This requirement for a large mirror area is compounded by the fact that X-ray telescopes must be launched into, and operated in, outer space, which means that the mirror must be both lightweight and thin. Meanwhile the production and integration cost of a large mirror area determines the economical feasibility of a telescope. In this paper we report on a technology development program whose objective is to meet this three-fold requirement of making astronomical X-ray optics: (1) angular resolution, (2) photon-collecting area, and (3) production cost. This technology is based on precision polishing of monocrystalline silicon for making a large number of mirror segments and on the meta-shell approach to integrate these mirror segments into a mirror assembly. The meta-shell approach takes advantage of the axial or rotational symmetry of an X-ray telescope to align and bond a large number of small, lightweight mirrors into a large mirror assembly. The most important features of this technology include: (1) potential to achieve the highest possible angular resolution dictated by optical design and diffraction; and (2) capable of implementing every conceivable optical design, such as Wolter-I, Wolter-Schwarzschild, as well as other variations to one or another aspect of a telescope. The simplicity and modular nature of the process makes it highly amenable to mass production, thereby making it possible to produce very large X-ray telescopes in a reasonable amount of time and at a reasonable cost. As of June 2017, the basic validity of this approach has been demonstrated by finite element analysis of its structural, thermal, and gravity release characteristics, and by the fabrication, alignment, bonding, and X-ray testing of mirror modules. Continued work in the coming years will raise the technical readiness of this technology for use by SMEX, MIDEX, Probe, as well as major flagship missions.

  3. Monocrystalline Silicon and the Meta-Shell Approach to Building X-Ray Astronomical Optics

    NASA Technical Reports Server (NTRS)

    Zhang, William W.; Allgood, Kim D.; Biskach, Michael P.; Chan, Kai-Wing; Hlinka, Michal; Kearney, John D.; Mazzarella, James R.; McClelland, Ryan S.; Numata, Ai; Olsen, Lawrence G.

    2017-01-01

    Angular resolution and photon-collecting area are the two most important factors that determine the power of an X-ray astronomical telescope. The grazing incidence nature of X-ray optics means that even a modest photon-collecting area requires an extraordinarily large mirror area. This requirement for a large mirror area is compounded by the fact that X-ray telescopes must be launched into, and operated in, outer space, which means that the mirror must be both lightweight and thin. Meanwhile the production and integration cost of a large mirror area determines the economical feasibility of a telescope. In this paper we report on a technology development program whose objective is to meet this three-fold requirement of making astronomical X-ray optics: (1) angular resolution, (2) photon-collecting area, and (3) production cost. This technology is based on precision polishing of monocrystalline silicon for making a large number of mirror segments and on the meta-shell approach to integrate these mirror segments into a mirror assembly. The meta-shell approach takes advantage of the axial or rotational symmetry of an X-ray telescope to align and bond a large number of small, lightweight mirrors into a large mirror assembly. The most important features of this technology include: (1) potential to achieve the highest possible angular resolution dictated by optical design and diffraction; and (2) capable of implementing every conceivable optical design, such as Wolter-I, Wolter-Schwarzschild, as well as other variations to one or another aspect of a telescope. The simplicity and modular nature of the process makes it highly amenable to mass production, thereby making it possible to produce very large X-ray telescopes in a reasonable amount of time and at a reasonable cost. As of June 2017, the basic validity of this approach has been demonstrated by finite element analysis of its structural, thermal, and gravity release characteristics, and by the fabrication, alignment, bonding, and X-ray testing of mirror modules. Continued work in the coming years will raise the technical readiness of this technology for use by SMEX, MIDEX, Probe, as well as major flagship missions.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aaboud, M.; Aad, G.; Abbott, B.

    We report results of a search for new phenomena in final states with an energetic jet and large missing transverse momentum. The search uses proton-proton collision data corresponding to an integrated luminosity of 3.2 fb-1 at √s = 13 TeV collected in 2015 with the ATLAS detector at the Large Hadron Collider. Events are required to have at least one jet with a transverse momentum above 250 GeV and no leptons. Several signal regions are considered with increasing missing-transverse-momentum requirements between E_T^miss > 250 GeV and E_T^miss > 700 GeV. Good agreement is observed between the number of events in data and Standard Model predictions. The results are translated into exclusion limits in models with large extra spatial dimensions, pair production of weakly interacting dark-matter candidates, and the production of supersymmetric particles in several compressed scenarios.

  5. Multiscale numerical simulations of magnetoconvection at low magnetic Prandtl and Rossby numbers.

    NASA Astrophysics Data System (ADS)

    Maffei, S.; Calkins, M. A.; Julien, K. A.; Marti, P.

    2017-12-01

    The dynamics of the Earth's outer core is characterized by low values of the Rossby (Ro), Ekman and magnetic Prandtl numbers. These values indicate the large spectra of temporal and spatial scales that need to be accounted for in realistic numerical simulations of the system. Current direct numerical simulation are not capable of reaching this extreme regime, suggesting that a new class of models is required to account for the rich dynamics expected in the natural system. Here we present results from a quasi-geostrophic, multiscale model based on the scale separation implied by the low Ro typical of rapidly rotating systems. We investigate a plane layer geometry where convection is driven by an imposed temperature gradient and the hydrodynamic equations are modified by a large scale magnetic field. Analytical investigation shows that at values of thermal and magnetic Prandtl numbers relevant for liquid metals, the energetic requirements for the onset of convection is not significantly altered even in the presence of strong magnetic fields. Results from strongly forced nonlinear numerical simulations show the presence of an inverse cascade, typical of 2-D turbulence, when no or weak magnetic field is applied. For higher values of the magnetic field the inverse cascade is quenched.

  6. Creating order from chaos: part I: triage, initial care, and tactical considerations in mass casualty and disaster response.

    PubMed

    Baker, Michael S

    2007-03-01

    How do we train for the entire spectrum of potential emergency and crisis scenarios? Will we suddenly face large numbers of combat casualties, an earthquake, a plane crash, an industrial explosion, or a terrorist bombing? The daily routine can suddenly be complicated by large numbers of patients, exceeding the ability to treat in a routine fashion. Disaster events can result in patients with penetrating wounds, burns, blast injuries, chemical contamination, or all of these at once. Some events may disrupt infrastructure or result in loss of essential equipment or key personnel. The chaos of a catastrophic event impedes decision-making and effective treatment of patients. Disasters require a paradigm shift from the application of unlimited resources for the greatest good of each individual patient to the allocation of care, with limited resources, for the greatest good for the greatest number of patients. Training and preparation are essential to remain effective during crises and major catastrophic events. Disaster triage and crisis management represent a tactical art that incorporates clinical skills, didactic information, communication ability, leadership, and decision-making. Planning, rehearsing, and exercising various scenarios encourage the flexibility, adaptability, and innovation required in disaster settings. These skills can bring order to the chaos of overwhelming disaster events.

  7. A highly efficient bead extraction technique with low bead number for digital microfluidic immunoassay

    PubMed Central

    Tsai, Po-Yen; Lee, I-Chin; Hsu, Hsin-Yun; Huang, Hong-Yuan; Fan, Shih-Kang; Liu, Cheng-Hsien

    2016-01-01

    Here, we describe a technique to manipulate a low number of beads to achieve high washing efficiency with zero bead loss in the washing process of a digital microfluidic (DMF) immunoassay. Previously, two magnetic bead extraction methods were reported in the DMF platform: (1) a single-side electrowetting method and (2) a double-side electrowetting method. The first approach could provide high washing efficiency, but it required a large number of beads. The second approach could reduce the required number of beads, but it was inefficient where multiple washes were required. More importantly, bead loss during the washing process was unavoidable in both methods. Here, an improved double-side electrowetting method is proposed for bead extraction by utilizing a series of unequal electrodes. It is shown that, with a proper electrode size ratio, only one wash step is required to achieve a 98% washing rate without any bead loss at bead numbers of less than 100 in a droplet. It allows the use of only about 25 magnetic beads in the DMF immunoassay, effectively increasing the number of captured analytes on each bead. In our human soluble tumor necrosis factor receptor I (sTNF-RI) model immunoassay, the experimental results show that, compared to our previous results without the proposed bead extraction technique, the immunoassay with low bead number significantly enhances the fluorescence signal to provide a better limit of detection (3.14 pg/ml) with smaller reagent volumes (200 nl) and shorter analysis time (<1 h). This improved bead extraction technique not only can be used in the DMF immunoassay but also has great potential to be used in other bead-based DMF systems for different applications. PMID:26858807

  8. Front End for a neutrino factory or muon collider

    DOE PAGES

    Neuffer, David; Snopok, Pavel; Alexahin, Yuri

    2017-11-30

    A neutrino factory or muon collider requires the capture and cooling of a large number of muons. Scenarios for capture, bunching, phase-energy rotation and initial cooling of μ's produced from a proton source target have been developed, initially for neutrino factory scenarios. They require a drift section from the target, a bunching section and a Φ-δE rotation section leading into the cooling channel. Important concerns are rf limitations within the focusing magnetic fields and large losses in the transport. The currently preferred cooling channel design is an "HFOFO Snake" configuration that cools both μ+ and μ- transversely and longitudinally. Finally, the status of the design is presented and variations are discussed.

  9. Software Engineering for Scientific Computer Simulations

    NASA Astrophysics Data System (ADS)

    Post, Douglass E.; Henderson, Dale B.; Kendall, Richard P.; Whitney, Earl M.

    2004-11-01

    Computer simulation is becoming a very powerful tool for analyzing and predicting the performance of fusion experiments. Simulation efforts are evolving from including only a few effects to many effects, from small teams with a few people to large teams, and from workstations and small processor count parallel computers to massively parallel platforms. Successfully making this transition requires attention to software engineering issues. We report on the conclusions drawn from a number of case studies of large scale scientific computing projects within DOE, academia and the DoD. The major lessons learned include attention to sound project management including setting reasonable and achievable requirements, building a good code team, enforcing customer focus, carrying out verification and validation and selecting the optimum computational mathematics approaches.

  10. Cancer prevention clinical trials.

    PubMed

    Nixon, D W

    1994-01-01

    Many kinds of cancer are preventable. Avoidance of tobacco would essentially eliminate lung cancer and most head and neck cancers as well. Other common cancers (breast, colon, prostate) are related to diet and therefore may also be preventable, at least in part. Abundant epidemiologic and laboratory data link specific nutrients including fat, fiber and vitamins to cancer so that appropriate manipulation of these constituents might reduce cancer risk. Determination of appropriate manipulations requires prospective clinical trials in humans. Approximately 40 such trials are in progress. Some have been completed with encouraging results. Future large scale trials will require designs that overcome the barriers of cost, large subject numbers and long study duration. The use of "intermediate markers" rather than cancer end points is a strategy that will help overcome these barriers.

  11. Design and development of a quad copter (UMAASK) using CAD/CAM/CAE

    NASA Astrophysics Data System (ADS)

    Manarvi, Irfan Anjum; Aqib, Muhammad; Ajmal, Muhammad; Usman, Muhammad; Khurshid, Saqib; Sikandar, Usman

    Micro flying vehicles (MFV) have become a popular area of research due to economy of production, flexibility of launch and variety of applications. A large number of techniques, from pencil sketching to computer-based software, are being used for designing specific geometries and selecting materials to arrive at novel designs for specific requirements. The present research focused on the development of a suitable design configuration using CAD/CAM/CAE tools and techniques. A number of designs were reviewed for this purpose. Finally, a rotary-wing quadcopter flying vehicle design was considered appropriate for this research. Performance requirements were planned as a ceiling of approximately 10 meters, a weight of less than 500 grams, and the ability to take videos and pictures. Parts were designed using finite element analysis, manufactured using CNC machines and assembled to arrive at the final design, named UMAASK. Flight tests were carried out which confirmed the design requirements.

  12. Supporting scalability and flexibility in a distributed management platform

    NASA Astrophysics Data System (ADS)

    Jardin, P.

    1996-06-01

    The TeMIP management platform was developed to manage very large distributed systems such as telecommunications networks. The management of these networks imposes a number of fairly stringent requirements including the partitioning of the network, division of work based on skills and target system types and the ability to adjust the functions to specific operational requirements. This requires the ability to cluster managed resources into domains that are totally defined at runtime based on operator policies. This paper addresses some of the issues that must be addressed in order to add a dynamic dimension to a management solution.

  13. RabbitQR: fast and flexible big data processing at LSST data rates using existing, shared-use hardware

    NASA Astrophysics Data System (ADS)

    Kotulla, Ralf; Gopu, Arvind; Hayashi, Soichi

    2016-08-01

    Processing astronomical data to science readiness was and remains a challenge, in particular in the case of multi-detector instruments such as wide-field imagers. One such instrument, the WIYN One Degree Imager, is available to the astronomical community at large, and, in order to be scientifically useful to its varied user community on a short timescale, provides its users fully calibrated data in addition to the underlying raw data. However, time-efficient re-processing of the often large datasets with improved calibration data and/or software requires more than just a large number of CPU-cores and disk space. This is particularly relevant if all computing resources are general purpose and shared with a large number of users in a typical university setup. Our approach to address this challenge is a flexible framework, combining the best of both high performance (large number of nodes, internal communication) and high throughput (flexible/variable number of nodes, no dedicated hardware) computing. Based on the Advanced Message Queuing Protocol, we developed a Server-Manager-Worker framework. In addition to the server directing the work flow and the workers executing the actual work, the manager maintains a list of available workers, adds and/or removes individual workers from the worker pool, and re-assigns workers to different tasks. This provides the flexibility of optimizing the worker pool to the current task and workload, improves load balancing, and makes the most efficient use of the available resources. We present performance benchmarks and scaling tests, showing that, today, using existing, commodity shared-use hardware, we can process data with throughputs (including data reduction and calibration) approaching those expected in the early 2020s for future observatories such as the Large Synoptic Survey Telescope.
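
    The Server-Manager-Worker pattern on top of AMQP can be sketched with a plain work-queue worker, as below. The broker location, queue name and the reduce_frame() task are invented for illustration and are not the actual RabbitQR/ODI code; a manager process would simply start or stop such workers and point them at different queues.

      # Hedged sketch of an AMQP worker in the spirit of a Server-Manager-Worker setup.
      import pika

      def reduce_frame(payload: bytes) -> None:
          # placeholder for the actual calibration/reduction step
          print("processing", payload[:40])

      def on_task(ch, method, properties, body):
          reduce_frame(body)
          ch.basic_ack(delivery_tag=method.delivery_tag)   # report task completion back to the server

      connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
      channel = connection.channel()
      channel.queue_declare(queue="tasks", durable=True)   # work queue filled by the server
      channel.basic_qos(prefetch_count=1)                  # one task per worker at a time
      channel.basic_consume(queue="tasks", on_message_callback=on_task)
      channel.start_consuming()                            # a manager could add or remove such workers at will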

  14. Efficient and Robust Data Collection Using Compact Micro Hardware, Distributed Bus Architectures and Optimizing Software

    NASA Technical Reports Server (NTRS)

    Chau, Savio; Vatan, Farrokh; Randolph, Vincent; Baroth, Edmund C.

    2006-01-01

    Future in-space propulsion systems for exploration programs will invariably require data collection from a large number of sensors. Consider the sensors needed for monitoring several vehicle systems' states of health, including the collection of structural health data, over a large area. This would include the fuel tanks, habitat structure, and science containment of systems required for Lunar, Mars, or deep space exploration. Such a system would consist of several hundred or even thousands of sensors. Conventional avionics system design will require these sensors to be connected to a few Remote Health Units (RHU), which are connected to robust, micro flight computers through a serial bus. This results in a large mass of cabling and unacceptable weight. This paper first gives a survey of several techniques that may reduce the cabling mass for sensors. These techniques can be categorized into four classes: power line communication, serial sensor buses, compound serial buses, and wireless networks. The power line communication approach uses the power line to carry both power and data, so that the conventional data lines can be eliminated. The serial sensor bus approach reduces most of the cabling by connecting all the sensors with a single (or redundant) serial bus. Many standard industrial control and sensor buses can support several hundred nodes; however, they have not been space qualified. Conventional avionics serial buses such as the Mil-Std-1553B bus and IEEE 1394a are space qualified but can support only a limited number of nodes. The third approach is to combine avionics buses to increase their addressability. For the wireless network approach, the reliability, EMI/EMC, and flight qualification issues have to be addressed. Several wireless networks such as IEEE 802.11 and Ultra Wide Band are surveyed in this paper. The placement of sensors can also affect cable mass. Excessive sensors increase the number of cables unnecessarily, while an insufficient number of sensors may not provide adequate coverage of the system. This paper also discusses an optimal technique to place and validate sensors.

  15. How to Effectively Use Bismuth Quadruple Therapy: The Good, the Bad, and the Ugly

    PubMed Central

    Graham, David Y.; Lee, Sun-Young

    2015-01-01

    Bismuth triple therapy was the first truly effective Helicobacter pylori eradication therapy. The addition of a proton pump inhibitor largely overcame the problem of metronidazole resistance. Resistance to its use as the primary first-line therapy has centered on convenience (the large number of tablets required) and on side effects causing difficulties with patient adherence. Why the regimen is less successful in some regions remains unexplained, in part because of the lack of studies including susceptibility testing. A number of modifications have been proposed, such as twice-a-day therapy, which addresses both major criticisms, but the studies with susceptibility testing required to prove its effectiveness in areas of high metronidazole resistance are lacking. Most publications lack the data required to understand why they were successful or failed (e.g., detailed resistance and adherence data) and are therefore of little value. We discuss and provide recommendations regarding variations including substitution of doxycycline, amoxicillin, and twice-a-day therapy. We describe what is known and unknown and provide suggestions regarding what is needed to rationally and effectively use bismuth quadruple therapy. Its primary use is when penicillin cannot be used or when clarithromycin and metronidazole resistance is common. Durations of therapy less than 14 days are not recommended. PMID:26314667

  16. ACHIEVING THE REQUIRED COOLANT FLOW DISTRIBUTION FOR THE ACCELERATOR PRODUCTION OF TRITIUM (APT) TUNGSTEN NEUTRON SOURCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D. SIEBE; K. PASAMEHMETOGLU

    The Accelerator Production of Tritium neutron source consists of clad tungsten targets, which are concentric cylinders with a center rod. These targets are arranged in a matrix of tubes, producing a large number of parallel coolant paths. The coolant flow required to meet thermal-hydraulic design criteria varies with location. This paper describes the work performed to ensure an adequate coolant flow for each target for normal operation and residual heat-removal conditions.

  17. A comparison of quality and utilization problems in large and small group practices.

    PubMed

    Gleason, S C; Richards, M J; Quinnell, J E

    1995-12-01

    Physicians practicing in large, multispecialty medical groups share an organizational culture that differs from that of physicians in small or independent practices. Since 1980, there has been a sharp increase in the size of multispecialty group practice organizations, in part because of increased efficiencies of large group practices. The greater number of physicians and support personnel in a large group practice also requires a relatively more sophisticated management structure. The efficiencies, conveniences, and management structure of a large group practice provide an optimal environment to practice medicine. However, a search of the literature found no data linking a large group practice environment to practice outcomes. The purpose of the study reported in this article was to determine if physicians in large practices have fewer quality and utilization problems than physicians in small or independent practices.

  18. The Iowa Model for Pediatric Low Vision Services.

    ERIC Educational Resources Information Center

    Wilkinson, Mark E.; Stewart, Ian; Trantham, Carole S.

    2000-01-01

    This article describes the evolution of Iowa's model of low vision care for students with visual impairments. It reviews the benefits of a transdisciplinary team approach to providing low vision services for children with visual impairments, including a decrease in the number of students requiring large-print materials and related costs. (Contains…

  19. Optimize Resources and Help Reduce Cost of Ownership with Dell[TM] Systems Management

    ERIC Educational Resources Information Center

    Technology & Learning, 2008

    2008-01-01

    Maintaining secure, convenient administration of the PC system environment can be a significant drain on resources. Deskside visits can greatly increase the cost of supporting a large number of computers. Even simple tasks, such as tracking inventory or updating software, quickly become expensive when they require physically visiting every…

  20. The Long Duration Exposure Facility (LDEF). Mission 1 Experiments.

    ERIC Educational Resources Information Center

    Clark, Lenwood G., Ed.; And Others

    The Long Duration Exposure Facility (LDEF) has been designed to take advantage of the two-way transportation capability of the space shuttle by providing a large number of economical opportunities for science and technology experiments that require modest electrical power and data processing while in space and which benefit from postflight…

  1. Family Child Care Licensing Study, 1998.

    ERIC Educational Resources Information Center

    Children's Foundation, Washington, DC.

    This report details a survey of state child care regulatory agencies. Data on both small family child care homes (FCCH) and group or large family child care homes (LCCH or GCCH) are included and organized into 22 categories: (1) number of regulated homes; (2) definitions and regulatory requirements; (3) unannounced inspection procedure; (4)…

  2. Transportation infrastructure asset damage cost recovery correlated with shale oil/gas recovery operations in Louisiana : research project capsule : technology transfer program.

    DOT National Transportation Integrated Search

    2016-10-01

    Due to shale oil/gas recovery operations, a large number of truck trips on Louisiana roadways are required for transporting equipment and materials to and from the recovery sites. As a result, roads and bridges that were designed for ...

  3. Treatment of Ion-Atom Collisions Using a Partial-Wave Expansion of the Projectile Wavefunction

    ERIC Educational Resources Information Center

    Wong, T. G.; Foster, M.; Colgan, J.; Madison, D. H.

    2009-01-01

    We present calculations of ion-atom collisions using a partial-wave expansion of the projectile wavefunction. Most calculations of ion-atom collisions have typically used classical or plane-wave approximations for the projectile wavefunction, since partial-wave expansions are expected to require prohibitively large numbers of terms to converge…

  4. Transformative Inquiry in Teacher Education: Evoking the Soul of What Matters

    ERIC Educational Resources Information Center

    Tanaka, Michele T. D.

    2015-01-01

    Teaching requires the navigation of an intricate terrain of complex and often overlapping issues, many of which extend beyond the classroom setting. Teachers are uniquely placed to influence large numbers of learners beyond the delivery of prescribed curriculum, and therefore need to be particularly careful and aware of their professional ways of…

  5. Computer-Based Assessment of Complex Problem Solving: Concept, Implementation, and Application

    ERIC Educational Resources Information Center

    Greiff, Samuel; Wustenberg, Sascha; Holt, Daniel V.; Goldhammer, Frank; Funke, Joachim

    2013-01-01

    Complex Problem Solving (CPS) skills are essential to successfully deal with environments that change dynamically and involve a large number of interconnected and partially unknown causal influences. The increasing importance of such skills in the 21st century requires appropriate assessment and intervention methods, which in turn rely on adequate…

  6. 75 FR 77955 - Government Securities: Call for Large Position Reports

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-14

    ... Liberty Street, New York, New York 10045; or faxed to 212-720-5030. FOR FURTHER INFORMATION CONTACT: Lori... must include the required positions and administrative information. The reports may be faxed to (212... September 2013, Series AC-2013, have a CUSIP number of 912828 NY 2, a STRIPS principal component CUSIP...

  7. How Can Intercultural School Development Succeed? The Perspective of Teachers and Teacher Educators

    ERIC Educational Resources Information Center

    Kiel, Ewald; Syring, Marcus; Weiss, Sabine

    2017-01-01

    The large number of newly arrived individuals from other countries, particularly of young people, has had an enormous impact on the school system in Germany. The present study investigated requirements for successful intercultural school development. The study used investigative group discussions, where the groups were composed of teachers and…

  8. Projecting Enrollment in Rural Schools: A Study of Three Vermont School Districts

    ERIC Educational Resources Information Center

    Grip, Richard S.

    2004-01-01

    Large numbers of rural districts have experienced sharp declines in enrollment, unlike their suburban counterparts. Accurate enrollment projections are required, whether a district needs to build new schools or consolidate existing ones. For school districts having more than 600 students, a quantitative method such as the Cohort-Survival Ratio…

  9. Affective Experiences of International and Home Students during the Information Search Process

    ERIC Educational Resources Information Center

    Haley, Adele Nicole; Clough, Paul

    2017-01-01

    An increasing number of students are studying abroad requiring that they interact with information in languages other than their mother tongue. The UK in particular has seen a large growth in international students within Higher Education. These nonnative English speaking students present a distinct user group for university information services,…

  10. Genetic variance partitioning and genome-wide prediction with allele dosage information in autotetraploid potato

    USDA-ARS?s Scientific Manuscript database

    Potato breeding cycles typically last 6-7 years because of the modest seed multiplication rate and large number of traits required of new varieties. Genomic selection has the potential to increase genetic gain per unit of time, through higher accuracy and/or a shorter cycle. Both possibilities were ...

  11. General William Slim and the Power of Emotional and Cultural Intelligence in Multinational and Multicultural Operations

    DTIC Science & Technology

    2015-06-12

    including Sikhs, Punjabis, Rajputs, Gurkhas, and Jats to name only a few. Due to its multi-ethnic nature, British officers were required to adapt to the...including Jats, Dogras, Sikhs, Pathans, Rajputs and Punjabi Mussalmen, although a large number of mixed regiments did exist. The British regimental...

  12. Striatal Degeneration Impairs Language Learning: Evidence from Huntington's Disease

    ERIC Educational Resources Information Center

    De Diego-Balaguer, R.; Couette, M.; Dolbeau, G.; Durr, A.; Youssov, K.; Bachoud-Levi, A.-C.

    2008-01-01

    Although the role of the striatum in language processing is still largely unclear, a number of recent proposals have outlined its specific contribution. Different studies report evidence converging to a picture where the striatum may be involved in those aspects of rule-application requiring non-automatized behaviour. This is the main…

  13. Effects of Camera Arrangement on Perceptual-Motor Performance in Minimally Invasive Surgery

    ERIC Educational Resources Information Center

    Delucia, Patricia R.; Griswold, John A.

    2011-01-01

    Minimally invasive surgery (MIS) is performed for a growing number of treatments. Whereas open surgery requires large incisions, MIS relies on small incisions through which instruments are inserted and tissues are visualized with a camera. MIS results in benefits for patients compared with open surgery, but degrades the surgeon's perceptual-motor…

  14. 76 FR 35169 - Validation of Merchant Mariners' Vital Information and Issuance of Coast Guard Merchant Mariner's...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-16

    ... both temporary and permanent. Locations such as post offices can take secure fingerprints for purposes... large number of applicants might require fingerprints and identification checks, and that the Coast... interim rule and work with industry to come up with a better system for fingerprint and identification...

  15. Computational procedure for finite difference solution of one-dimensional heat conduction problems reduces computer time

    NASA Technical Reports Server (NTRS)

    Iida, H. T.

    1966-01-01

    Computational procedure reduces the numerical effort whenever the method of finite differences is used to solve ablation problems for which the surface recession is large relative to the initial slab thickness. The number of numerical operations required for a given maximum space mesh size is reduced.
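
    For context, the basic explicit (FTCS) finite-difference update for one-dimensional heat conduction looks as follows; this sketch does not model the surface recession that the procedure above addresses, and the material values are placeholders.

      # Explicit finite-difference (FTCS) sketch for 1-D heat conduction; all values are illustrative.
      import numpy as np

      alpha, dx, dt = 1e-5, 1e-3, 0.02         # diffusivity (m^2/s), mesh size (m), time step (s)
      r = alpha * dt / dx**2                    # must satisfy r <= 0.5 for stability
      assert r <= 0.5

      T = np.full(51, 300.0)                    # initial slab temperature (K)
      T[0] = 1000.0                             # heated front surface

      for _ in range(500):
          # interior update; boundaries held fixed (hot wall at T[0], back face at 300 K)
          T[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])

      print("front-region temperatures:", T[:5].round(1))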

  16. Single nucleotide polymorphisms generated by genotyping by sequencing to characterize genome-wide diversity, linkage disequilibrium, and selective sweeps in cultivated watermelon

    USDA-ARS?s Scientific Manuscript database

    Large datasets containing single nucleotide polymorphisms (SNPs) are used to analyze genome-wide diversity in a robust collection of cultivars from representative accessions, across the world. The extent of linkage disequilibrium (LD) within a population determines the number of markers required fo...

  17. Application of molecular target homology-based approaches to predict species sensitivities to two pesticides, permethrin and propiconozole

    EPA Science Inventory

    In the U.S., registration of pesticide active ingredients requires a battery of intensive and costly in vivo toxicity tests which utilize large numbers of test animals. These tests use a limited array of model species from various aquatic and terrestrial taxa to represent all pla...

  18. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spentzouris, Panagiotis; /Fermilab; Cary, John

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  19. A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing

    PubMed Central

    Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian

    2016-01-01

    Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowd-sourced information provided by a large number of users when they are walking in the buildings as the source of location fingerprint data. Through the variation characteristics of users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623
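
    A hedged sketch of the clustering step, assuming "AP-Cluster" corresponds to affinity-propagation clustering of the crowdsourced RSS fingerprints; the synthetic fingerprints, anchor positions and nearest-representative lookup below are illustrative only and not the paper's implementation.

      # Extract representative fingerprints from crowdsourced RSS vectors via affinity propagation.
      import numpy as np
      from sklearn.cluster import AffinityPropagation

      rng = np.random.default_rng(0)
      # crowd-sourced fingerprints: rows = samples, columns = RSS from 4 access points (dBm)
      fingerprints = rng.normal(loc=[-40, -60, -70, -55], scale=3.0, size=(200, 4))

      ap = AffinityPropagation(random_state=0).fit(fingerprints)
      representatives = ap.cluster_centers_          # one representative fingerprint per cluster

      def locate(query, reps, positions):
          # nearest-representative lookup; positions would come from the door anchors
          d = np.linalg.norm(reps - query, axis=1)
          return positions[int(np.argmin(d))]

      positions = rng.uniform(0, 50, size=(len(representatives), 2))  # placeholder coordinates
      print(locate(fingerprints[0], representatives, positions))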

  20. Computational strategies for alternative single-step Bayesian regression models with large numbers of genotyped and non-genotyped animals.

    PubMed

    Fernando, Rohan L; Cheng, Hao; Golden, Bruce L; Garrick, Dorian J

    2016-12-08

    Two types of models have been used for single-step genomic prediction and genome-wide association studies that include phenotypes from both genotyped animals and their non-genotyped relatives. The two types are breeding value models (BVM) that fit breeding values explicitly and marker effects models (MEM) that express the breeding values in terms of the effects of observed or imputed genotypes. MEM can accommodate a wider class of analyses, including variable selection or mixture model analyses. The order of the equations that need to be solved and the inverses required in their construction vary widely, and thus the computational effort required depends upon the size of the pedigree, the number of genotyped animals and the number of loci. We present computational strategies to avoid storing large, dense blocks of the MME that involve imputed genotypes. Furthermore, we present a hybrid model that fits a MEM for animals with observed genotypes and a BVM for those without genotypes. The hybrid model is computationally attractive for pedigree files containing millions of animals with a large proportion of those being genotyped. We demonstrate the practicality on both the original MEM and the hybrid model using real data with 6,179,960 animals in the pedigree with 4,934,101 phenotypes and 31,453 animals genotyped at 40,214 informative loci. To complete a single-trait analysis on a desk-top computer with four graphics cards required about 3 h using the hybrid model to obtain both preconditioned conjugate gradient solutions and 42,000 Markov chain Monte-Carlo (MCMC) samples of breeding values, which allowed making inferences from posterior means, variances and covariances. The MCMC sampling required one quarter of the effort when the hybrid model was used compared to the published MEM. We present a hybrid model that fits a MEM for animals with genotypes and a BVM for those without genotypes. Its practicality and considerable reduction in computing effort was demonstrated. This model can readily be extended to accommodate multiple traits, multiple breeds, maternal effects, and additional random effects such as polygenic residual effects.
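
    The preconditioned conjugate gradient solves mentioned above can be illustrated with a minimal Jacobi-preconditioned CG routine; the toy matrix below stands in for the mixed-model equations, which in the study involve millions of equations.

      # Minimal Jacobi-preconditioned conjugate gradient for a symmetric positive-definite system.
      import numpy as np

      def pcg(A, b, tol=1e-10, max_iter=1000):
          M_inv = 1.0 / np.diag(A)                 # Jacobi (diagonal) preconditioner
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv * r
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = M_inv * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      A = np.array([[4.0, 1.0], [1.0, 3.0]])       # toy test matrix, not the MME of the paper
      b = np.array([1.0, 2.0])
      print(pcg(A, b))                              # approximately [0.0909, 0.6364]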

  1. How big is too big or how many partners are needed to build a large project which still can be managed successfully?

    NASA Astrophysics Data System (ADS)

    Henkel, Daniela; Eisenhauer, Anton

    2017-04-01

    During the last decades, the number of large research projects has increased, and with it the requirement for multidisciplinary, multisectoral collaboration. Such complex, large-scale projects demand new competencies to form, manage, and use large, diverse teams as a competitive advantage. For complex projects the effort is magnified: large international research consortia involve academic and non-academic partners, including big industries, NGOs, and private and public bodies, all with cultural differences, divergent expectations of teamwork, and differences in collaboration between national and multinational administrations and research organisations, all of which challenge the organisation and management of such multi-partner consortia. How many partners are needed to establish and conduct collaboration with a multidisciplinary and multisectoral approach? How much personnel effort and what kinds of management techniques are required for such projects? This presentation identifies advantages and challenges of large research projects based on experience gained in the context of an Innovative Training Network (ITN) project within the Marie Skłodowska-Curie Actions of the European HORIZON 2020 program. Possible strategies are discussed to circumvent and avoid conflicts from the beginning of the project.

  2. A comparison of three methods for estimating the requirements for medical specialists: the case of otolaryngologists.

    PubMed Central

    Anderson, G F; Han, K C; Miller, R H; Johns, M E

    1997-01-01

    OBJECTIVE: To compare three methods of computing the national requirements for otolaryngologists in 1994 and 2010. DATA SOURCES: Three large HMOs, a Delphi panel, the Bureau of Health Professions (BHPr), and published sources. STUDY DESIGN: Three established methods of computing requirements for otolaryngologists were compared: managed care, demand-utilization, and adjusted needs assessment. Under the managed care model, a published method based on reviewing staffing patterns in HMOs was modified to estimate the number of otolaryngologists. We obtained from BHPr estimates of work force projections from their demand model. To estimate the adjusted needs model, we convened a Delphi panel of otolaryngologists using the methodology developed by the Graduate Medical Education National Advisory Committee (GMENAC). DATA COLLECTION/EXTRACTION METHODS: Not applicable. PRINCIPAL FINDINGS: Wide variation in the estimated number of otolaryngologists required occurred across the three methods. Within each model it was possible to alter the requirements for otolaryngologists significantly by changing one or more of the key assumptions. The managed care model has a potential to obtain the most reliable estimates because it reflects actual staffing patterns in institutions that are attempting to use physicians efficiently. CONCLUSIONS: Estimates of work force requirements can vary considerably if one or more assumptions are changed. In order for the managed care approach to be useful for actual decision making concerning the appropriate number of otolaryngologists required, additional research on the methodology used to extrapolate the results to the general population is necessary. PMID:9180613

  3. A comparison of several methods of solving nonlinear regression groundwater flow problems

    USGS Publications Warehouse

    Cooley, Richard L.

    1985-01-01

    Computational efficiency and computer memory requirements for four methods of minimizing functions were compared for four test nonlinear-regression steady state groundwater flow problems. The fastest methods were the Marquardt and quasi-linearization methods, which required almost identical computer times and numbers of iterations; the next fastest was the quasi-Newton method, and last was the Fletcher-Reeves method, which did not converge in 100 iterations for two of the problems. The fastest method per iteration was the Fletcher-Reeves method, and this was followed closely by the quasi-Newton method. The Marquardt and quasi-linearization methods were slower. For all four methods the speed per iteration was directly related to the number of parameters in the model. However, this effect was much more pronounced for the Marquardt and quasi-linearization methods than for the other two. Hence the quasi-Newton (and perhaps Fletcher-Reeves) method might be more efficient than either the Marquardt or quasi-linearization methods if the number of parameters in a particular model were large, although this remains to be proven. The Marquardt method required somewhat less central memory than the quasi-linearization method for three of the four problems. For all four problems the quasi-Newton method required roughly two thirds to three quarters of the memory required by the Marquardt method, and the Fletcher-Reeves method required slightly less memory than the quasi-Newton method. Memory requirements were not excessive for any of the four methods.
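
    A minimal sketch of the Marquardt (Levenberg-Marquardt) iteration on a toy exponential-decay model, to make the damped normal-equations step concrete; this is generic nonlinear least squares, not the groundwater flow regression compared in the study.

      # Marquardt-style damped Gauss-Newton iteration on a hypothetical model y = a*exp(-k*t).
      import numpy as np

      t = np.linspace(0, 4, 20)
      y_obs = 3.0 * np.exp(-1.2 * t) + 0.01 * np.random.default_rng(1).normal(size=t.size)

      def residuals(p):
          a, k = p
          return y_obs - a * np.exp(-k * t)

      def jacobian(p):
          a, k = p
          e = np.exp(-k * t)
          return np.column_stack([-e, a * t * e])    # d(residual)/d(a), d(residual)/d(k)

      p, lam = np.array([1.0, 0.5]), 1e-3
      for _ in range(50):
          r, J = residuals(p), jacobian(p)
          H = J.T @ J + lam * np.diag(np.diag(J.T @ J))   # Marquardt damping of the normal equations
          step = np.linalg.solve(H, -J.T @ r)
          if np.sum(residuals(p + step) ** 2) < np.sum(r ** 2):
              p, lam = p + step, lam * 0.5                # accept step, reduce damping
          else:
              lam *= 10.0                                  # reject step, increase damping
      print("estimated (a, k):", p)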

  4. Empirical gradient threshold technique for automated segmentation across image modalities and cell lines.

    PubMed

    Chalfoun, J; Majurski, M; Peskin, A; Breen, C; Bajcsy, P; Brady, M

    2015-10-01

    New microscopy technologies are enabling image acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21,000×21,000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meet all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2) and all failed the requirement for robust parameters that do not require re-adjustment with time (requirement 5). We present a novel and empirically derived image gradient threshold selection method for separating foreground and background pixels in an image that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 Megapixels to 850 Megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescent. Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference data set with a 10-fold cross-validation method. EGT segments cells or colonies with resulting Dice accuracy index measurements above 0.92 for all cross-validation data sets. EGT results have also been visually verified on a much larger data set that includes bright field and Differential Interference Contrast (DIC) images, 16 cell lines and 61 time-sequence data sets, for a total of 17,479 images. This method is implemented as an open-source plugin to ImageJ as well as a standalone executable that can be downloaded from the following link: https://isg.nist.gov/.
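
    As a rough illustration of gradient-threshold segmentation of the kind discussed above, the sketch below thresholds the gradient magnitude and cleans up the resulting mask. The percentile cut and cleanup parameters are placeholders chosen for illustration, not the empirically derived EGT mapping.

    ```python
    # Minimal sketch of gradient-threshold foreground segmentation (not the actual EGT).
    import numpy as np
    from scipy import ndimage

    def gradient_threshold_segment(image, percentile=90, min_object_size=100):
        gx = ndimage.sobel(image.astype(float), axis=0)
        gy = ndimage.sobel(image.astype(float), axis=1)
        grad = np.hypot(gx, gy)                       # gradient magnitude
        cut = np.percentile(grad, percentile)         # EGT derives this cut empirically instead
        mask = ndimage.binary_fill_holes(grad > cut)  # candidate foreground, holes filled
        labels, n = ndimage.label(mask)               # drop small spurious objects
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        keep_labels = np.nonzero(sizes >= min_object_size)[0] + 1
        return np.isin(labels, keep_labels)
    ```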

  5. Accurate, high-throughput typing of copy number variation using paralogue ratios from dispersed repeats

    PubMed Central

    Armour, John A. L.; Palla, Raquel; Zeeuwen, Patrick L. J. M.; den Heijer, Martin; Schalkwijk, Joost; Hollox, Edward J.

    2007-01-01

    Recent work has demonstrated an unexpected prevalence of copy number variation in the human genome, and has highlighted the part this variation may play in predisposition to common phenotypes. Some important genes vary in number over a high range (e.g. DEFB4, which commonly varies between two and seven copies), and have posed formidable technical challenges for accurate copy number typing, so that there are no simple, cheap, high-throughput approaches suitable for large-scale screening. We have developed a simple comparative PCR method based on dispersed repeat sequences, using a single pair of precisely designed primers to amplify products simultaneously from both test and reference loci, which are subsequently distinguished and quantified via internal sequence differences. We have validated the method for the measurement of copy number at DEFB4 by comparison of results from >800 DNA samples with copy number measurements by MAPH/REDVR, MLPA and array-CGH. The new Paralogue Ratio Test (PRT) method can require as little as 10 ng genomic DNA, appears to be comparable in accuracy to the other methods, and for the first time provides a rapid, simple and inexpensive method for copy number analysis, suitable for application to typing thousands of samples in large case-control association studies. PMID:17175532
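
    The quantitative logic of a paralogue ratio test can be illustrated with the following relation (an illustrative simplification, not a protocol detail from the paper): if the reference locus is present at a known copy number $N_\mathrm{ref}$ (typically 2), the test-locus copy number is estimated from the ratio of the co-amplified products,

    $$ N_\mathrm{test} \approx N_\mathrm{ref} \times \frac{A_\mathrm{test}}{A_\mathrm{ref}}, $$

    where $A_\mathrm{test}$ and $A_\mathrm{ref}$ are the measured quantities of the test and reference amplicons, distinguished after PCR by their internal sequence differences.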

  6. Evolving from bioinformatics in-the-small to bioinformatics in-the-large.

    PubMed

    Parker, D Stott; Gorlick, Michael M; Lee, Christopher J

    2003-01-01

    We argue the significance of a fundamental shift in bioinformatics, from in-the-small to in-the-large. Adopting a large-scale perspective is a way to manage the problems endemic to the world of the small: constellations of incompatible tools for which the effort required to assemble an integrated system exceeds the perceived benefit of the integration. Where bioinformatics in-the-small is about data and tools, bioinformatics in-the-large is about metadata and dependencies. Dependencies represent the complexities of large-scale integration, including the requirements and assumptions governing the composition of tools. The popular make utility is a very effective system for defining and maintaining simple dependencies, and it offers a number of insights about the essence of bioinformatics in-the-large. Keeping an in-the-large perspective has been very useful to us in large bioinformatics projects. We give two fairly different examples, and extract lessons from them showing how it has helped. These examples both suggest the benefit of explicitly defining and managing knowledge flows and knowledge maps (which represent metadata regarding types, flows, and dependencies), and also suggest approaches for developing bioinformatics database systems. Generally, we argue that large-scale engineering principles can be successfully adapted from disciplines such as software engineering and data management, and that having an in-the-large perspective will be a key advantage in the next phase of bioinformatics development.

  7. Design rules for quasi-linear nonlinear optical structures

    NASA Astrophysics Data System (ADS)

    Lytel, Richard; Mossman, Sean M.; Kuzyk, Mark G.

    2015-09-01

    The maximization of the intrinsic optical nonlinearities of quantum structures for ultrafast applications requires a spectrum scaling as the square of the energy eigenstate number or faster. This is a necessary condition for an intrinsic response approaching the fundamental limits. A second condition is a design generating eigenstates whose ground and lowest excited state probability densities are spatially separated to produce large differences in dipole moments while maintaining a reasonable spatial overlap to produce large off-diagonal transition moments. A structure whose design meets both conditions will necessarily have large first or second hyperpolarizabilities. These two conditions are fundamental heuristics for the design of any nonlinear optical structure.

  8. FDTD method for laser absorption in metals for large scale problems.

    PubMed

    Deng, Chun; Ki, Hyungson

    2013-10-21

    The FDTD method has been successfully used for many electromagnetic problems, but its application to laser material processing has been limited because even a several-millimeter domain requires a prohibitively large number of grids. In this article, we present a novel FDTD method for simulating large-scale laser beam absorption problems, especially for metals, by enlarging laser wavelength while maintaining the material's reflection characteristics. For validation purposes, the proposed method has been tested with in-house FDTD codes to simulate p-, s-, and circularly polarized 1.06 μm irradiation on Fe and Sn targets, and the simulation results are in good agreement with theoretical predictions.

  9. Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering

    NASA Astrophysics Data System (ADS)

    Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki

    2018-03-01

    We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
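
    For reference, multiple importance sampling is usually implemented with the balance heuristic (a standard formulation, not necessarily the exact weighting adopted in this paper). With $n_s$ samples drawn from each sampling strategy $p_s$, the combined estimator and weights are

    $$ F \approx \sum_s \frac{1}{n_s} \sum_{j=1}^{n_s} w_s(X_{s,j})\,\frac{f(X_{s,j})}{p_s(X_{s,j})}, \qquad w_s(x) = \frac{n_s\, p_s(x)}{\sum_k n_k\, p_k(x)}, $$

    which lets a stratified sampler and an importance sampler be combined so that neither strategy's weaknesses dominate the variance.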

  10. Anthropogenic effects on marine mollusks diversity and abundance; mangrove mollusks along an environmental gradient at Teyab, Persian gulf

    NASA Astrophysics Data System (ADS)

    Azarmanesh, H.; Javanshir, A.

    2009-04-01

    Management of coastal environments requires an understanding of the ecological relationships among different habitats and their biotas. The mollusk diversity and density and the sedimentological properties of mangrove (Avicennia marina) stands at Teyab were compared across two different seasons. The polluted area and the cleaner area showed clear separation on the basis of environmental characteristics and benthic mollusks. Numbers of mollusk taxa were generally larger at the cleaner sites, while numbers of individuals of several taxa were larger at the other sites. The total number of individuals did not differ between the two seasons, largely due to the presence of large numbers of the mud-living gastropod Cerithium cingulata at the polluted sites. Differences in the mollusks coincided with differences in the nature of the sediment. Sediments in the cleaner stands were more compacted and contained less organic matter and leaf litter. Analysis of sediment chemistry suggested that mangrove sediments in the cleaner sites were able to take up more N and P than those in the other sites. Key Words: Sustainable development, Impact, Gastropods, Bivalves, Persian Gulf

  11. Rare Cell Detection by Single-Cell RNA Sequencing as Guided by Single-Molecule RNA FISH.

    PubMed

    Torre, Eduardo; Dueck, Hannah; Shaffer, Sydney; Gospocic, Janko; Gupte, Rohit; Bonasio, Roberto; Kim, Junhyong; Murray, John; Raj, Arjun

    2018-02-28

    Although single-cell RNA sequencing can reliably detect large-scale transcriptional programs, it is unclear whether it accurately captures the behavior of individual genes, especially those that express only in rare cells. Here, we use single-molecule RNA fluorescence in situ hybridization as a gold standard to assess trade-offs in single-cell RNA-sequencing data for detecting rare cell expression variability. We quantified the gene expression distribution for 26 genes that range from ubiquitous to rarely expressed and found that the correspondence between estimates across platforms improved with both transcriptome coverage and increased number of cells analyzed. Further, by characterizing the trade-off between transcriptome coverage and number of cells analyzed, we show that when the number of genes required to answer a given biological question is small, then greater transcriptome coverage is more important than analyzing large numbers of cells. More generally, our report provides guidelines for selecting quality thresholds for single-cell RNA-sequencing experiments aimed at rare cell analyses.

  12. Improved technique that allows the performance of large-scale SNP genotyping on DNA immobilized by FTA technology.

    PubMed

    He, Hongbin; Argiro, Laurent; Dessein, Helia; Chevillard, Christophe

    2007-01-01

    FTA technology is a novel method designed to simplify the collection, shipment, archiving and purification of nucleic acids from a wide variety of biological sources. The number of punches that can normally be obtained from a single specimen card is often, however, insufficient for the testing of the large numbers of loci required to identify genetic factors that control human susceptibility or resistance to multifactorial diseases. In this study, we propose an improved technique to perform large-scale SNP genotyping. We applied a whole genome amplification method to amplify DNA from buccal cell samples stabilized using FTA technology. The results show that using the improved technique it is possible to perform up to 15,000 genotypes from one buccal cell sample. Furthermore, the procedure is simple. We consider this improved technique to be a promising method for performing large-scale SNP genotyping because the FTA technology simplifies the collection, shipment, archiving and purification of DNA, while whole genome amplification of FTA card bound DNA produces sufficient material for the determination of thousands of SNP genotypes.

  13. Geomorphic analysis of large alluvial rivers

    NASA Astrophysics Data System (ADS)

    Thorne, Colin R.

    2002-05-01

    Geomorphic analysis of a large river presents particular challenges and requires a systematic and organised approach because of the spatial scale and system complexity involved. This paper presents a framework and blueprint for geomorphic studies of large rivers developed in the course of basic, strategic and project-related investigations of a number of large rivers. The framework demonstrates the need to begin geomorphic studies early in the pre-feasibility stage of a river project and carry them through to implementation and post-project appraisal. The blueprint breaks down the multi-layered and multi-scaled complexity of a comprehensive geomorphic study into a number of well-defined and semi-independent topics, each of which can be performed separately to produce a clearly defined, deliverable product. Geomorphology increasingly plays a central role in multi-disciplinary river research and the importance of effective quality assurance makes it essential that audit trails and quality checks are hard-wired into study design. The structured approach presented here provides output products and production trails that can be rigorously audited, ensuring that the results of a geomorphic study can stand up to the closest scrutiny.

  14. A Large number of fast cosmological simulations

    NASA Astrophysics Data System (ADS)

    Koda, Jun; Kazin, E.; Blake, C.

    2014-01-01

    Mock galaxy catalogs are essential tools to analyze large-scale structure data. Many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain new, improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600/h Mpc on a side, which is the minimum requirement from the survey volume and observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes with 216 computing cores. We have completed the 3600 simulations with a reasonable computation time of 200k core hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.
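
    The idea behind COLA, as it is usually described (paraphrased here, not quoted from the abstract), is to split each particle trajectory into an analytic Lagrangian-perturbation-theory part and a residual that the N-body integrator follows,

    $$ \mathbf{x}(\tau) = \mathbf{x}_\mathrm{LPT}(\tau) + \delta\mathbf{x}(\tau), $$

    so the time stepper only has to resolve the small residual $\delta\mathbf{x}$; this is what makes acceptable large-scale clustering achievable with only about 10 time steps per realization.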

  15. Image segmentation evaluation for very-large datasets

    NASA Astrophysics Data System (ADS)

    Reeves, Anthony P.; Liu, Shuang; Xie, Yiting

    2016-03-01

    With the advent of modern machine learning methods and fully automated image analysis there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual markings do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes is achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of a number of very important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.

  16. Parallel Calculation of Sensitivity Derivatives for Aircraft Design using Automatic Differentiation

    NASA Technical Reports Server (NTRS)

    Bischof, C. H.; Green, L. L.; Haigler, K. J.; Knauff, T. L., Jr.

    1994-01-01

    Sensitivity derivative (SD) calculation via automatic differentiation (AD) typical of that required for the aerodynamic design of a transport-type aircraft is considered. Two ways of computing SD via code generated by the ADIFOR automatic differentiation tool are compared for efficiency and applicability to problems involving large numbers of design variables. A vector implementation on a Cray Y-MP computer is compared with a coarse-grained parallel implementation on an IBM SP1 computer, employing a Fortran M wrapper. The SD are computed for a swept transport wing in turbulent, transonic flow; the number of geometric design variables varies from 1 to 60 with coupling between a wing grid generation program and a state-of-the-art, 3-D computational fluid dynamics program, both augmented for derivative computation via AD. For a small number of design variables, the Cray Y-MP implementation is much faster. As the number of design variables grows, however, the IBM SP1 becomes an attractive alternative in terms of compute speed, job turnaround time, and total memory available for solutions with large numbers of design variables. The coarse-grained parallel implementation also can be moved easily to a network of workstations.

  17. Addressing Alcohol Use and Problems in Mandated College Students: A Randomized Clinical Trial Using Stepped Care

    ERIC Educational Resources Information Center

    Borsari, Brian; Hustad, John T. P.; Mastroleo, Nadine R.; Tevyaw, Tracy O'Leary; Barnett, Nancy P.; Kahler, Christopher W.; Short, Erica Eaton; Monti, Peter M.

    2012-01-01

    Objective: Over the past 2 decades, colleges and universities have seen a large increase in the number of students referred to the administration for alcohol policies violations. However, a substantial portion of mandated students may not require extensive treatment. Stepped care may maximize treatment efficiency and greatly reduce the demands on…

  18. Chemical degradation of TMR multi-lure dispensers for fruit fly detection weathered under California climatic conditions

    USDA-ARS?s Scientific Manuscript database

    There are >160,000 federal and state fruit fly detection traps deployed in southern and western U.S. and Puerto Rico. In California alone, >100,000 traps are deployed and maintained just for exotic fruit flies detection. Fruit fly detection and eradication requires deployment of large numbers of tra...

  19. 24 CFR 968.103 - Allocation of funds under section 14.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... from previous years may remain in the reserve until allocated. The requirements governing the reserve... year cap. (Weighted at 206.5); (5) In the case of a large agency, the number of units with 2 or more... 5 years after such reduction, and consists of 50 percent of the published Total Development Cost for...

  20. 24 CFR 968.103 - Allocation of funds under section 14.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... from previous years may remain in the reserve until allocated. The requirements governing the reserve... year cap. (Weighted at 206.5); (5) In the case of a large agency, the number of units with 2 or more... 5 years after such reduction, and consists of 50 percent of the published Total Development Cost for...

  1. Closing the Gap: Education Requirements of the 21st Century Production Workforce

    ERIC Educational Resources Information Center

    Stone, Kyle B.; Kaminski, Karen; Gloeckner, Gene

    2009-01-01

    Due to the large number of individuals retiring over the next ten years a critical shortage of people available to work within the manufacturing industry is looming (Dychtwald, Erickson, & Morison, 2006). This shortage is exacerbated by the lack of a properly educated workforce that meets the demands of the 21st century manufacturer (Judy…

  2. Personal Meaning in the Public Sphere: The Standardisation and Rationalisation of Biodiversity Data in the UK and the Netherlands

    ERIC Educational Resources Information Center

    Lawrence, Anna; Turnhout, Esther

    2010-01-01

    The demand for biodiversity data is increasing. Governments require standardised, objective data to underpin planning and conservation decisions. These data are produced by large numbers of (volunteer) natural historians and non-governmental organisations. This article analyses the interface between the state and the volunteer naturalists to…

  3. A Comparison of the Handwriting Abilities of Secondary Students with Visual Impairments and Those of Sighted Students

    ERIC Educational Resources Information Center

    Harris-Brown, Talitha; Richmond, Janet; Maddalena, Sebastian Della; Jaworski, Alinta

    2015-01-01

    Despite the large number of people with visual impairments in Australia, all Western Australian secondary students are required to complete their secondary exams using handwriting, unless they qualify for special provisions. Students with visual impairments do not necessarily qualify for special provisions on the basis of their visual impairment…

  4. What's Culture Got to Do with It? Educational Research as a Necessarily Interdisciplinary Enterprise

    ERIC Educational Resources Information Center

    Cole, Michael

    2010-01-01

    The author examines the role of culture in education in historical perspective to suggest the conditions required to promote generalized educational reform. Although deliberate instruction appears to be a ubiquitous characteristic of human beings, schools arise only when large numbers of people begin to live in close proximity, using technologies…

  5. 76 FR 40898 - Final Priorities, Requirements, and Selection Criteria; Charter Schools Program (CSP) Grants for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-12

    ... schools. (3) A multi-year financial and operating model for the organization, a demonstrated commitment of... school model and to expand the number of high-quality charter schools available to students across the... percent threshold in this priority is consistent with the average percentage of students in large urban...

  6. Excellence for All: A Nietzschean-Inspired Approach in Professional Higher Education

    ERIC Educational Resources Information Center

    Joosten, Henriëtta

    2015-01-01

    Europe's objectives of economic growth and job creation require large numbers of professionals who are willing and able to innovate and rise above themselves. In this article, a concept of excellence is developed that can be broadly applied in professional higher education. This concept of excellence derives from three concepts which the German…

  7. Using Green Chemistry Principles as a Framework to Incorporate Research into the Organic Laboratory Curriculum

    ERIC Educational Resources Information Center

    Lee, Nancy E.; Gurney, Rich; Soltzberg, Leonard

    2014-01-01

    Despite the accepted pedagogical value of integrating research into the laboratory curriculum, this approach has not been widely adopted. The activation barrier to this change is high, especially in organic chemistry, where a large number of students are required to take this course, special glassware or setups may be needed, and dangerous…

  8. Measures of Strength and Fitness for Older Populations.

    ERIC Educational Resources Information Center

    Osness, Wayne H.; Hiebert, Lujean M.

    Assessing the overall strength of the musculature does not require testing of large numbers of muscle groups and can be accomplished with three or four tests. Small batteries of strength tests have been devised to predict total strength. The best combination of tests for males is thigh flexors, leg extensors, arm flexors, and pectoralis major. The battery…

  9. A Simplified Method for Tissue Engineering Skeletal Muscle Organoids in Vitro

    NASA Technical Reports Server (NTRS)

    Shansky, Janet; DelTatto, Michael; Chromiak, Joseph; Vandenburgh, Herman

    1996-01-01

    Tissue-engineered three dimensional skeletal muscle organ-like structures have been formed in vitro from primary myoblasts by several different techniques. This report describes a simplified method for generating large numbers of muscle organoids from either primary embryonic avian or neonatal rodent myoblasts, which avoids the requirements for stretching and other mechanical stimulation.

  10. Large-Corpus Phoneme and Word Recognition and the Generality of Lexical Context in CVC Word Perception

    ERIC Educational Resources Information Center

    Gelfand, Jessica T.; Christie, Robert E.; Gelfand, Stanley A.

    2014-01-01

    Purpose: Speech recognition may be analyzed in terms of recognition probabilities for perceptual wholes (e.g., words) and parts (e.g., phonemes), where j or the j-factor reveals the number of independent perceptual units required for recognition of the whole (Boothroyd, 1968b; Boothroyd & Nittrouer, 1988; Nittrouer & Boothroyd, 1990). For…
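
    For readers outside speech perception research, the j-factor relation referenced here is commonly written as

    $$ p_w = p_p^{\,j} \quad\Longrightarrow\quad j = \frac{\log p_w}{\log p_p}, $$

    where $p_w$ is the probability of recognizing the whole (e.g., a CVC word) and $p_p$ the probability of recognizing a part (e.g., a phoneme); $j$ then estimates the number of effectively independent perceptual units, with smaller $j$ indicating stronger contextual support.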

  11. Understanding Optical Trapping Phenomena: A Simulation for Undergraduates

    ERIC Educational Resources Information Center

    Mas, J.; Farre, A.; Cuadros, J.; Juvells, I.; Carnicer, A.

    2011-01-01

    Optical trapping is an attractive and multidisciplinary topic that has become the center of attention to a large number of researchers. Moreover, it is a suitable subject for advanced students that requires a knowledge of a wide range of topics. As a result, it has been incorporated into some syllabuses of both undergraduate and graduate programs.…

  12. Not Just "Rocks for Jocks": Who Are Introductory Geology Students and Why Are They Here?

    ERIC Educational Resources Information Center

    Gilbert, Lisa A.; Stempien, Jennifer; McConnell, David A.; Budd, David A.; van der Hoeven Kraft, Katrien J.; Bykerk-Kauffman, Ann; Jones, Megan H.; Knight, Catharine C.; Matheney, Ronald K.; Perkins, Dexter; Wirth, Karl R.

    2012-01-01

    Do students really enroll in Introductory Geology because they think it is "rocks for jocks"? In this study, we examine the widely held assumption that students view geology as a qualitative and remedial option for fulfilling a general education requirement. We present the first quantitative characterization of a large number of…

  13. Choose to Use: Scaffolding for Technology Learning Needs in a Project-Based Learning Environment

    ERIC Educational Resources Information Center

    Weimer, Peggy D.

    2017-01-01

    Project-based learning is one approach used by teachers to meet the challenge of developing more technologically proficient students. This approach, however, requires students to manage a large number of tasks including the mastery of technology. If a student's perception that their capability to perform a task falls below the task's difficulty,…

  14. The Importance of Improving the Nutritional Quality of Packed Lunches in U.S. Schools

    ERIC Educational Resources Information Center

    Misyak, Sarah; Farris, Alisha; Mann, Georgianna; Serrano, Elena

    2015-01-01

    Schools represent an ideal venue to influence dietary habits of large numbers of children. While the National School Lunch Program (NSLP) is mandated to meet clear nutrition standards for calories, whole grains, fruits, vegetables, milk, sodium, fat, and saturated fat, there are no nutritional requirements for packed lunches. This Current Issue…

  15. No Randomization? No Problem: Experimental Control and Random Assignment in Single Case Research

    ERIC Educational Resources Information Center

    Ledford, Jennifer R.

    2018-01-01

    Randomization of large number of participants to different treatment groups is often not a feasible or preferable way to answer questions of immediate interest to professional practice. Single case designs (SCDs) are a class of research designs that are experimental in nature but require only a few participants, all of whom receive the…

  16. Multi-resource and multi-scale approaches for meeting the challenge of managing multiple species

    Treesearch

    Frank R. Thompson; Deborah M. Finch; John R. Probst; Glen D. Gaines; David S. Dobkin

    1999-01-01

    The large number of Neotropical migratory bird (NTMB) species and their diverse habitat requirements create conflicts and difficulties for land managers and conservationists. We provide examples of assessments or conservation efforts that attempt to address the problem of managing for multiple NTMB species. We advocate approaches at a variety of spatial and geographic...

  17. Experiment requirements document for reflight of the small helium-cooled infrared telescope experiment

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The four astronomical objectives addressed include: the measurement and mapping of extended low surface brightness infrared emission from the galaxy; the measurement of diffuse emission from intergalactic material and/or galaxies and quasi-stellar objects; the measurement of the zodiacal dust emission; and the measurement of a large number of discrete infrared sources.

  18. Sanitary Schoolhouses: Legal Requirements in Indiana and Ohio. Bulletin, 1913, No. 52. Whole Number 563

    ERIC Educational Resources Information Center

    United States Bureau of Education, Department of the Interior, 1913

    1913-01-01

    Scores of millions of dollars are spent annually in the United States for new school buildings. With this large expenditure has come a general desire that schoolhouses shall be usable, healthful, comfortable, and beautiful. Educators and architects have united in devising plans for school buildings. This bureau has published a valuable bulletin…

  19. APL: An Alternative to the Multi-Language Environment for Education. Systems Research Memo Number Four.

    ERIC Educational Resources Information Center

    Lippert, Henry T.; Harris, Edward V.

    The diverse requirements for computing facilities in education place heavy demands upon available resources. Although multiple or very large computers can supply such diverse needs, their cost makes them impractical for many institutions. Small computers which serve a few specific needs may be an economical answer. However, to serve operationally…

  20. Single-Case Experimental Designs in Educational Research: A Methodology for Causal Analyses in Teaching and Learning

    ERIC Educational Resources Information Center

    Plavnick, Joshua B.; Ferreri, Summer J.

    2013-01-01

    Current legislation requires educational practices be informed by science. The effort to establish educational practices supported by science has, to date, emphasized experiments with large numbers of participants who are randomly assigned to an intervention or control condition. A potential limitation of such an emphasis at the expense of other…

  1. 14 CFR 91.23 - Truth-in-leasing clause requirement in leases and conditional sales contracts.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Aircraft Registration Branch, Attn: Technical Section, P.O. Box 25724, Oklahoma City, OK 73125; (2) A copy... the airport of departure; (ii) The departure time; and (iii) The registration number of the aircraft... contract of conditional sale involving a U.S.-registered large civil aircraft and entered into after...

  2. 14 CFR 91.23 - Truth-in-leasing clause requirement in leases and conditional sales contracts.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Aircraft Registration Branch, Attn: Technical Section, P.O. Box 25724, Oklahoma City, OK 73125; (2) A copy... the airport of departure; (ii) The departure time; and (iii) The registration number of the aircraft... contract of conditional sale involving a U.S.-registered large civil aircraft and entered into after...

  3. 14 CFR 91.23 - Truth-in-leasing clause requirement in leases and conditional sales contracts.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Aircraft Registration Branch, Attn: Technical Section, P.O. Box 25724, Oklahoma City, OK 73125; (2) A copy... the airport of departure; (ii) The departure time; and (iii) The registration number of the aircraft... contract of conditional sale involving a U.S.-registered large civil aircraft and entered into after...

  4. 14 CFR 91.23 - Truth-in-leasing clause requirement in leases and conditional sales contracts.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Aircraft Registration Branch, Attn: Technical Section, P.O. Box 25724, Oklahoma City, OK 73125; (2) A copy... the airport of departure; (ii) The departure time; and (iii) The registration number of the aircraft... contract of conditional sale involving a U.S.-registered large civil aircraft and entered into after...

  5. Workflow management in large distributed systems

    NASA Astrophysics Data System (ADS)

    Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.

    2011-12-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.

  6. A multi-scalar PDF approach for LES of turbulent spray combustion

    NASA Astrophysics Data System (ADS)

    Raman, Venkat; Heye, Colin

    2011-11-01

    A comprehensive joint-scalar probability density function (PDF) approach is proposed for large eddy simulation (LES) of turbulent spray combustion and tests are conducted to analyze the validity and modeling requirements. The PDF method has the advantage that the chemical source term appears closed but requires models for the small scale mixing process. A stable and consistent numerical algorithm for the LES/PDF approach is presented. To understand the modeling issues in the PDF method, direct numerical simulation of a spray flame at three different fuel droplet Stokes numbers and an equivalent gaseous flame are carried out. Assumptions in closing the subfilter conditional diffusion term in the filtered PDF transport equation are evaluated for various model forms. In addition, the validity of evaporation rate models in high Stokes number flows is analyzed.

  7. Indirect addressing and load balancing for faster solution to Mandelbrot Set on SIMD architectures

    NASA Technical Reports Server (NTRS)

    Tomboulian, Sherryl

    1989-01-01

    SIMD computers with local indirect addressing allow programs to have queues and buffers, making certain kinds of problems much more efficient. Examined here is a class of problems characterized by computations on data points where the computation is identical, but the convergence rate is data dependent. Normally, in this situation, the algorithm time is governed by the maximum number of iterations required by any single point. Using indirect addressing allows a processor to proceed to the next data point when it is done, reducing the overall number of iterations required to approach the mean convergence rate when a sufficiently large problem set is solved. Load balancing techniques can be applied for additional performance improvement. Simulations of this technique applied to solving Mandelbrot Sets indicate significant performance gains.
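
    A minimal serial sketch of the queue-refilling idea is given below; it is illustrative only, since the actual technique targets SIMD hardware with local indirect addressing, and the lane count and iteration cap here are arbitrary.

    ```python
    # Each "lane" iterates its own Mandelbrot point; as soon as a point escapes or
    # hits the iteration cap, the lane pulls the next point from a shared queue
    # instead of idling until the slowest point in the batch finishes.
    from collections import deque

    def mandelbrot_lanes(points, num_lanes=8, max_iter=1000):
        queue = deque(points)                        # points c = complex(x, y) still to process
        lanes, results = [], {}
        while queue and len(lanes) < num_lanes:
            lanes.append([queue.popleft(), 0j, 0])   # per-lane state: [c, z, iteration count]
        while lanes:
            for lane in lanes:                       # one identical step on every lane
                c, z, n = lane
                lane[1] = z * z + c
                lane[2] = n + 1
            done = [lane for lane in lanes if abs(lane[1]) > 2.0 or lane[2] >= max_iter]
            for lane in done:
                results[lane[0]] = lane[2]           # record escape (or cap) iteration
                lanes.remove(lane)
                if queue:                            # refill the lane with the next data point
                    lanes.append([queue.popleft(), 0j, 0])
        return results
    ```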

  8. Nursing benefits of using an automated injection system for ictal brain single photon emission computed tomography.

    PubMed

    Vonhofen, Geraldine; Evangelista, Tonya; Lordeon, Patricia

    2012-04-01

    The traditional method of administering radioactive isotopes to pediatric patients undergoing ictal brain single photon emission computed tomography testing has been by manual injections. This method presents certain challenges for nursing, including time requirements and safety risks. This quality improvement project discusses the implementation of an automated injection system for isotope administration and its impact on staffing, safety, and nursing satisfaction. It was conducted in an epilepsy monitoring unit at a large urban pediatric facility. Results of this project showed a decrease in the number of nurses exposed to radiation and improved nursing satisfaction with the use of the automated injection system. In addition, there was a decrease in the number of nursing hours required during ictal brain single photon emission computed tomography testing.

  9. Report of the Plasma Physics and Environmental Perturbation Laboratory (PPEPL) working groups. Volume 1: Plasma probes, wakes, and sheaths working group

    NASA Technical Reports Server (NTRS)

    1974-01-01

    It is shown in this report that comprehensive in-situ study of all aspects of the entire zone of disturbance caused by a body in a flowing plasma resulted in a large number of requirements on the shuttle-PPEPL facility. A large amount of necessary in-situ observation can be obtained by adopting appropriate modes of performing the experiments. Requirements are indicated for worthwhile studies, of some aspects of the problems, which can be carried out effectively while imposing relatively few constraints on the early missions. Considerations for the desired growth and improvement of the PPEPL to facilitate more complete studies in later missions are also discussed. For Vol. 2, see N74-28170; for Vol. 3, see N74-28171.

  10. An Operations Concept for the Next Generation VLA

    NASA Astrophysics Data System (ADS)

    Kepley, Amanda; McKinnon, Mark; Selina, Rob; Murphy, Eric Joseph; ngVLA project

    2018-01-01

    This poster presents an operations plan for the next generation VLA (ngVLA), which is a proposed 214-element interferometer operating from ~1-115 GHz, located in the southwestern United States. The operations requirements for this instrument are driven by the large number of antennas spread out over a multi-state area and a cap on the operations budget of 3 times that of the current VLA. These constraints require that maintenance is a continuous process and that individual antennas are self-sufficient, making flexible subarrays crucial. The ngVLA will produce science-ready data products for its users, building on the pioneering work currently being done at ALMA and the JVLA. Finally, the ngVLA will adopt a user support model similar to those at other large facilities (ALMA, HST, JWST, etc.).

  11. Multiplexing Short Primers for Viral Family PCR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, S N; Hiddessen, A L; Hara, C A

    We describe a Multiplex Primer Prediction (MPP) algorithm to build multiplex compatible primer sets for large, diverse, and unalignable sets of target sequences. The MPP algorithm is scalable to larger target sets than other available software, and it does not require a multiple sequence alignment. We applied it to questions in viral detection, and demonstrated that there are no universally conserved priming sequences among viruses and that it could require an unfeasibly large number of primers (≈3700 18-mers or ≈2000 10-mers) to generate amplicons from all sequenced viruses. We then designed primer sets separately for each viral family, and for several diverse species such as foot-and-mouth disease virus, hemagglutinin and neuraminidase segments of influenza A virus, Norwalk virus, and HIV-1.

  12. Building occupancy simulation and data assimilation using a graph-based agent-oriented model

    NASA Astrophysics Data System (ADS)

    Rai, Sanish; Hu, Xiaolin

    2018-07-01

    Building occupancy simulation and estimation simulates the dynamics of occupants and estimates their real-time spatial distribution in a building. It requires a simulation model and an algorithm for data assimilation that assimilates real-time sensor data into the simulation model. Existing building occupancy simulation models include agent-based models and graph-based models. The agent-based models suffer high computation cost for simulating large numbers of occupants, and graph-based models overlook the heterogeneity and detailed behaviors of individuals. Recognizing the limitations of existing models, this paper presents a new graph-based agent-oriented model which can efficiently simulate large numbers of occupants in various kinds of building structures. To support real-time occupancy dynamics estimation, a data assimilation framework based on Sequential Monte Carlo Methods is also developed and applied to the graph-based agent-oriented model to assimilate real-time sensor data. Experimental results show the effectiveness of the developed model and the data assimilation framework. The major contributions of this work are to provide an efficient model for building occupancy simulation that can accommodate large numbers of occupants and an effective data assimilation framework that can provide real-time estimations of building occupancy from sensor data.
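
    A minimal sketch of one Sequential Monte Carlo assimilation cycle of the kind described above is shown below; `simulate` and `likelihood` are hypothetical callables standing in for the graph-based occupancy model and the sensor model, and are not part of the paper's framework.

    ```python
    # Bootstrap particle filter step: propagate, weight by the sensor observation, resample.
    import numpy as np

    def assimilate_step(particles, observation, simulate, likelihood, rng):
        # simulate(state, rng) -> next state; likelihood(obs, state) -> p(obs | state)
        predicted = [simulate(p, rng) for p in particles]   # prediction step
        weights = np.array([likelihood(observation, p) for p in predicted], dtype=float)
        weights /= weights.sum()                            # normalize importance weights
        idx = rng.choice(len(predicted), size=len(predicted), p=weights)
        return [predicted[i] for i in idx]                  # resampled ensemble

    # Example setup: rng = np.random.default_rng(0)
    ```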

  13. Hybrid estimation of complex systems.

    PubMed

    Hofbaur, Michael W; Williams, Brian C

    2004-10-01

    Modern automated systems evolve both continuously and discretely, and hence require estimation techniques that go well beyond the capability of a typical Kalman Filter. Multiple model (MM) estimation schemes track these system evolutions by applying a bank of filters, one for each discrete system mode. Modern systems, however, are often composed of many interconnected components that exhibit rich behaviors, due to complex, system-wide interactions. Modeling these systems leads to complex stochastic hybrid models that capture the large number of operational and failure modes. This large number of modes makes a typical MM estimation approach infeasible for online estimation. This paper analyzes the shortcomings of MM estimation, and then introduces an alternative hybrid estimation scheme that can efficiently estimate complex systems with large number of modes. It utilizes search techniques from the toolkit of model-based reasoning in order to focus the estimation on the set of most likely modes, without missing symptoms that might be hidden amongst the system noise. In addition, we present a novel approach to hybrid estimation in the presence of unknown behavioral modes. This leads to an overall hybrid estimation scheme for complex systems that robustly copes with unforeseen situations in a degraded, but fail-safe manner.
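
    For context, a standard multiple-model estimator (this is the generic textbook update, not necessarily the exact scheme analyzed in the paper) runs one mode-matched filter per discrete mode $i$ and updates the posterior mode probabilities from each filter's measurement likelihood $\Lambda_i(k)$:

    $$ \mu_i(k) = \frac{\mu_i(k-1)\,\Lambda_i(k)}{\sum_j \mu_j(k-1)\,\Lambda_j(k)}. $$

    The cost therefore grows with the number of modes tracked, which is exactly what makes exhaustive MM estimation infeasible for component-based systems with very many modes and what motivates the focused, search-based approach described above.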

  14. Disorder from the Bulk Ionic Liquid in Electric Double Layer Transistors

    DOE PAGES

    Petach, Trevor A.; Reich, Konstantin V.; Zhang, Xiao; ...

    2017-07-28

    Ionic liquid gating has a number of advantages over solid-state gating, especially for flexible or transparent devices and for applications requiring high carrier densities. But, the large number of charged ions near the channel inevitably results in Coulomb scattering, which limits the carrier mobility in otherwise clean systems. We develop a model for this Coulomb scattering. We then validate our model experimentally using ionic liquid gating of graphene across varying thicknesses of hexagonal boron nitride, demonstrating that disorder in the bulk ionic liquid often dominates the scattering.

  15. A parallel VLSI architecture for a digital filter using a number theoretic transform

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Reed, I. S.; Yeh, C. S.; Shao, H. M.

    1983-01-01

    The advantages of a very large scale integration (VLSI) architecture for implementing a digital filter using Fermat number transforms (FNT) are the following: It requires no multiplication. Only additions and bit rotations are needed. It alleviates the usual dynamic range limitation for long sequence FNT's. It utilizes the FNT and inverse FNT circuits 100% of the time. The lengths of the input data and filter sequences can be arbitrary and different. It is regular, simple, and expandable, and as a consequence suitable for VLSI implementation.
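
    For readers unfamiliar with Fermat number transforms, the standard definition (included here for context, not taken from the paper) is a DFT-like transform over the integers modulo a Fermat number:

    $$ X_k \equiv \sum_{n=0}^{N-1} x_n\, \alpha^{nk} \pmod{F_t}, \qquad F_t = 2^{2^t} + 1, $$

    where the root of unity $\alpha$ can be chosen as a power of 2, so every multiplication by $\alpha^{nk}$ reduces to a bit shift or rotation modulo $F_t$; this is why the transform stage needs only additions and bit rotations, as the abstract notes.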

  16. Rapid Geometry Creation for Computer-Aided Engineering Parametric Analyses: A Case Study Using ComGeom2 for Launch Abort System Design

    NASA Technical Reports Server (NTRS)

    Hawke, Veronica; Gage, Peter; Manning, Ted

    2007-01-01

    ComGeom2, a tool developed to generate Common Geometry representation for multidisciplinary analysis, has been used to create a large set of geometries for use in a design study requiring analysis by two computational codes. This paper describes the process used to generate the large number of configurations and suggests ways to further automate the process and make it more efficient for future studies. The design geometry for this study is the launch abort system of the NASA Crew Launch Vehicle.

  17. Parameter identification of civil engineering structures

    NASA Technical Reports Server (NTRS)

    Juang, J. N.; Sun, C. T.

    1980-01-01

    This paper concerns the development of an identification method required in determining structural parameter variations for systems subjected to an extended exposure to the environment. The concept of structural identifiability of a large scale structural system in the absence of damping is presented. Three criteria are established indicating that a large number of system parameters (the coefficient parameters of the differential equations) can be identified by a few actuators and sensors. An eight-bay-fifteen-story frame structure is used as example. A simple model is employed for analyzing the dynamic response of the frame structure.

  18. The millennium development goals and household energy requirements in Nigeria.

    PubMed

    Ibitoye, Francis I

    2013-01-01

    Access to clean and affordable energy is critical for the realization of the United Nations' Millennium Development Goals, or MDGs. In many developing countries, a large proportion of household energy requirements is met by use of non-commercial fuels such as wood, animal dung, crop residues, etc., and the associated health and environmental hazards of these are well documented. In this work, a scenario analysis of energy requirements in Nigeria's households is carried out to compare estimates between 2005 and 2020 under a reference scenario with estimates under the assumption that Nigeria will meet the millennium goals. Requirements for energy under the MDG scenario are measured by the impacts on energy use of a reduction by half, in 2015, of (a) the number of households without access to electricity for basic services, (b) the number of households without access to modern energy carriers for cooking, and (c) the number of families living in one-room households in Nigeria's overcrowded urban slums. For these to be achieved, household electricity consumption would increase by about 41% over the study period, while the use of modern fuels would more than double. This migration to the use of modern fuels for cooking results in a reduction in the overall fuelwood consumption, from 5 GJ/capita in 2005, to 2.9 GJ/capita in 2015.

  19. Large number limit of multifield inflation

    NASA Astrophysics Data System (ADS)

    Guo, Zhong-Kai

    2017-12-01

    We compute the tensor and scalar spectral indices $n_t$ and $n_s$, the tensor-to-scalar ratio $r$, and the consistency relation $n_t/r$ in general monomial multifield slow-roll inflation models with potentials $V \sim \sum_i \lambda_i |\phi_i|^{p_i}$. The general models give a novel relation: $n_t$, $n_s$ and $n_t/r$ are all proportional to the logarithm of the number of fields $N_f$ when $N_f$ becomes extremely large, of order $O(10^{40})$. An upper bound $N_f \lesssim N_* e^{Z N_*}$ is given by requiring the slow-variation parameter to remain small enough, where $N_*$ is the e-folding number and $Z$ is a function of the distributions of $\lambda_i$ and $p_i$. Besides, $n_t/r$ differs from the single-field result $-1/8$ with substantial probability except for a few very special cases. Finally, we derive theoretical bounds $r > 2/N_*$ ($r \gtrsim 0.03$) and for $n_t$, which can be tested by observation in the near future.

  20. Low-Cost Nested-MIMO Array for Large-Scale Wireless Sensor Applications.

    PubMed

    Zhang, Duo; Wu, Wen; Fang, Dagang; Wang, Wenqin; Cui, Can

    2017-05-12

    In modern communication and radar applications, large-scale sensor arrays have increasingly been used to improve the performance of a system. However, the hardware cost and circuit power consumption scale linearly with the number of sensors, which makes the whole system expensive and power-hungry. This paper presents a low-cost nested multiple-input multiple-output (MIMO) array, which is capable of providing $O(2N^2)$ degrees of freedom (DOF) with $O(N)$ physical sensors. The sensor locations of the proposed array have closed-form expressions. Thus, the aperture size and number of DOF can be predicted as a function of the total number of sensors. Additionally, with the help of time-sequence-phase-weighting (TSPW) technology, only one receiver channel is required for sampling the signals received by all of the sensors, which is conducive to reducing the hardware cost and power consumption. Numerical simulation results demonstrate the effectiveness and superiority of the proposed array.
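
    As a point of reference (a standard nested-array result, not taken from this abstract), a two-level nested array with $N_1$ densely spaced and $N_2$ sparsely spaced sensors produces a difference coarray with

    $$ 2N_2(N_1 + 1) - 1 $$

    consecutive lags, which is how $O(N)$ physical sensors can yield $O(N^2)$ degrees of freedom; the nested MIMO design described here builds on the same coarray principle, but the exact expression should be taken from the paper itself.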

  1. Low-Cost Nested-MIMO Array for Large-Scale Wireless Sensor Applications

    PubMed Central

    Zhang, Duo; Wu, Wen; Fang, Dagang; Wang, Wenqin; Cui, Can

    2017-01-01

    In modern communication and radar applications, large-scale sensor arrays have increasingly been used to improve the performance of a system. However, the hardware cost and circuit power consumption scale linearly with the number of sensors, which makes the whole system expensive and power-hungry. This paper presents a low-cost nested multiple-input multiple-output (MIMO) array, which is capable of providing $O(2N^2)$ degrees of freedom (DOF) with $O(N)$ physical sensors. The sensor locations of the proposed array have closed-form expressions. Thus, the aperture size and number of DOF can be predicted as a function of the total number of sensors. Additionally, with the help of time-sequence-phase-weighting (TSPW) technology, only one receiver channel is required for sampling the signals received by all of the sensors, which is conducive to reducing the hardware cost and power consumption. Numerical simulation results demonstrate the effectiveness and superiority of the proposed array. PMID:28498329

  2. Automation of Technology for Cancer Research.

    PubMed

    van der Ent, Wietske; Veneman, Wouter J; Groenewoud, Arwin; Chen, Lanpeng; Tulotta, Claudia; Hogendoorn, Pancras C W; Spaink, Herman P; Snaar-Jagalska, B Ewa

    2016-01-01

    Zebrafish embryos can be obtained for research purposes in large numbers at low cost and embryos develop externally in limited space, making them highly suitable for high-throughput cancer studies and drug screens. Non-invasive live imaging of various processes within the larvae is possible due to their transparency during development, and a multitude of available fluorescent transgenic reporter lines. To perform high-throughput studies, handling large numbers of embryos and larvae is required. With such a high number of individuals, even minute tasks may become time-consuming and arduous. In this chapter, an overview is given of the developments in the automation of various steps of large-scale zebrafish cancer research for discovering important cancer pathways and drugs for the treatment of human disease. The focus lies on various tools developed for cancer cell implantation, embryo handling and sorting, microfluidic systems for imaging and drug treatment, and image acquisition and analysis. Examples will be given of the employment of these technologies within the fields of toxicology research and cancer research.

  3. Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework

    PubMed Central

    2012-01-01

    Background For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. Results We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. Conclusion The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources. PMID:23216909

  4. Large-scale Individual-based Models of Pandemic Influenza Mitigation Strategies

    NASA Astrophysics Data System (ADS)

    Kadau, Kai; Germann, Timothy; Longini, Ira; Macken, Catherine

    2007-03-01

    We have developed a large-scale stochastic simulation model to investigate the spread of a pandemic strain of influenza virus through the U.S. population of 281 million people, to assess the likely effectiveness of various potential intervention strategies including antiviral agents, vaccines, and modified social mobility (including school closure and travel restrictions) [1]. The heterogeneous population structure and mobility is based on available Census and Department of Transportation data where available. Our simulations demonstrate that, in a highly mobile population, restricting travel after an outbreak is detected is likely to delay slightly the time course of the outbreak without impacting the eventual number ill. For large basic reproductive numbers R0, we predict that multiple strategies in combination (involving both social and medical interventions) will be required to achieve a substantial reduction in illness rates. [1] T. C. Germann, K. Kadau, I. M. Longini, and C. A. Macken, Proc. Natl. Acad. Sci. (USA) 103, 5935-5940 (2006).

  5. Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework.

    PubMed

    Lewis, Steven; Csordas, Attila; Killcoyne, Sarah; Hermjakob, Henning; Hoopmann, Michael R; Moritz, Robert L; Deutsch, Eric W; Boyle, John

    2012-12-05

    For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources.
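
    As a schematic sketch of the map/reduce idea behind such a distributed search (illustrative only; this is not Hydra's code, and the record layout and scorer below are hypothetical), the database and spectra can be keyed by precursor-mass bin in the map step so that each reduce task scores only co-binned spectrum-peptide pairs:

    ```python
    # Schematic map/reduce sketch of distributed spectrum-to-peptide matching
    # (illustrative; not Hydra's implementation, record fields are hypothetical).
    from collections import defaultdict

    BIN_WIDTH = 1.0  # Da: spectra and peptides are keyed by precursor-mass bin

    def map_phase(peptides, spectra):
        """Emit (mass_bin, record) pairs so the matching work can be partitioned."""
        for pep in peptides:
            yield int(pep["mass"] // BIN_WIDTH), ("peptide", pep)
        for spec in spectra:
            yield int(spec["precursor_mass"] // BIN_WIDTH), ("spectrum", spec)

    def shuffle(pairs):
        """Group emitted pairs by key, as the framework does between map and reduce."""
        groups = defaultdict(list)
        for key, record in pairs:
            groups[key].append(record)
        return groups

    def reduce_phase(groups, score):
        """Score each spectrum only against peptides sharing its mass bin."""
        for records in groups.values():
            peptides = [r for kind, r in records if kind == "peptide"]
            for kind, spec in records:
                if kind == "spectrum" and peptides:
                    yield spec["id"], max((score(spec, p), p["sequence"]) for p in peptides)

    # Toy scorer standing in for K-score: closer precursor mass = better match.
    score = lambda spec, pep: -abs(spec["precursor_mass"] - pep["mass"])
    peptides = [{"sequence": "PEPTIDE", "mass": 799.4}, {"sequence": "PROTEIN", "mass": 799.9}]
    spectra = [{"id": "scan_1", "precursor_mass": 799.8}]
    print(dict(reduce_phase(shuffle(map_phase(peptides, spectra)), score)))
    ```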

  6. Integrating complexity into data-driven multi-hazard supply chain network strategies

    USGS Publications Warehouse

    Long, Suzanna K.; Shoberg, Thomas G.; Ramachandran, Varun; Corns, Steven M.; Carlo, Hector J.

    2013-01-01

    Major strategies in the wake of a large-scale disaster have focused on short-term emergency response solutions. Few consider medium-to-long-term restoration strategies that reconnect urban areas to the national supply chain networks (SCN) and their supporting infrastructure. To re-establish this connectivity, the relationships within the SCN must be defined and formulated as a model of a complex adaptive system (CAS). A CAS model is a representation of a system that consists of large numbers of inter-connections, demonstrates non-linear behaviors and emergent properties, and responds to stimulus from its environment. CAS modeling is an effective method of managing complexities associated with SCN restoration after large-scale disasters. In order to populate the data space large data sets are required. Currently access to these data is hampered by proprietary restrictions. The aim of this paper is to identify the data required to build a SCN restoration model, look at the inherent problems associated with these data, and understand the complexity that arises due to integration of these data.

  7. DEM Based Modeling: Grid or TIN? The Answer Depends

    NASA Astrophysics Data System (ADS)

    Ogden, F. L.; Moreno, H. A.

    2015-12-01

    The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course with increasing watershed scale come corresponding increases in watershed complexity, including wide ranging water management infrastructure and objectives, and ever increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grids, or coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase required effort in model setup, parameter estimation, and coupling with forcing data which are often gridded. This presentation discusses the costs and benefits of the use of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high performance computing watershed simulator.

  8. Efficient Manufacturing of Therapeutic Mesenchymal Stromal Cells Using the Quantum Cell Expansion System

    PubMed Central

    Hanley, Patrick J.; Mei, Zhuyong; Durett, April G.; Cabreira-Harrison, Marie da Graca; Klis, Mariola; Li, Wei; Zhao, Yali; Yang, Bing; Parsha, Kaushik; Mir, Osman; Vahidy, Farhaan; Bloom, Debra; Rice, R. Brent; Hematti, Peiman; Savitz, Sean I; Gee, Adrian P.

    2014-01-01

    Background The use of bone marrow-derived mesenchymal stromal cells (MSCs) as a cellular therapy for various diseases, such as graft-versus-host-disease, diabetes, ischemic cardiomyopathy, and Crohn's disease, has produced promising results in early-phase clinical trials. However, for widespread application and use in later phase studies, manufacture of these cells needs to be cost effective, safe, and reproducible. Current methods of manufacturing in flasks or cell factories are labor-intensive, involve a large number of open procedures, and require prolonged culture times. Methods We evaluated the Quantum Cell Expansion system for the expansion of large numbers of MSCs from unprocessed bone marrow in a functionally closed system and compared the results to a flask-based method currently in clinical trials. Results After only two passages, we were able to expand a mean of 6.6×10^8 MSCs from 25 mL of bone marrow reproducibly. The mean expansion time was 21 days, and the cells obtained were able to differentiate into all three lineages: chondrocytes, osteoblasts, and adipocytes. The Quantum was able to generate the target cell number of 2.0×10^8 cells in an average of 9 fewer days and in half the number of passages required during flask-based expansion. We estimated the Quantum would involve 133 open procedures versus 54,400 in flasks when manufacturing for a clinical trial. Quantum-expanded MSCs infused into an ischemic stroke rat model were therapeutically active. Discussion The Quantum is a novel method of generating high numbers of MSCs in less time and at lower passages when compared to flasks. In the Quantum, the risk of contamination is substantially reduced due to the substantial decrease in open procedures. PMID:24726657

  9. Prior Authorization Requirements for Proprotein Convertase Subtilisin/Kexin Type 9 Inhibitors Across US Private and Public Payers.

    PubMed

    Doshi, Jalpa A; Puckett, Justin T; Parmacek, Michael S; Rader, Daniel J

    2018-01-01

    Proprotein convertase subtilisin/kexin type 9 inhibitors (PCSK9is) are an innovative treatment option for patients with familial hypercholesterolemia or clinical atherosclerotic cardiovascular disease who require further lowering of low-density lipoprotein cholesterol. However, the high costs of these agents have spurred payers to implement utilization management policies to ensure appropriate use. We examined prior authorization (PA) requirements for PCSK9is across private and public US payers. We conducted an analysis of 2016 formulary coverage and PA data from a large, proprietary database with information on policies governing >95% of Americans with prescription drug coverage (275.3 million lives) within 3872 plans across the 4 major insurance segments (commercial, health insurance exchange, Medicare, and Medicaid). The key measures included administrative PA criteria (prescriber specialty, number of criteria in PA policy or number of fields on PA form, requirements for medical record submission, reauthorization requirements) and clinical/diagnostic PA criteria (approved conditions, required laboratories or other tests, required concomitant therapy, step therapy requirements, continuation criteria) for each of 2 Food and Drug Administration-approved PCSK9is. Select measures (eg, number of PA criteria/fields, medical record submission requirements) were obtained for 2 comparator cardiometabolic drugs (ezetimibe and liraglutide). Between 82% and 97% of individuals were enrolled in plans implementing PA for PCSK9is (depending on insurance segment), and one third to two thirds of these enrollees faced PAs restricting PCSK9i prescribing to a specialist. For patients with familial hypercholesterolemia, diagnostic confirmation via genetic testing or meeting minimum clinical scores/criteria was also required. PA requirements were more extensive for PCSK9is as compared with the other cardiometabolic drugs (ie, contained 3×-11× the number of PA criteria or fields on PA forms and more frequently involved the submission of medical records as supporting documentation). PA requirements for PCSK9is are greater than for selected other drugs within the cardiometabolic disease area, raising concerns about whether payer policies to discourage inappropriate use may also be restricting access to these drugs in patients who need them. © 2018 American Heart Association, Inc.

  10. Building large area CZT imaging detectors for a wide-field hard X-ray telescope—ProtoEXIST1

    NASA Astrophysics Data System (ADS)

    Hong, J.; Allen, B.; Grindlay, J.; Chammas, N.; Barthelemy, S.; Baker, R.; Gehrels, N.; Nelson, K. E.; Labov, S.; Collins, J.; Cook, W. R.; McLean, R.; Harrison, F.

    2009-07-01

    We have constructed a moderately large area (32 cm^2), fine pixel (2.5 mm pixel, 5 mm thick) CZT imaging detector which constitutes the first section of a detector module (256 cm^2) developed for a balloon-borne wide-field hard X-ray telescope, ProtoEXIST1. ProtoEXIST1 is a prototype for the High Energy Telescope (HET) in the Energetic X-ray imaging Survey Telescope (EXIST), a next generation space-borne multi-wavelength telescope. We have constructed a large (nearly gapless) detector plane through a modularization scheme by tiling of a large number of 2 cm × 2 cm CZT crystals. Our innovative packaging method is ideal for many applications such as coded-aperture imaging, where a large, continuous detector plane is desirable for the optimal performance. Currently we have been able to achieve an energy resolution of 3.2 keV (FWHM) at 59.6 keV on average, which is exceptional considering the moderate pixel size and the number of detectors in simultaneous operation. We expect to complete two modules (512 cm^2) within the next few months as more CZT becomes available. We plan to test the performance of these detectors in a near space environment in a series of high altitude balloon flights, the first of which is scheduled for Fall 2009. These detector modules are the first in a series of progressively more sophisticated detector units and packaging schemes planned for ProtoEXIST2 & 3, which will demonstrate the technology required for the advanced CZT imaging detectors (0.6 mm pixel, 4.5 m^2 area) required in EXIST/HET.

  11. Large number discrimination by mosquitofish.

    PubMed

    Agrillo, Christian; Piffer, Laura; Bisazza, Angelo

    2010-12-22

    Recent studies have demonstrated that fish display rudimentary numerical abilities similar to those observed in mammals and birds. The mechanisms underlying the discrimination of small quantities (<4) were recently investigated while, to date, no study has examined the discrimination of large numerosities in fish. Subjects were trained to discriminate between two sets of small geometric figures using social reinforcement. In the first experiment mosquitofish were required to discriminate 4 from 8 objects with or without experimental control of the continuous variables that co-vary with number (area, space, density, total luminance). Results showed that fish can use numerical information alone to compare quantities, but that they preferentially use cumulative surface area as a proxy for number when this information is available. A second experiment investigated the influence of the total number of elements on the discrimination of large quantities. Fish proved able to discriminate up to 100 vs. 200 objects, without showing any significant decrease in accuracy compared with the 4 vs. 8 discrimination. The third experiment investigated the influence of the ratio between the numerosities. Performance was found to decrease when decreasing the numerical distance. Fish were able to discriminate numbers when ratios were 1:2 or 2:3 but not when the ratio was 3:4. The performance of a sample of undergraduate students, tested non-verbally using the same sets of stimuli, largely overlapped that of fish. Fish are able to use pure numerical information when discriminating between quantities larger than 4 units. As observed in human and non-human primates, the numerical system of fish appears to have virtually no upper limit, while the numerical ratio has a clear effect on performance. These similarities further reinforce the view of a common origin of non-verbal numerical systems in all vertebrates.

  12. Wall-Resolved Large-Eddy Simulation of Flow Separation Over NASA Wall-Mounted Hump

    NASA Technical Reports Server (NTRS)

    Uzun, Ali; Malik, Mujeeb R.

    2017-01-01

    This paper reports the findings from a study that applies wall-resolved large-eddy simulation to investigate flow separation over the NASA wall-mounted hump geometry. Despite its conceptually simple flow configuration, this benchmark problem has proven to be a challenging test case for various turbulence simulation methods that have attempted to predict flow separation arising from the adverse pressure gradient on the aft region of the hump. The momentum-thickness Reynolds number of the incoming boundary layer has a value that is near the upper limit achieved by recent direct numerical simulation and large-eddy simulation of incompressible turbulent boundary layers. The high Reynolds number of the problem necessitates a significant number of grid points for wall-resolved calculations. The present simulations show a significant improvement in the separation-bubble length prediction compared to Reynolds-Averaged Navier-Stokes calculations. The current simulations also provide good overall prediction of the skin-friction distribution, including the relaminarization observed over the front portion of the hump due to the strong favorable pressure gradient. We discuss a number of problems that were encountered during the course of this work and present possible solutions. A systematic study regarding the effect of domain span, subgrid-scale model, tunnel back pressure, upstream boundary layer conditions and grid refinement is performed. The predicted separation-bubble length is found to be sensitive to the span of the domain. Despite the large number of grid points used in the simulations, some differences between the predictions and experimental observations still exist (particularly for Reynolds stresses) in the case of the wide-span simulation, suggesting that additional grid resolution may be required.

  13. Clinical research in Finland in 2002 and 2007: quantity and type

    PubMed Central

    2013-01-01

    Background Regardless of worries over clinical research and various initiatives to overcome problems, few quantitative data on the numbers and type of clinical research exist. This article aims to describe the volume and type of clinical research in 2002 and 2007 in Finland. Methods The research law in Finland requires all medical research to be submitted to regional ethics committees (RECs). Data from all new projects in 2002 and 2007 were collected from REC files and the characteristics of clinical projects (76% of all submissions) were analyzed. Results The number of clinical projects was large, but declining: 794 in 2002 and 762 in 2007. Drug research (mainly trials) represented 29% and 34% of the clinical projects; their total number had not declined, but those without a commercial sponsor had. The number of different principal investigators was large (630 and 581). Most projects were observational, while an experimental design was used in 43% of projects. Multi-center studies were common. In half of the projects, the main funder was health care or was done as unpaid work; 31% had industry funding as the main source. There was a clear difference in the type of research by sponsorship. Industry-funded research was largely drug research, international multi-center studies, with randomized controlled or other experimental design. The findings for the two years were similar, but a university hospital as the main research site became less common between 2002 and 2007. Conclusions Clinical research projects were common, but numbers are declining; research was largely funded by health care, with many physicians involved. Drug trials were a minority, even though most research promotion efforts and regulation concerns them. PMID:23680289

  14. Clinical research in Finland in 2002 and 2007: quantity and type.

    PubMed

    Hemminki, Elina; Virtanen, Jorma; Veerus, Piret; Regushevskaya, Elena

    2013-05-16

    Regardless of worries over clinical research and various initiatives to overcome problems, few quantitative data on the numbers and type of clinical research exist. This article aims to describe the volume and type of clinical research in 2002 and 2007 in Finland. The research law in Finland requires all medical research to be submitted to regional ethics committees (RECs). Data from all new projects in 2002 and 2007 were collected from REC files and the characteristics of clinical projects (76% of all submissions) were analyzed. The number of clinical projects was large, but declining: 794 in 2002 and 762 in 2007. Drug research (mainly trials) represented 29% and 34% of the clinical projects; their total number had not declined, but those without a commercial sponsor had. The number of different principal investigators was large (630 and 581). Most projects were observational, while an experimental design was used in 43% of projects. Multi-center studies were common. In half of the projects, the main funder was health care or was done as unpaid work; 31% had industry funding as the main source. There was a clear difference in the type of research by sponsorship. Industry-funded research was largely drug research, international multi-center studies, with randomized controlled or other experimental design. The findings for the two years were similar, but a university hospital as the main research site became less common between 2002 and 2007. Clinical research projects were common, but numbers are declining; research was largely funded by health care, with many physicians involved. Drug trials were a minority, even though most research promotion efforts and regulation concerns them.

  15. Large-eddy simulation of wind turbine wake interactions on locally refined Cartesian grids

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2014-11-01

    Performing high-fidelity numerical simulations of turbulent flow in wind farms remains a challenging issue mainly because of the large computational resources required to accurately simulate the turbine wakes and turbine/turbine interactions. The discretization of the governing equations on structured grids for mesoscale calculations may not be the most efficient approach for resolving the large disparity of spatial scales. A 3D Cartesian grid refinement method enabling the efficient coupling of the Actuator Line Model (ALM) with locally refined unstructured Cartesian grids adapted to accurately resolve tip vortices and multi-turbine interactions, is presented. Second order schemes are employed for the discretization of the incompressible Navier-Stokes equations in a hybrid staggered/non-staggered formulation coupled with a fractional step method that ensures the satisfaction of local mass conservation to machine zero. The current approach enables multi-resolution LES of turbulent flow in multi-turbine wind farms. The numerical simulations are in good agreement with experimental measurements and are able to resolve the rich dynamics of turbine wakes on grids containing only a small fraction of the grid nodes that would be required in simulations without local mesh refinement. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the National Science Foundation under Award number NSF PFI:BIC 1318201.

  16. An Efficient and Versatile Means for Assembling and Manufacturing Systems in Space

    NASA Technical Reports Server (NTRS)

    Dorsey, John T.; Doggett, William R.; Hafley, Robert A.; Komendera, Erik; Correll, Nikolaus; King, Bruce

    2012-01-01

    Within NASA Space Science, Exploration and the Office of Chief Technologist, there are Grand Challenges and advanced future exploration, science and commercial mission applications that could benefit significantly from large-span and large-area structural systems. Of particular and persistent interest to the Space Science community is the desire for large (in the 10- 50 meter range for main aperture diameter) space telescopes that would revolutionize space astronomy. Achieving these systems will likely require on-orbit assembly, but previous approaches for assembling large-scale telescope truss structures and systems in space have been perceived as very costly because they require high precision and custom components. These components rely on a large number of mechanical connections and supporting infrastructure that are unique to each application. In this paper, a new assembly paradigm that mitigates these concerns is proposed and described. A new assembly approach, developed to implement the paradigm, is developed incorporating: Intelligent Precision Jigging Robots, Electron-Beam welding, robotic handling/manipulation, operations assembly sequence and path planning, and low precision weldable structural elements. Key advantages of the new assembly paradigm, as well as concept descriptions and ongoing research and technology development efforts for each of the major elements are summarized.

  17. The cost of large numbers of hypothesis tests on power, effect size and sample size.

    PubMed

    Lazzeroni, L C; Ray, A

    2012-01-01

    Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands that can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level, with comparatively small increases in the effect size or sample size. For example at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
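
    A quick way to see where those figures come from is a Bonferroni-corrected two-sided z-test under a normal approximation (an assumption made here for illustration; the paper's exact framework may differ in detail):

    ```python
    # Relative sample size needed to keep power fixed as the number of tests grows,
    # using a Bonferroni-corrected alpha and a normal approximation (illustrative).
    from statistics import NormalDist

    def relative_n(num_tests, alpha=0.05, power=0.80):
        """Sample size relative to a single two-sided z-test at the same power."""
        z = NormalDist().inv_cdf
        z_beta = z(power)
        return ((z(1 - alpha / (2 * num_tests)) + z_beta) / (z(1 - alpha / 2) + z_beta)) ** 2

    print(relative_n(10))                                  # ~1.70: ~70% more samples for 10 tests vs 1
    print(relative_n(10_000_000) / relative_n(1_000_000))  # ~1.13: only ~13% more for 10x the tests
    ```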

  18. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2015-08-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.

  19. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Cuntz, Matthias; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2016-04-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.
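
    The sketch below illustrates the general idea of one-at-a-time parameter screening, in which parameters whose perturbation barely changes the model output are flagged as non-informative. It is a minimal illustration under simplified assumptions, not the authors' exact sequential algorithm; the toy model, perturbation size, and threshold are hypothetical.

    ```python
    # Minimal one-at-a-time screening sketch (illustrative; not the authors'
    # exact sequential method). Parameters whose perturbation barely changes the
    # model output are flagged as non-informative and excluded from calibration.
    import numpy as np

    def screen_parameters(model, x0, delta=0.05, threshold=1e-3, repeats=5, seed=0):
        """Return indices of parameters judged informative for `model(x)`."""
        rng = np.random.default_rng(seed)
        n = len(x0)
        effects = np.zeros(n)
        for _ in range(repeats):                      # roughly repeats*(n+1) model runs
            base = x0 * (1 + 0.1 * rng.standard_normal(n))
            y_base = model(base)
            for i in range(n):
                x = base.copy()
                x[i] *= 1 + delta                     # perturb one parameter at a time
                effects[i] += abs(model(x) - y_base)
        effects /= repeats
        return np.flatnonzero(effects > threshold * max(effects.max(), 1e-300))

    # Toy model in which only the first three of ten parameters matter.
    toy = lambda x: x[0] ** 2 + 2 * x[1] + np.sin(x[2])
    print(screen_parameters(toy, np.ones(10)))        # -> [0 1 2]
    ```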

  20. PIK3CA mutant tumors depend on oxoglutarate dehydrogenase | Office of Cancer Genomics

    Cancer.gov

    Oncogenic PIK3CA mutations are found in a significant fraction of human cancers, but therapeutic inhibition of PI3K has only shown limited success in clinical trials. To understand how mutant PIK3CA contributes to cancer cell proliferation, we used genome scale loss-of-function screening in a large number of genomically annotated cancer cell lines. As expected, we found that PIK3CA mutant cancer cells require PIK3CA but also require the expression of the TCA cycle enzyme 2-oxoglutarate dehydrogenase (OGDH).

  1. An adaptive vector quantization scheme

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.

    1990-01-01

    Vector quantization is known to be an effective compression scheme to achieve a low bit rate so as to minimize communication channel bandwidth and also to reduce digital memory storage while maintaining the necessary fidelity of the data. However, the large number of computations required in vector quantizers has been a handicap in using vector quantization for low-rate source coding. An adaptive vector quantization algorithm is introduced that is inherently suitable for simple hardware implementation because it has a simple architecture. It allows fast encoding and decoding because it requires only addition and subtraction operations.
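
    As a minimal illustration of the basic vector-quantization step (not the adaptive algorithm described above; the codebook below is hypothetical), the nearest codeword can be found with a sum-of-absolute-differences metric, so the inner loop needs only additions and subtractions:

    ```python
    # Minimal vector-quantization encoder sketch using a sum-of-absolute-differences
    # metric (illustrative only; not the adaptive scheme described in the abstract).
    def encode(vector, codebook):
        """Return the index of the codeword closest to `vector` under L1 distance."""
        best_index, best_cost = 0, float("inf")
        for i, codeword in enumerate(codebook):
            cost = sum(abs(a - b) for a, b in zip(vector, codeword))
            if cost < best_cost:
                best_index, best_cost = i, cost
        return best_index

    codebook = [(0, 0, 0, 0), (8, 8, 8, 8), (0, 8, 0, 8)]
    print(encode((1, 7, 0, 6), codebook))   # -> 2, the nearest codeword
    ```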

  2. Sensemaking in a Value Based Context for Large Scale Complex Engineered Systems

    NASA Astrophysics Data System (ADS)

    Sikkandar Basha, Nazareen

    The design and development of Large-Scale Complex Engineered Systems (LSCES) requires the involvement of multiple teams, numerous levels of the organization, and interactions with large numbers of people and interdisciplinary departments. Traditionally, requirements-driven Systems Engineering (SE) is used in the design and development of these LSCES, with requirements capturing the stakeholders' preferences for the system. Due to the complexity of the system, multiple levels of interaction within the organization are required to elicit its requirements. Since LSCES involve people and interactions between teams and interdisciplinary departments, they are socio-technical in nature. Requirements elicitation in most large-scale system projects is subject to creep in time and cost, owing to the uncertainty and ambiguity of requirements during design and development. In an organizational structure, cost and time overruns can occur at any level and iterate back and forth, further increasing cost and time. Past research has shown that rigorous approaches such as value-based design can be used to control such creep, but before these approaches can be applied the decision maker needs a proper understanding of requirements creep and of the state of the system when the creep occurs. Sensemaking is used to understand the state of the system when creep occurs and to guide the decision maker. This research proposes the use of the Cynefin framework, a sensemaking framework, in the design and development of LSCES. It can aid in understanding the system and in decision making, minimizing the value gap caused by requirements creep by reducing the ambiguity that arises during design and development. A sample hierarchical organization is used to demonstrate the state of the system, in terms of cost and time, when requirements creep occurs under the Cynefin framework. These trials are repeated for different requirements and at different sub-system levels. The results show that the Cynefin framework can be used to improve the value of the system and for predictive analysis. Decision makers can use these findings together with rigorous approaches to improve the design of Large-Scale Complex Engineered Systems.

  3. Discussion about photodiode architectures for space applications

    NASA Astrophysics Data System (ADS)

    Gravrand, O.; Destefanis, G.; Cervera, C.; Zanatta, J.-P.; Baier, N.; Ferron, A.; Boulade, O.

    2017-11-01

    Detection for space applications is very demanding on the IR detector: all wavelengths, from visible-NIR (2-3 um cutoff) to LWIR (10-12.5 um cutoff), and sometimes even VLWIR (15 um cutoff), may be of interest. Moreover, various scenarios are usually considered. Some are imaging applications, where the focal plane array (FPA) is used as an optical element to sense an image. However, the FPA may also be used in spectrometric applications, where light is directed onto different pixels depending on its wavelength. In some cases, star pointing is another use of FPAs, where the retina is used to sense the position of the satellite. In all those configurations, we might distinguish several categories of applications: • low flux applications, where the FPA is staring at space and the detection occurs with only a small number of photons; • high flux applications, where the FPA is usually staring at the Earth, and the black body emission of the Earth and its atmosphere usually ensures a large number of photons to perform the detection. These two categories strongly drive the detector design, as they usually determine the dark current and quantum efficiency (QE) requirements. Indeed, high detection performance usually requires a large number of integrated photons, so that high QE is needed for low flux applications in order to limit the integration time as much as possible. Moreover, the dark current requirement is also directly linked to the expected incoming flux, in order to limit as much as possible the SNR degradation due to dark charges versus photocharges. Note that in most cases this dark current depends strongly on the operating temperature, which dominates detector power consumption. A classical way to mitigate dark current is to cool the detector to very low temperatures. This paper does not discuss wavefront sensing, where the number of detected photons is low because of a very narrow integration window; strictly speaking, this is a low flux application, but the need for speed distinguishes it from other low flux applications, as it usually requires a different ROIC architecture and a photodiode optimized for fast response.

  4. The NASA Space Launch System Program Systems Engineering Approach for Affordability

    NASA Technical Reports Server (NTRS)

    Hutt, John J.; Whitehead, Josh; Hanson, John

    2017-01-01

    The National Aeronautics and Space Administration is currently developing the Space Launch System to provide the United States with a capability to launch large Payloads into Low Earth orbit and deep space. One of the development tenets of the SLS Program is affordability. One initiative to enhance affordability is the SLS approach to requirements definition, verification and system certification. The key aspects of this initiative include: 1) Minimizing the number of requirements, 2) Elimination of explicit verification requirements, 3) Use of certified models of subsystem capability in lieu of requirements when appropriate and 4) Certification of capability beyond minimum required capability. Implementation of each aspect is described and compared to a "typical" systems engineering implementation, including a discussion of relative risk. Examples of each implementation within the SLS Program are provided.

  5. The Global Ozone and Aerosol Profiles and Aerosol Hygroscopic Effect and Absorption Optical Depth (GOA2HEAD) Network Initiative

    NASA Astrophysics Data System (ADS)

    Gao, R. S.; Elkins, J. W.; Frost, G. J.; McComiskey, A. C.; Murphy, D. M.; Ogren, J. A.; Petropavlovskikh, I. V.; Rosenlof, K. H.

    2014-12-01

    Inverse modeling using measurements of ozone (O3) and aerosol is a powerful tool for deriving pollutant emissions. Because they have relatively long lifetimes, O3 and aerosol are transported over large distances. Frequent and globally spaced vertical profiles rather than ground-based measurements alone are therefore highly desired. Three requirements necessary for a successful global monitoring program are: Low equipment cost, low operation cost, and reliable measurements of known uncertainty. Conventional profiling using aircraft provides excellent data, but is cost prohibitive on a large scale. Here we describe a new platform and instruments meeting all three global monitoring requirements. The platform consists of a small balloon and an auto-homing glider. The glider is released from the balloon at about 5 km altitude, returning the light instrument package to the launch location, and allowing for consistent recovery of the payload. Atmospheric profiling can be performed either during ascent or descent (or both) depending on measurement requirements. We will present the specifications for two instrument packages currently under development. The first measures O3, RH, p, T, dry aerosol particle number and size distribution, and aerosol optical depth. The second measures dry aerosol particle number and size distribution, and aerosol absorption coefficient. Other potential instrument packages and the desired spatial/temporal resolution for the GOA2HEAD monitoring initiative will also be discussed.

  6. Air Layer Drag Reduction

    NASA Astrophysics Data System (ADS)

    Ceccio, Steven; Elbing, Brian; Winkel, Eric; Dowling, David; Perlin, Marc

    2008-11-01

    A set of experiments has been conducted at the US Navy's Large Cavitation Channel to investigate skin-friction drag reduction with the injection of air into a high Reynolds number turbulent boundary layer. Testing was performed on a 12.9 m long flat-plate test model with the surface either hydraulically smooth or fully rough, at downstream-distance-based Reynolds numbers up to 220 million and at speeds up to 20 m/s. Local skin-friction, near-wall bulk void fraction, and near-wall bubble imaging were monitored along the length of the model. The instrument suite was used to assess the requirements necessary to achieve air layer drag reduction (ALDR). Injection of air over a wide range of air fluxes showed that three drag reduction regimes exist: (1) bubble drag reduction, which has poor downstream persistence; (2) a transitional regime with a steep rise in drag reduction; and (3) an ALDR regime where the drag reduction plateaus at 90% ± 10% over the entire model length, with large void fractions in the near-wall region. These investigations revealed several requirements for ALDR: sufficient volumetric air fluxes that increase approximately with the square of the free-stream speed; slightly higher air fluxes when the surface tension is reduced; higher air fluxes for rough surfaces; and sensitivity of ALDR formation to the inlet condition.

  7. 2014 Summer Series - Ethiraj Venkatapathy - Mary Poppins Approach to Human Mars Mission Entry, Descent and Landing

    NASA Image and Video Library

    2014-06-17

    NASA is investing in a number of technologies to extend Entry, Descent and Landing (EDL) capabilities to enable human missions to Mars. These technologies will also enable robotic science missions. Human missions will require landing payloads of tens of metric tons, which is not possible with today's technology. Decelerating from entry speeds of around 15,000 miles per hour to landing in a matter of minutes will require very large drag, or deceleration. One way to achieve the required deceleration is to deploy a large surface that can be stowed during launch and deployed prior to entry. This talk will highlight a simple concept similar to an umbrella. Though the concept is simple, the size required for human Mars missions and the heating encountered during entry are significant challenges. The mechanically deployable system can also enable robotic science missions to Venus and is equally applicable to bringing back cube-satellites and other small payloads. The scalable concept, called Adaptive Deployable Entry and Placement Technology (ADEPT), is under development and is the focus of this talk.

  8. Watermarking-based protection of remote sensing images: requirements and possible solutions

    NASA Astrophysics Data System (ADS)

    Barni, Mauro; Bartolini, Franco; Cappellini, Vito; Magli, Enrico; Olmo, Gabriella

    2001-12-01

    Earth observation missions have recently attracted a growing interest from the scientific and industrial communities, mainly due to the large number of possible applications capable of exploiting remotely sensed data and images. Along with the increase in market potential, the need arises for the protection of the image products from non-authorized use. Such a need is crucial, especially because the Internet and other public/private networks have become preferred means of data exchange. A central issue arising when dealing with digital image distribution is copyright protection. Such a problem has been largely addressed by resorting to watermarking technology. A question that obviously arises is whether the requirements imposed by remote sensing imagery are compatible with existing watermarking techniques. On the basis of these motivations, the contribution of this work is twofold: i) assessment of the requirements imposed by the characteristics of remotely sensed images on watermark-based copyright protection; ii) analysis of the state of the art, and performance evaluation of existing algorithms, in terms of the requirements at the previous point.

  9. Computational model for chromosomal instability

    NASA Astrophysics Data System (ADS)

    Zapperi, Stefano; Bertalan, Zsolt; Budrikis, Zoe; La Porta, Caterina

    2015-03-01

    Faithful segregation of genetic material during cell division requires alignment of the chromosomes between the spindle poles and attachment of their kinetochores to each of the poles. Failure of these complex dynamical processes leads to chromosomal instability (CIN), a characteristic feature of several diseases including cancer. While a multitude of biological factors regulating chromosome congression and bi-orientation have been identified, it is still unclear how they are integrated into a coherent picture. Here we address this issue with a three-dimensional computational model of motor-driven chromosome congression and bi-orientation. Our model reveals that successful cell division requires control of the total number of microtubules: if this number is too small, bi-orientation fails, while if it is too large, not all the chromosomes are able to congress. The optimal number of microtubules predicted by our model compares well with early observations in mammalian cell spindles. Our results shed new light on the origin of several pathological conditions related to chromosomal instability.

  10. Accuracy and Precision of Radioactivity Quantification in Nuclear Medicine Images

    PubMed Central

    Frey, Eric C.; Humm, John L.; Ljungberg, Michael

    2012-01-01

    The ability to reliably quantify activity in nuclear medicine has a number of increasingly important applications. Dosimetry for targeted therapy treatment planning or for approval of new imaging agents requires accurate estimation of the activity in organs, tumors, or voxels at several imaging time points. Another important application is the use of quantitative metrics derived from images, such as the standard uptake value commonly used in positron emission tomography (PET), to diagnose and follow treatment of tumors. These measures require quantification of organ or tumor activities in nuclear medicine images. However, there are a number of physical, patient, and technical factors that limit the quantitative reliability of nuclear medicine images. There have been a large number of improvements in instrumentation, including the development of hybrid single-photon emission computed tomography/computed tomography and PET/computed tomography systems, and reconstruction methods, including the use of statistical iterative reconstruction methods, which have substantially improved the ability to obtain reliable quantitative information from planar, single-photon emission computed tomography, and PET images. PMID:22475429

  11. Cortical responses following simultaneous and sequential retinal neurostimulation with different return configurations.

    PubMed

    Barriga-Rivera, Alejandro; Morley, John W; Lovell, Nigel H; Suaning, Gregg J

    2016-08-01

    Researchers continue to develop visual prostheses towards safer and more efficacious systems. However limitations still exist in the number of stimulating channels that can be integrated. Therefore there is a need for spatial and time multiplexing techniques to provide improved performance of the current technology. In particular, bright and high-contrast visual scenes may require simultaneous activation of several electrodes. In this research, a 24-electrode array was suprachoroidally implanted in three normally-sighted cats. Multi-unit activity was recorded from the primary visual cortex. Four stimulation strategies were contrasted to provide activation of seven electrodes arranged hexagonally: simultaneous monopolar, sequential monopolar, sequential bipolar and hexapolar. Both monopolar configurations showed similar cortical activation maps. Hexapolar and sequential bipolar configurations activated a lower number of cortical channels. Overall, the return configuration played a more relevant role in cortical activation than time multiplexing and thus, rapid sequential stimulation may assist in reducing the number of channels required to activate large retinal areas.

  12. Efficient Bayesian mixed model analysis increases association power in large cohorts

    PubMed Central

    Loh, Po-Ru; Tucker, George; Bulik-Sullivan, Brendan K; Vilhjálmsson, Bjarni J; Finucane, Hilary K; Salem, Rany M; Chasman, Daniel I; Ridker, Paul M; Neale, Benjamin M; Berger, Bonnie; Patterson, Nick; Price, Alkes L

    2014-01-01

    Linear mixed models are a powerful statistical tool for identifying genetic associations and avoiding confounding. However, existing methods are computationally intractable in large cohorts, and may not optimize power. All existing methods require time cost O(MN^2) (where N = #samples and M = #SNPs) and implicitly assume an infinitesimal genetic architecture in which effect sizes are normally distributed, which can limit power. Here, we present a far more efficient mixed model association method, BOLT-LMM, which requires only a small number of O(MN)-time iterations and increases power by modeling more realistic, non-infinitesimal genetic architectures via a Bayesian mixture prior on marker effect sizes. We applied BOLT-LMM to nine quantitative traits in 23,294 samples from the Women’s Genome Health Study (WGHS) and observed significant increases in power, consistent with simulations. Theory and simulations show that the boost in power increases with cohort size, making BOLT-LMM appealing for GWAS in large cohorts. PMID:25642633

  13. Evaluation of the eigenvalue method in the solution of transient heat conduction problems

    NASA Astrophysics Data System (ADS)

    Landry, D. W.

    1985-01-01

    The eigenvalue method is evaluated to determine the advantages and disadvantages of the method as compared to fully explicit, fully implicit, and Crank-Nicolson methods. Time comparisons and accuracy comparisons are made in an effort to rank the eigenvalue method in relation to the comparison schemes. The eigenvalue method is used to solve the parabolic heat equation in multidimensions with transient temperatures. Extensions into three dimensions are made to determine the method's feasibility in handling large geometry problems requiring great numbers of internal mesh points. The eigenvalue method proves to be slightly better in accuracy than the comparison routines because of an exact treatment, as opposed to a numerical approximation, of the time derivative in the heat equation. It has the potential of being a very powerful routine in solving long transient type problems. The method is not well suited to finely meshed grid arrays or large regions because of the time and memory requirements necessary for calculating large sets of eigenvalues and eigenvectors.
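
    A minimal numpy sketch of the idea, assuming a 1D rod with Dirichlet boundaries (the grid size and initial profile below are arbitrary choices for illustration): the spatially discretized heat equation is diagonalized once, after which the temperature at any time follows directly from exp(λt), i.e. the time dependence is treated exactly rather than marched step by step.

    ```python
    # Sketch of the eigenvalue (modal) approach for the 1D heat equation
    # u_t = alpha * u_xx with Dirichlet boundaries (illustration of the idea only).
    import numpy as np

    alpha, length, n = 1.0, 1.0, 50           # diffusivity, domain length, interior nodes
    dx = length / (n + 1)
    x = np.linspace(dx, length - dx, n)

    # Second-difference (Laplacian) matrix scaled by alpha/dx^2.
    A = alpha / dx**2 * (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
                         + np.diag(np.ones(n - 1), -1))

    lam, V = np.linalg.eigh(A)                # eigenvalues (negative) and eigenvectors
    u0 = np.sin(np.pi * x)                    # initial temperature profile
    c = V.T @ u0                              # expand the initial condition in eigenmodes

    def u(t):
        """Temperature profile at time t, exact in time for the discretized system."""
        return V @ (np.exp(lam * t) * c)

    # The first mode decays like exp(-alpha * pi^2 * t); compare with the analytic rate.
    print(u(0.1).max(), np.exp(-np.pi**2 * 0.1))
    ```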

  14. The AzTEC millimeter-wave camera: Design, integration, performance, and the characterization of the (sub-)millimeter galaxy population

    NASA Astrophysics Data System (ADS)

    Austermann, Jason Edward

    One of the primary drivers in the development of large format millimeter detector arrays is the study of sub-millimeter galaxies (SMGs) - a population of very luminous high-redshift dust-obscured starbursts that are widely believed to be the dominant contributor to the Far-Infrared Background (FIB). The characterization of such a population requires the ability to map large patches of the (sub-)millimeter sky to high sensitivity within a feasible amount of time. I present this dissertation on the design, integration, and characterization of the 144-pixel AzTEC millimeter-wave camera and its application to the study of the sub-millimeter galaxy population. In particular, I present an unprecedented characterization of the "blank-field" (fields with no known mass bias) SMG number counts by mapping over 0.5 deg^2 to 1.1mm depths of ~1mJy - a previously unattained depth on these scales. This survey provides the tightest SMG number counts available, particularly for the brightest and rarest SMGs that require large survey areas for a significant number of detections. These counts are compared to the predictions of various models of the evolving mm/sub-mm source population, providing important constraints for the ongoing refinement of semi-analytic and hydrodynamical models of galaxy formation. I also present the results of an AzTEC 0.15 deg^2 survey of the COSMOS field, which uncovers a significant over-density of bright SMGs that are spatially correlated to foreground mass structures, presumably as a result of gravitational lensing. Finally, I compare the results of the available SMG surveys completed to date and explore the effects of cosmic variance on the interpretation of individual surveys.

  15. Current and future worldwide prevalence of dependency, its relationship to total population, and dependency ratios.

    PubMed Central

    Harwood, Rowan H.; Sayer, Avan Aihie; Hirschfeld, Miriam

    2004-01-01

    OBJECTIVE: To estimate the number of people worldwide requiring daily assistance from another person in carrying out health, domestic or personal tasks. METHODS: Data from the Global Burden of Disease Study were used to calculate the prevalence of severe levels of disability, and consequently, to estimate dependency. Population projections were used to forecast changes over the next 50 years. FINDINGS: The greatest burden of dependency currently falls in sub-Saharan Africa, where the "dependency ratio" (ratio of dependent people to the population of working age) is about 10%, compared with 7-8% elsewhere. Large increases in prevalence are predicted in sub-Saharan Africa, the Middle East, Asia and Latin America of up to 5-fold or 6-fold in some cases. These increases will occur in the context of generally increasing populations, and dependency ratios will increase modestly to about 10%. The dependency ratio will increase more in China (14%) and India (12%) than in other areas with large prevalence increases. Established market economies, especially Europe and Japan, will experience modest increases in the prevalence of dependency (30%), and in the dependency ratio (up to 10%). Former Socialist economies of Europe will have static or declining numbers of dependent people, but will have large increases in the dependency ratio (up to 13%). CONCLUSION: Many countries will be greatly affected by the increasing number of dependent people and will need to identify the human and financial resources to support them. Much improved collection of data on disability and on the needs of caregivers is required. The prevention of disability and provision of support for caregivers needs greater priority. PMID:15259253

  16. Implementation and value of using a split-plot reader design in a study of digital breast tomosynthesis in a breast cancer assessment clinic

    NASA Astrophysics Data System (ADS)

    Mall, Suneeta; Brennan, Patrick C.; Mello-Thoms, Claudia

    2015-03-01

    The rapid evolution in medical imaging has led to an increased number of recurrent trials, primarily to ensure that the efficacy of new imaging techniques is known. The cost associated with time and resources in conducting such trials is usually high. The recruitment of participants in a medium to large reader study is often very challenging, as the demanding number of cases discourages involvement with the trial. We aim to evaluate the efficacy of Digital Breast Tomosynthesis (DBT) in a recall assessment clinic in Australia in a prospective multi-reader-multi-case (MRMC) trial. Conducting such a study with the more commonly used fully crossed MRMC study design would require more cases and more cases read per reader, which was not viable in our setting. Aiming for a cost-effective yet statistically efficient clinical trial, we evaluated alternative study designs, in particular the split-plot MRMC design, and compared and contrasted it with the more commonly used fully crossed MRMC design. Our results suggest that the split-plot, an alternative MRMC study design, could be very beneficial for medium to large clinical trials, and that the cost associated with conducting such trials can be greatly reduced without adversely affecting the variance of the study. We also noted an inverse dependency between the number of readers and the number of cases required to achieve a target variance. This suggests that the split-plot design could also be very beneficial for studies that focus on cases that are hard to procure or readers that are hard to recruit. We believe that our results may be relevant to other researchers seeking to design medium to large clinical trials.

  17. A scalable multi-photon coincidence detector based on superconducting nanowires.

    PubMed

    Zhu, Di; Zhao, Qing-Yuan; Choi, Hyeongrak; Lu, Tsung-Ju; Dane, Andrew E; Englund, Dirk; Berggren, Karl K

    2018-06-04

    Coincidence detection of single photons is crucial in numerous quantum technologies and usually requires multiple time-resolved single-photon detectors. However, the electronic readout becomes a major challenge when the measurement basis scales to large numbers of spatial modes. Here, we address this problem by introducing a two-terminal coincidence detector that enables scalable readout of an array of detector segments based on superconducting nanowire microstrip transmission line. Exploiting timing logic, we demonstrate a sixteen-element detector that resolves all 136 possible single-photon and two-photon coincidence events. We further explore the pulse shapes of the detector output and resolve up to four-photon events in a four-element device, giving the detector photon-number-resolving capability. This new detector architecture and operating scheme will be particularly useful for multi-photon coincidence detection in large-scale photonic integrated circuits.
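
    The 136 figure follows directly from counting the distinguishable patterns on a sixteen-segment array: N single-photon events plus C(N, 2) two-photon coincidences, as the quick check below shows.

    ```python
    # Quick check of the counting in the abstract: an N-segment array distinguishes
    # N single-photon events plus C(N, 2) two-photon coincidence patterns.
    from math import comb

    def resolvable_events(n_segments):
        return n_segments + comb(n_segments, 2)

    print(resolvable_events(16))   # -> 136, as reported for the sixteen-element device
    ```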

  18. Evaluating the efficacy of a structure-derived amino acid substitution matrix in detecting protein homologs by BLAST and PSI-BLAST.

    PubMed

    Goonesekere, Nalin Cw

    2009-01-01

    The large numbers of protein sequences generated by whole genome sequencing projects require rapid and accurate methods of annotation. The detection of homology through computational sequence analysis is a powerful tool in determining the complex evolutionary and functional relationships that exist between proteins. Homology search algorithms employ amino acid substitution matrices to detect similarity between proteins sequences. The substitution matrices in common use today are constructed using sequences aligned without reference to protein structure. Here we present amino acid substitution matrices constructed from the alignment of a large number of protein domain structures from the structural classification of proteins (SCOP) database. We show that when incorporated into the homology search algorithms BLAST and PSI-blast, the structure-based substitution matrices enhance the efficacy of detecting remote homologs.
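
    A minimal sketch of how log-odds substitution scores are derived from counts of aligned residue pairs is shown below (the generic recipe behind BLOSUM-style matrices; the authors' SCOP-derived construction is more elaborate, and the toy alignment data and scale factor here are hypothetical):

    ```python
    # Minimal log-odds substitution-score sketch: scores compare the observed
    # frequency of an aligned residue pair with the frequency expected by chance
    # (generic recipe only; not the authors' SCOP-based pipeline).
    import math
    from collections import Counter

    def log_odds_matrix(aligned_pairs, scale=2.0):
        """Build scores s(a,b) = round(scale * log2(p_ab / expected_ab))."""
        pair_counts = Counter(tuple(sorted(pair)) for pair in aligned_pairs)
        total_pairs = sum(pair_counts.values())
        residue_counts = Counter()
        for (a, b), n in pair_counts.items():
            residue_counts[a] += n
            residue_counts[b] += n
        total_residues = sum(residue_counts.values())

        scores = {}
        for (a, b), n in pair_counts.items():
            p_ab = n / total_pairs
            q_a = residue_counts[a] / total_residues
            q_b = residue_counts[b] / total_residues
            expected = q_a * q_b * (1 if a == b else 2)  # two orderings for a != b
            scores[(a, b)] = round(scale * math.log2(p_ab / expected))
        return scores

    # Toy alignment: identities common, substitutions rare -> positive diagonal,
    # negative off-diagonal scores.
    pairs = [("A", "A")] * 40 + [("L", "L")] * 40 + [("A", "L")] * 20
    print(log_odds_matrix(pairs))
    ```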

  19. Contribution to terminology internationalization by word alignment in parallel corpora.

    PubMed

    Deléger, Louise; Merkel, Magnus; Zweigenbaum, Pierre

    2006-01-01

    Creating a complete translation of a large vocabulary is a time-consuming task, which requires skilled and knowledgeable medical translators. Our goal is to examine to which extent such a task can be alleviated by a specific natural language processing technique, word alignment in parallel corpora. We experiment with translation from English to French. Build a large corpus of parallel, English-French documents, and automatically align it at the document, sentence and word levels using state-of-the-art alignment methods and tools. Then project English terms from existing controlled vocabularies to the aligned word pairs, and examine the number and quality of the putative French translations obtained thereby. We considered three American vocabularies present in the UMLS with three different translation statuses: the MeSH, SNOMED CT, and the MedlinePlus Health Topics. We obtained several thousand new translations of our input terms, this number being closely linked to the number of terms in the input vocabularies. Our study shows that alignment methods can extract a number of new term translations from large bodies of text with a moderate human reviewing effort, and thus contribute to help a human translator obtain better translation coverage of an input vocabulary. Short-term perspectives include their application to a corpus 20 times larger than that used here, together with more focused methods for term extraction.

  20. Contribution to Terminology Internationalization by Word Alignment in Parallel Corpora

    PubMed Central

    Deléger, Louise; Merkel, Magnus; Zweigenbaum, Pierre

    2006-01-01

    Background and objectives Creating a complete translation of a large vocabulary is a time-consuming task, which requires skilled and knowledgeable medical translators. Our goal is to examine to which extent such a task can be alleviated by a specific natural language processing technique, word alignment in parallel corpora. We experiment with translation from English to French. Methods Build a large corpus of parallel, English-French documents, and automatically align it at the document, sentence and word levels using state-of-the-art alignment methods and tools. Then project English terms from existing controlled vocabularies to the aligned word pairs, and examine the number and quality of the putative French translations obtained thereby. We considered three American vocabularies present in the UMLS with three different translation statuses: the MeSH, SNOMED CT, and the MedlinePlus Health Topics. Results We obtained several thousand new translations of our input terms, this number being closely linked to the number of terms in the input vocabularies. Conclusion Our study shows that alignment methods can extract a number of new term translations from large bodies of text with a moderate human reviewing effort, and thus contribute to help a human translator obtain better translation coverage of an input vocabulary. Short-term perspectives include their application to a corpus 20 times larger than that used here, together with more focused methods for term extraction. PMID:17238328

  1. Medicine information leaflets for non-steroidal anti-inflammatory drugs in Thailand.

    PubMed

    Phueanpinit, Pacharaporn; Pongwecharak, Juraporn; Krska, Janet; Jarernsiripornkul, Narumol

    2016-02-01

    The importance of promoting the use of patient-oriented medicines leaflets is recognized in many countries. Leaflets should include basic information plus specific warnings, and be provided with all medicines, but there is little attempt at enforcement of these requirements in Thailand. To determine content and availability of Thai information leaflets for nonsteroidal anti-inflammatory drugs (NSAIDs). Leaflets for all NSAIDs available for purchase from 34 pharmacies in a large city were evaluated against a checklist, and the number of leaflets was assessed against the number of medicine packs available in each pharmacy. Of the 76 leaflets for ten different NSAIDs, 67 (88 %) were for locally manufactured products. Only 22 % of the 76 leaflets were sufficient in number for distribution with medicines, and only 4 % were patient-oriented. No leaflet covered all topics in the checklist. Less than half included safety information, such as contraindications (46 %), precautions (47 %), and adverse drug reactions (34 %). Locally produced leaflets provided less information than those for originator products, and no leaflet included all the warnings required by Thai regulations. This study illustrates the variable availability and quality of NSAID information leaflets. The lack of accessible essential information about medicines in Thailand requires urgent attention to enable patients to minimise adverse reactions.

  2. Scalability improvements to NRLMOL for DFT calculations of large molecules

    NASA Astrophysics Data System (ADS)

    Diaz, Carlos Manuel

    Advances in high performance computing (HPC) have provided a way to treat large, computationally demanding tasks using thousands of processors. With the development of more powerful HPC architectures, the need to create efficient and scalable code has grown more important. Electronic structure calculations are valuable in understanding experimental observations and are routinely used for new materials predictions. For electronic structure calculations, memory and computation time grow with the number of atoms; memory requirements scale as N², where N is the number of atoms. While the recent advances in HPC offer platforms with large numbers of cores, the limited amount of memory available on a given node and the poor scalability of the electronic structure code hinder the efficient usage of these platforms. This thesis will present some developments to overcome these bottlenecks in order to study large systems. These developments, which are implemented in the NRLMOL electronic structure code, involve the use of sparse matrix storage formats and of linear algebra on sparse and distributed matrices. These developments, along with other related work, now allow ground state density functional calculations using up to 25,000 basis functions and excited state calculations using up to 17,000 basis functions while utilizing all cores on a node. An example on a light-harvesting triad molecule is described. Finally, future plans to further improve the scalability will be presented.
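
    The sparse-storage idea mentioned above can be illustrated with a small sketch; it uses SciPy's CSR format purely for illustration and says nothing about NRLMOL's actual implementation.

      import numpy as np
      from scipy.sparse import csr_matrix

      # A mostly-zero "Hamiltonian-like" matrix: dense storage grows as N^2,
      # while CSR stores only the non-zero entries plus index arrays.
      n = 1000
      dense = np.zeros((n, n))
      idx = np.random.default_rng(0).integers(0, n, size=(5000, 2))
      dense[idx[:, 0], idx[:, 1]] = 1.0
      dense = dense + dense.T          # keep it symmetric, as overlap/Hamiltonian matrices are

      sparse = csr_matrix(dense)
      print(f"dense:  {dense.nbytes / 1e6:.1f} MB")
      print(f"sparse: {(sparse.data.nbytes + sparse.indices.nbytes + sparse.indptr.nbytes) / 1e6:.2f} MB")

      # Matrix-vector products work the same way on the sparse form.
      v = np.ones(n)
      assert np.allclose(dense @ v, sparse @ v)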

  3. How many physicians will be needed to provide medical care for older persons? Physician manpower needs for the twenty-first century.

    PubMed

    Reuben, D B; Zwanziger, J; Bradley, T B; Fink, A; Hirsch, S H; Williams, A P; Solomon, D H; Beck, J C

    1993-04-01

    To estimate the number of full-time-equivalent (FTE) physicians and geriatricians needed to provide medical care in the years 2000 to 2030, we developed utilization-based models of need for non-surgical physicians and need for geriatricians. Based on projected utilization, the number of FTE physicians required to care for the elderly will increase two- or threefold over the next 40 years. Alternate economic scenarios have very little effect on estimates of FTE physicians needed but exert large effects on the projected number of FTE geriatricians needed. We conclude that during the years 2000 to 2030, population growth will be the major factor determining the number of physicians needed to provide medical care; economic forces will have a greater influence on the number of geriatricians needed.

  4. Attentional bias induced by solving simple and complex addition and subtraction problems.

    PubMed

    Masson, Nicolas; Pesenti, Mauro

    2014-01-01

    The processing of numbers has been shown to induce shifts of spatial attention in simple probe detection tasks, with small numbers orienting attention to the left and large numbers to the right side of space. Recently, the investigation of this spatial-numerical association has been extended to mental arithmetic with the hypothesis that solving addition or subtraction problems may induce attentional displacements (to the right and to the left, respectively) along a mental number line onto which the magnitude of the numbers would range from left to right, from small to large numbers. Here we investigated such attentional shifts using a target detection task primed by arithmetic problems in healthy participants. The constituents of the addition and subtraction problems (first operand; operator; second operand) were flashed sequentially in the centre of a screen, then followed by a target on the left or the right side of the screen, which the participants had to detect. This paradigm was employed with arithmetic facts (Experiment 1) and with more complex arithmetic problems (Experiment 2) in order to assess the effects of the operation, the magnitude of the operands, the magnitude of the results, and the presence or absence of a requirement for the participants to carry or borrow numbers. The results showed that arithmetic operations induce some spatial shifts of attention, possibly through a semantic link between the operation and space.

  5. Semi-automatic semantic annotation of PubMed Queries: a study on quality, efficiency, satisfaction

    PubMed Central

    Névéol, Aurélie; Islamaj-Doğan, Rezarta; Lu, Zhiyong

    2010-01-01

    Information processing algorithms require significant amounts of annotated data for training and testing. The availability of such data is often hindered by the complexity and high cost of production. In this paper, we investigate the benefits of a state-of-the-art tool to help with the semantic annotation of a large set of biomedical information queries. Seven annotators were recruited to annotate a set of 10,000 PubMed® queries with 16 biomedical and bibliographic categories. About half of the queries were annotated from scratch, while the other half were automatically pre-annotated and manually corrected. The impact of the automatic pre-annotations was assessed on several aspects of the task: time, number of actions, annotator satisfaction, inter-annotator agreement, quality and number of the resulting annotations. The analysis of annotation results showed that the number of required hand annotations is 28.9% less when using pre-annotated results from automatic tools. As a result, the overall annotation time was substantially lower when pre-annotations were used, while inter-annotator agreement was significantly higher. In addition, there was no statistically significant difference in the semantic distribution or number of annotations produced when pre-annotations were used. The annotated query corpus is freely available to the research community. This study shows that automatic pre-annotations are found helpful by most annotators. Our experience suggests using an automatic tool to assist large-scale manual annotation projects. This helps speed-up the annotation time and improve annotation consistency while maintaining high quality of the final annotations. PMID:21094696

  6. Optical fiber designs for beam shaping

    NASA Astrophysics Data System (ADS)

    Farley, Kevin; Conroy, Michael; Wang, Chih-Hao; Abramczyk, Jaroslaw; Campbell, Stuart; Oulundsen, George; Tankala, Kanishka

    2014-03-01

    A large number of power delivery applications for optical fibers require beams with very specific output intensity profiles; in particular, applications that require a focused high intensity beam typically image the near field (NF) intensity distribution at the exit surface of an optical fiber. In this work we discuss optical fiber designs that shape the output beam profile to correspond more closely to what is required in many real-world industrial applications. Specifically, we present results demonstrating the ability to transform Gaussian beams into the shapes required for industrial applications and how that relates to system parameters such as beam parameter product (BPP) values. We report on how different waveguide structures perform in the NF and show results on how to achieve flat-top beams with circular outputs.

  7. Impact of the Tumor Microenvironment on Tumor-Infiltrating Lymphocytes: Focus on Breast Cancer

    PubMed Central

    Cohen, Ivan J; Blasberg, Ronald

    2017-01-01

    Immunotherapy is revolutionizing cancer care across disciplines. The original success of immune checkpoint blockade in melanoma has already been translated to Food and Drug Administration–approved therapies in a number of other cancers, and a large number of clinical trials are underway in many other disease types, including breast cancer. Here, we review the basic requirements for a successful antitumor immune response, with a focus on the metabolic and physical barriers encountered by lymphocytes entering breast tumors. We also review recent clinical trials of immunotherapy in breast cancer and provide a number of interesting questions that will need to be answered for successful breast cancer immunotherapy. PMID:28979132

  8. Future-oriented maintenance strategy based on automated processes is finding its way into large astronomical facilities at remote observing sites

    NASA Astrophysics Data System (ADS)

    Silber, Armin; Gonzalez, Christian; Pino, Francisco; Escarate, Patricio; Gairing, Stefan

    2014-08-01

    With the expanding sizes and increasing complexity of large astronomical observatories at remote observing sites, the call for an efficient, resource-saving maintenance concept becomes louder. The increasing number of subsystems on telescopes and instruments forces large observatories, as in industry, to rethink conventional maintenance strategies to reach this demanding goal. The implementation of fully or semi-automatic processes for standard service activities can help to keep the number of operating staff at an efficient level and to significantly reduce the consumption of valuable consumables and equipment. In this contribution we demonstrate, using the example of the 80 cryogenic subsystems of the ALMA Front End instrument, how an implemented automatic service process increases the availability of spare parts and Line Replaceable Units. Furthermore, we show how valuable staff resources can be freed from continuous, repetitive maintenance activities to allow more focus on system diagnostic tasks, troubleshooting and the interchange of Line Replaceable Units. The required service activities are decoupled from the day-to-day work, eliminating dependencies on workload peaks or logistic constraints. The automatic refurbishing processes run in parallel with the operational tasks, with constant quality and without compromising the performance of the serviced system components. Consequently, this results in an efficiency increase and less downtime, and keeps the observing schedule on track. Automatic service processes, in combination with proactive maintenance concepts, provide the necessary flexibility for the complex operational work structures of large observatories. The gained planning flexibility allows an optimization of operational procedures and sequences while taking the required cost efficiency into account.

  9. An evaluation of HEMT potential for millimeter-wave signal sources using interpolation and harmonic balance techniques

    NASA Technical Reports Server (NTRS)

    Kwon, Youngwoo; Pavlidis, Dimitris; Tutt, Marcel N.

    1991-01-01

    A large-signal analysis method based on a harmonic balance technique and a 2-D cubic spline interpolation function has been developed and applied to the prediction of InP-based HEMT oscillator performance for frequencies extending up to the submillimeter-wave range. The large-signal analysis method uses a limited number of DC and small-signal S-parameter data and allows the accurate characterization of HEMT large-signal behavior. The method has been validated experimentally using load-pull measurements. Oscillation frequency, power performance, and load requirements are discussed, with an operation capability of 300 GHz predicted using state-of-the-art devices (fmax is approximately equal to 450 GHz).
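
    As an illustration of the interpolation step, the sketch below fits a 2-D cubic spline to a coarse grid of hypothetical DC drain-current data and evaluates it (and a derivative) at an arbitrary bias point; the device equation and all numbers are placeholders, not the measured InP HEMT data used in the paper.

      import numpy as np
      from scipy.interpolate import RectBivariateSpline

      # Hypothetical measured DC data on a coarse bias grid (illustrative numbers only).
      vgs = np.linspace(-1.0, 0.0, 6)          # gate-source voltage sweep (V)
      vds = np.linspace(0.0, 2.0, 9)           # drain-source voltage sweep (V)
      ids = np.maximum(0.0, vgs[:, None] + 0.8) * np.tanh(3.0 * vds[None, :]) * 0.05

      # 2-D cubic spline through the tabulated points; a harmonic-balance loop can then
      # evaluate Ids (and its derivatives) at arbitrary instantaneous bias points.
      spline = RectBivariateSpline(vgs, vds, ids, kx=3, ky=3)
      print(spline(-0.45, 1.3)[0, 0])          # interpolated drain current (A)
      print(spline(-0.45, 1.3, dx=1)[0, 0])    # transconductance dIds/dVgs at that point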

  10. Database Objects vs Files: Evaluation of alternative strategies for managing large remote sensing data

    NASA Astrophysics Data System (ADS)

    Baru, Chaitan; Nandigam, Viswanath; Krishnan, Sriram

    2010-05-01

    Increasingly, the geoscience user community expects modern IT capabilities to be available in service of their research and education activities, including the ability to easily access and process large remote sensing datasets via online portals such as GEON (www.geongrid.org) and OpenTopography (opentopography.org). However, serving such datasets via online data portals presents a number of challenges. In this talk, we will evaluate the pros and cons of alternative storage strategies for management and processing of such datasets using binary large object implementations (BLOBs) in database systems versus implementation in Hadoop files using the Hadoop Distributed File System (HDFS). The storage and I/O requirements for providing online access to large datasets dictate the need for declustering data across multiple disks, for capacity as well as bandwidth and response time performance. This requires partitioning larger files into a set of smaller files, and is accompanied by the concomitant requirement for managing large numbers of files. Storing these sub-files as blobs in a shared-nothing database implemented across a cluster provides the advantage that all the distributed storage management is done by the DBMS. Furthermore, subsetting and processing routines can be implemented as user-defined functions (UDFs) on these blobs and would run in parallel across the set of nodes in the cluster. On the other hand, there are both storage overheads and constraints, and software licensing dependencies created by such an implementation. Another approach is to store the files in an external filesystem with pointers to them from within database tables. The filesystem may be a regular UNIX filesystem, a parallel filesystem, or HDFS. In the HDFS case, HDFS would provide the file management capability, while the subsetting and processing routines would be implemented as Hadoop programs using the MapReduce model. Hadoop and its related software libraries are freely available. Another consideration is the strategy used for partitioning large data collections, and large datasets within collections, using round-robin vs hash partitioning vs range partitioning methods. Each has different characteristics in terms of spatial locality of data and the resultant degree of declustering of the computations on the data. Furthermore, we have observed that, in practice, there can be large variations in the frequency of access to different parts of a large data collection and/or dataset, thereby creating "hotspots" in the data. We will evaluate the ability of the different approaches to deal effectively with such hotspots, and alternative strategies for mitigating them.
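
    The three partitioning strategies mentioned above can be sketched as follows; the tile names, node count, and range boundaries are hypothetical, chosen only to show how each method assigns sub-files to nodes.

      from bisect import bisect_right

      NODES = 4
      tiles = [f"tile_{i:04d}" for i in range(16)]   # sub-files of a large raster/point-cloud dataset

      # Round-robin: even spread, but no spatial locality.
      round_robin = {t: i % NODES for i, t in enumerate(tiles)}

      # Hash: even spread on average, placement independent of insertion order.
      hashed = {t: hash(t) % NODES for t in tiles}

      # Range: contiguous key ranges per node; preserves locality but can create hotspots
      # if one range (e.g., a popular geographic area) is accessed far more than the others.
      boundaries = ["tile_0004", "tile_0008", "tile_0012"]
      ranged = {t: bisect_right(boundaries, t) for t in tiles}

      for name, placement in [("round-robin", round_robin), ("hash", hashed), ("range", ranged)]:
          counts = [list(placement.values()).count(n) for n in range(NODES)]
          print(name, counts)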

  11. Testing and validation of computerized decision support systems.

    PubMed

    Sailors, R M; East, T D; Wallace, C J; Carlson, D A; Franklin, M A; Heermann, L K; Kinder, A T; Bradshaw, R L; Randolph, A G; Morris, A H

    1996-01-01

    Systematic, thorough testing of decision support systems (DSSs) prior to release to general users is a critical aspect of high quality software design. Omission of this step may lead to the dangerous, and potentially fatal, condition of relying on a system with outputs of uncertain quality. Thorough testing requires a great deal of effort and is a difficult job because tools necessary to facilitate testing are not well developed. Testing is a job ill-suited to humans because it requires tireless attention to a large number of details. For these reasons, the majority of DSSs available are probably not well tested prior to release. We have successfully implemented a software design and testing plan which has helped us meet our goal of continuously improving the quality of our DSS software prior to release. While requiring large amounts of effort, we feel that documenting and standardizing our testing methods are important steps toward meeting recognized national and international quality standards. Our testing methodology includes both functional and structural testing and requires input from all levels of development. Our system does not focus solely on meeting design requirements but also addresses the robustness of the system and the completeness of testing.

  12. Use of Advanced Solar Cells for Commercial Communication Satellites

    NASA Technical Reports Server (NTRS)

    Bailey, Sheila G.; Landis, Geoffrey A.

    1995-01-01

    The current generation of communications satellites are located primarily in geosynchronous Earth orbit (GEO). Over the next decade, however, a new generation of communications satellites will be built and launched, designed to provide a world-wide interconnection of portable telephones. For this mission, the satellites must be positioned in lower polar and near-polar orbits. To provide complete coverage, large numbers of satellites will be required. Because the required number of satellites decreases as the orbital altitude is increased, fewer satellites would be required if the orbit chosen were raised from low to intermediate orbit. However, in intermediate orbits, satellites encounter significant radiation due to trapped electrons and protons. Radiation tolerant solar cells may be necessary to make such satellites feasible. We analyze the amount of radiation encountered in low and intermediate polar orbits at altitudes of interest to next-generation communication satellites, calculate the expected degradation for silicon, GaAs, and InP solar cells, and show that the lifetimes can be significantly increased by use of advanced solar cells.

  13. Use of advanced solar cells for commercial communication satellites

    NASA Astrophysics Data System (ADS)

    Landis, Geoffrey A.; Bailey, Sheila G.

    1995-01-01

    The current generation of communications satellites are located primarily in geosynchronous Earth orbit (GEO). Over the next decade, however, a new generation of communications satellites will be built and launched, designed to provide a world-wide interconnection of portable telephones. For this mission, the satellites must be positioned in lower polar and near-polar orbits. To provide complete coverage, large numbers of satellites will be required. Because the required number of satellites decreases as the orbital altitude is increased, fewer satellites would be required if the orbit chosen were raised from low to intermediate orbit. However, in intermediate orbits, satellites encounter significant radiation due to trapped electrons and protons. Radiation tolerant solar cells may be necessary to make such satellites feasible. We analyze the amount of radiation encountered in low and intermediate polar orbits at altitudes of interest to next-generation communication satellites, calculate the expected degradation for silicon, GaAs, and InP solar cells, and show that the lifetimes can be significantly increased by use of advanced solar cells.

  14. Use of advanced solar cells for commercial communication satellites

    NASA Astrophysics Data System (ADS)

    Bailey, Sheila G.; Landis, Geoffrey A.

    1995-03-01

    The current generation of communications satellites are located primarily in geosynchronous Earth orbit (GEO). Over the next decade, however, a new generation of communications satellites will be built and launched, designed to provide a world-wide interconnection of portable telephones. For this mission, the satellites must be positioned in lower polar and near-polar orbits. To provide complete coverage, large numbers of satellites will be required. Because the required number of satellites decreases as the orbital altitude is increased, fewer satellites would be required if the orbit chosen were raised from low to intermediate orbit. However, in intermediate orbits, satellites encounter significant radiation due to trapped electrons and protons. Radiation tolerant solar cells may be necessary to make such satellites feasible. We analyze the amount of radiation encountered in low and intermediate polar orbits at altitudes of interest to next-generation communication satellites, calculate the expected degradation for silicon, GaAs, and InP solar cells, and show that the lifetimes can be significantly increased by use of advanced solar cells.

  15. Force-Free Time-Harmonic Plasmoids

    DTIC Science & Technology

    1992-10-01

    …effect of currents or vortical motion are absolutely required for stability. What makes the present model attractive is the minimization of the body… radiative-mode effects may be very fruitful in the future. For example: rigid non-radiative composite "particles" containing large numbers of fusable…

  16. Reproducing Vulnerability: A Bourdieuian Analysis of Readers Who Struggle in Neoliberal Times

    ERIC Educational Resources Information Center

    Jaeger, Elizabeth L.

    2017-01-01

    The neoliberal agenda promotes education as a route toward success in university and career. However, a neoliberal economy requires large numbers of workers willing to accept low-paying, dead-end jobs. The students most likely to take these jobs are those who have struggled with literacy and so schools must, in Bourdieu's terms, re/produce,…

  17. 7 CFR 457.154 - Processing sweet corn crop insurance provisions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 6 of the Basic Provisions, you must provide a copy of all processor contracts to us on or before the... or cold temperatures that cause an unexpected number of acres over a large producing area to be ready... circumstance or, if an indemnity has been paid, require you to repay it to us with interest at any time acreage...

  18. 7 CFR 457.154 - Processing sweet corn crop insurance provisions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 6 of the Basic Provisions, you must provide a copy of all processor contracts to us on or before the... or cold temperatures that cause an unexpected number of acres over a large producing area to be ready... circumstance or, if an indemnity has been paid, require you to repay it to us with interest at any time acreage...

  19. 7 CFR 457.154 - Processing sweet corn crop insurance provisions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... copy of all processor contracts to us on or before the acreage reporting date. 7. Insured Crop (a) In... or cold temperatures that cause an unexpected number of acres over a large producing area to be ready... circumstance or, if an indemnity has been paid, require you to repay it to us with interest at any time acreage...

  20. Failures and Reform in Mathematics Education: The Case of Engineering. National Institute Briefing Note No. 5.

    ERIC Educational Resources Information Center

    Wolf, Alison

    The structure of education for 16- to 18-year-olds in Great Britain discourages them from making mathematics, science, and engineering serious options for future study. The emerging structure of the labor market, in which a large proportion of high-status jobs do not require higher mathematics, increases the numbers who decide not to commit…

  1. Deployable Debris Shields For Space Station

    NASA Technical Reports Server (NTRS)

    Christiansen, Eric L.; Cour-Palais, Burton G.; Crews, Jeanne

    1993-01-01

    Multilayer shields made of lightweight sheet materials deployed from proposed Space Station Freedom for additional protection against orbiting debris. Deployment mechanism attached at each location on exterior where extra protection needed. Equipment withdraws layer of material from storage in manner similar to unfurling sail or extending window shade. Number of layers deployed depends on required degree of protection, and could be as large as five.

  2. Using Systematic Item Selection Methods to Improve Universal Design of Assessments. Policy Directions. Number 18

    ERIC Educational Resources Information Center

    Johnstone, Christopher; Thurlow, Martha; Moore, Michael; Altman, Jason

    2006-01-01

    The No Child Left Behind Act of 2001 (NCLB) and other recent changes in federal legislation have placed greater emphasis on accountability in large-scale testing. Included in this emphasis are regulations that require assessments to be accessible. States are accountable for the success of all students, and tests should be designed in a way that…

  3. The Education and Care Divide: The Role of the Early Childhood Workforce in 15 European Countries

    ERIC Educational Resources Information Center

    Van Laere, Katrien; Peeters, Jan; Vandenbroeck, Michel

    2012-01-01

    International reports on early childhood education and care tend to attach increasing importance to workforce profiles. Yet a study of 15 European countries reveals that large numbers of (assistant) staff remain invisible in most international reports. As part of the CoRe project (Competence Requirements in Early Childhood Education and Care) we…

  4. Post-classification approaches to estimating change in forest area using remotely sense auxiliary data.

    Treesearch

    Ronald E. McRoberts

    2014-01-01

    Multiple remote sensing-based approaches to estimating gross afforestation, gross deforestation, and net deforestation are possible. However, many of these approaches have severe data requirements in the form of long time series of remotely sensed data and/or large numbers of observations of land cover change to train classifiers and assess the accuracy of...

  5. Do the Timeliness, Regularity, and Intensity of Online Work Habits Predict Academic Performance?

    ERIC Educational Resources Information Center

    Dvorak, Tomas; Jia, Miaoqing

    2016-01-01

    This study analyzes the relationship between students' online work habits and academic performance. We utilize data from logs recorded by a course management system (CMS) in two courses at a small liberal arts college in the U.S. Both courses required the completion of a large number of online assignments. We measure three aspects of students'…

  6. Factors Influencing College Attendance of Appalachian Kentucky Students Participating in a Federal Educational Talent Search Program

    ERIC Educational Resources Information Center

    Bowling, William D.

    2013-01-01

    Post-secondary education is quickly becoming a requirement for many growing careers. Because of this, an increased focused on post-secondary enrollment and attainment has been seen in the education community, particularly in the K-12 systems. To that end a large number of programs and organizations have begun to provide assistance to these…

  7. The DTIC Review: Volume 2, Number 4, Surviving Chemical and Biological Warfare

    DTIC Science & Technology

    1996-12-01

    CHROMATOGRAPHIC ANALYSIS, NUCLEAR MAGNETIC RESONANCE, INFRARED SPECTROSCOPY, ARMY RESEARCH, DEGRADATION, VERIFICATION, MASS SPECTROSCOPY, LIQUID… mycotoxins. Such materials are not attractive as weapons of mass destruction, however, as large amounts are required to produce lethal effects. In… VERIFICATION, ATOMIC ABSORPTION SPECTROSCOPY, ATOMIC ABSORPTION.

  8. Error Pattern Analysis Applied to Technical Writing: An Editor's Guide for Writers.

    ERIC Educational Resources Information Center

    Monagle, E. Brette

    The use of error pattern analysis can reduce the time and money spent on editing and correcting manuscripts. What is required is noting, classifying, and keeping a frequency count of errors. First an editor should take a typical page of writing and circle each error. After the editor has done a sufficiently large number of pages to identify an…

  9. Inundative release of Aphthona spp. flea beetles (Coleoptera: Chrysomlidae) as a biological "herbicide" on leafy spurge in riparian areas

    Treesearch

    R.A. Progar; G. Markin; J. Milan; T. Barbouletos; M.J. Rinella

    2010-01-01

    Inundative releases of beneficial insects are frequently used to suppress pest insects but not commonly attempted as a method of weed biological control because of the difficulty in obtaining the required large numbers of insects. The successful establishment of a flea beetle complex, mixed Aphthona lacertosa (Rosenhauer) and Aphthona...

  10. Identifying Stem-like Cells Using Mitochondrial Membrane Potential | Center for Cancer Research

    Cancer.gov

    Therapies that are based on living cells promise to improve treatments for metastatic cancer and for many degenerative diseases. Lasting treatment of these maladies may require the durable persistence of cells. Long-term engraftment of cells – for months or years – and the generation of large numbers of progeny are characteristics of stem cells. Most approaches to isolate

  11. 36 CFR 1254.92 - How do I submit a request to microfilm records and donated historical materials?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., records preparation, and other NARA requirements in a shorter time frame. (1) You may include in your request only one project to microfilm a complete body of documents, such as an entire series, a major continuous segment of a very large series which is reasonably divisible, or a limited number of separate...

  12. 36 CFR 1254.92 - How do I submit a request to microfilm records and donated historical materials?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., records preparation, and other NARA requirements in a shorter time frame. (1) You may include in your request only one project to microfilm a complete body of documents, such as an entire series, a major continuous segment of a very large series which is reasonably divisible, or a limited number of separate...

  13. Stereoselective aminoacylation of RNA

    NASA Technical Reports Server (NTRS)

    Usher, D. A.; Needels, M. C.; Brenner, T.

    1986-01-01

    Prebiotic chemistry is faced with a major problem: how could a controlled and selective reaction occur, when there is present in the same solution a large number of alternative possible coreactants? This problem is solved in the modern cell by the presence of enzymes, which are not only highly efficient and controllable catalysts, but which also can impose on their substrates a precise structural requirement. However, enzymes are the result of billions of years of evolution, and we cannot invoke them as prebiotic catalysts. One approach to solving this problem in the prebiotic context is to make use of template-directed reactions. These reactions increase the number of structural requirements that must be simultaneously present in a molecule for it to be able to react, and thereby increase the selectivity of the reaction. They also can give a large increase in the rate of a reaction, if the template constrains two potential coreactants to lie close together. A third benefit is that information that is present in the template molecule can be passed on to the product molecules. If the earliest organisms were based on proteins and nucleic acids, then the investigation of peptide synthesis on an oligonucleotide template is highly relevant to the study of the origin of life.

  14. An adaptive Gaussian process-based iterative ensemble smoother for data assimilation

    NASA Astrophysics Data System (ADS)

    Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao

    2018-05-01

    Accurate characterization of subsurface hydraulic conductivity is vital for modeling of subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. Then the sensitivity information between model parameters and measurements is calculated from a large number of realizations generated by the GP surrogate with virtually no computational cost. Since the original model evaluations are only required for base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated by the saturated and unsaturated flow problems, respectively. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude of speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.
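
    A minimal sketch of the surrogate idea follows, using scikit-learn's GaussianProcessRegressor on a toy one-parameter forward model; the real GPIES base-point selection rule and ensemble-smoother update are not reproduced here, and all numbers are placeholders.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(42)

      def forward_model(k):
          """Stand-in for an expensive flow simulation: parameter -> predicted measurement."""
          return np.sin(3.0 * k) + 0.5 * k

      # Run the expensive model only at a few "base points" drawn from the current ensemble.
      base_points = rng.uniform(-2, 2, size=8).reshape(-1, 1)
      base_outputs = forward_model(base_points).ravel()

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
      gp.fit(base_points, base_outputs)

      # The cheap surrogate then supplies predictions for a large ensemble of realizations,
      # from which parameter-measurement sensitivities can be estimated at negligible cost.
      ensemble = rng.uniform(-2, 2, size=5000).reshape(-1, 1)
      predicted = gp.predict(ensemble)
      cov = np.cov(ensemble.ravel(), predicted)[0, 1]
      print(f"surrogate-based parameter/measurement covariance: {cov:.3f}")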

  15. Simulation of interaction between ground water in an alluvial aquifer and surface water in a large braided river

    USGS Publications Warehouse

    Leake, S.A.; Lilly, M.R.

    1995-01-01

    The Fairbanks, Alaska, area has many contaminated sites in a shallow alluvial aquifer. A ground-water flow model is being developed using the MODFLOW finite-difference ground-water flow model program with the River Package. The modeled area is discretized in the horizontal dimensions into 118 rows and 158 columns of approximately 150-meter square cells. The fine grid spacing has the advantage of providing needed detail at the contaminated sites and surface-water features that bound the aquifer. However, the fine spacing of cells adds difficulty to simulating interaction between the aquifer and the large, braided Tanana River. In particular, the assignment of a river head is difficult if cells are much smaller than the river width. This was solved by developing a procedure for interpolating and extrapolating river head using a river distance function. Another problem is that future transient simulations would require excessive numbers of input records using the current version of the River Package. The proposed solution to this problem is to modify the River Package to linearly interpolate river head for time steps within each stress period, thereby reducing the number of stress periods required.
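
    The head-assignment step can be pictured as a one-dimensional interpolation along a river-distance coordinate; the station distances and stage values below are hypothetical and are not taken from the Tanana River model.

      import numpy as np

      # Hypothetical stage measurements at a few stations, indexed by distance along the river (m).
      station_distance = np.array([0.0, 5_000.0, 12_000.0, 20_000.0])
      station_head = np.array([134.2, 132.9, 131.1, 129.5])   # river stage (m above datum)

      # Every 150-m river cell carries a distance-along-river value; its head is obtained by
      # linear interpolation between stations (values beyond the last station are clamped here,
      # whereas the study extrapolates them).
      cell_distance = np.linspace(0.0, 20_000.0, 12)
      cell_head = np.interp(cell_distance, station_distance, station_head)
      print(np.round(cell_head, 2))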

  16. Toward a 3D video format for auto-stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Vetro, Anthony; Yea, Sehoon; Smolic, Aljoscha

    2008-08-01

    There has been increased momentum recently in the production of 3D content for cinema applications; for the most part, this has been limited to stereo content. There are also a variety of display technologies on the market that support 3DTV, each offering a different viewing experience and having different input requirements. More specifically, stereoscopic displays support stereo content and require glasses, while auto-stereoscopic displays avoid the need for glasses by rendering view-dependent stereo pairs for a multitude of viewing angles. To realize high quality auto-stereoscopic displays, multiple views of the video must either be provided as input to the display, or these views must be created locally at the display. The former approach has difficulties in that the production environment is typically limited to stereo, and transmission bandwidth for a large number of views is not likely to be available. This paper discusses an emerging 3D data format that enables the latter approach to be realized. A new framework for efficiently representing a 3D scene and enabling the reconstruction of an arbitrarily large number of views prior to rendering is introduced. Several design challenges are also highlighted through experimental results.

  17. Experimental measurement of structural power flow on an aircraft fuselage

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.

    1991-01-01

    An experimental technique is used to measure structural intensity through an aircraft fuselage with an excitation load applied near one of the wing attachment locations. The fuselage is a relatively large structure, requiring a large number of measurement locations to analyze the whole of the structure. For the measurement of structural intensity, multiple point measurements are necessary at every location of interest. A tradeoff is therefore required between the number of measurement transducers, the mounting of these transducers, and the accuracy of the measurements. Using four transducers mounted on a bakelite platform, structural intensity vectors are measured at locations distributed throughout the fuselage. To minimize the errors associated with using the four transducer technique, the measurement locations are selected to be away from bulkheads and stiffeners. Furthermore, to eliminate phase errors between the four transducer measurements, two sets of data are collected for each position, with the orientation of the platform with the four transducers rotated by 180 degrees and an average taken between the two sets of data. The results of these measurements together with a discussion of the suitability of the approach for measuring structural intensity on a real structure are presented.

  18. The application of the large particles method of numerical modeling of the process of carbonic nanostructures synthesis in plasma

    NASA Astrophysics Data System (ADS)

    Abramov, G. V.; Gavrilov, A. N.

    2018-03-01

    The article deals with the numerical solution of a mathematical model of particle motion and interaction in multicomponent plasma, using the electric-arc synthesis of carbon nanostructures as an example. The large number of particles and of their interactions requires significant machine resources and computing time. Applying the large-particles method makes it possible to reduce the amount of computation and the hardware requirements without affecting the accuracy of the numerical calculations. GPGPU parallel computing with the Nvidia CUDA technology allows the general-purpose computations to be carried out on the graphics processor. A comparative analysis of different approaches to parallelizing the computations was carried out to speed up the calculations; the chosen algorithm uses shared memory while preserving the accuracy of the solution. A numerical study of the influence of the number of physical particles per macro particle on the motion parameters and on the total number of particle collisions in the plasma has been carried out for different synthesis modes. A rational range for the coherence coefficient of particles in a macro particle is computed.
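
    The core idea of the large-particle (macro-particle) method is to advance one computational particle that stands in for many physical particles with the same charge-to-mass ratio, so the force-to-inertia ratio is preserved. A schematic sketch follows; the field, time step, and coherence coefficient are placeholders, not values from the article.

      import numpy as np

      N_PHYSICAL = 1_000_000          # physical particles in the discharge volume (placeholder)
      COHERENCE = 10_000              # physical particles represented by one macro particle
      n_macro = N_PHYSICAL // COHERENCE

      rng = np.random.default_rng(1)
      pos = rng.uniform(0.0, 1e-3, size=(n_macro, 3))     # m
      vel = rng.normal(0.0, 500.0, size=(n_macro, 3))     # m/s
      charge = 1.602e-19 * COHERENCE                      # macro particle carries the summed charge
      mass = 1.99e-26 * COHERENCE                         # ...and the summed mass (C-12 atoms here)

      E_FIELD = np.array([0.0, 0.0, 2.0e4])               # V/m, placeholder uniform field
      DT = 1e-9                                           # s

      for _ in range(100):                                # simple explicit push, collisions omitted
          vel += (charge / mass) * E_FIELD * DT
          pos += vel * DT

      print(f"{n_macro} macro particles stand in for {N_PHYSICAL} physical particles")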

  19. A Node Linkage Approach for Sequential Pattern Mining

    PubMed Central

    Navarro, Osvaldo; Cumplido, René; Villaseñor-Pineda, Luis; Feregrino-Uribe, Claudia; Carrasco-Ochoa, Jesús Ariel

    2014-01-01

    Sequential Pattern Mining is a widely addressed problem in data mining, with applications such as analyzing Web usage, examining purchase behavior, and text mining, among others. Nevertheless, with the dramatic increase in data volume, the current approaches prove inefficient when dealing with large input datasets, a large number of different symbols and low minimum supports. In this paper, we propose a new sequential pattern mining algorithm, which follows a pattern-growth scheme to discover sequential patterns. Unlike most pattern growth algorithms, our approach does not build a data structure to represent the input dataset, but instead accesses the required sequences through pseudo-projection databases, achieving better runtime and reducing memory requirements. Our algorithm traverses the search space in a depth-first fashion and only preserves in memory a pattern node linkage and the pseudo-projections required for the branch being explored at the time. Experimental results show that our new approach, the Node Linkage Depth-First Traversal algorithm (NLDFT), has better performance and scalability in comparison with state of the art algorithms. PMID:24933123
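
    The pseudo-projection idea can be sketched with a small pattern-growth miner in the PrefixSpan style: each projected database is just a list of (sequence, position) pointers into the original data, so nothing is copied. This is only an illustration of the general scheme; it does not implement the node linkage structure of NLDFT.

      from collections import defaultdict

      def mine(sequences, min_support):
          """Pattern-growth mining with pseudo-projections: each projected database is a list of
          (sequence_index, start_position) pointers into the original data, never a copy."""
          results = []

          def grow(prefix, projection):
              # Count the sequences in which each symbol occurs after the current prefix.
              support = defaultdict(set)
              for seq_id, start in projection:
                  for pos in range(start, len(sequences[seq_id])):
                      support[sequences[seq_id][pos]].add(seq_id)
              for symbol, seq_ids in support.items():
                  if len(seq_ids) < min_support:
                      continue
                  pattern = prefix + [symbol]
                  results.append((pattern, len(seq_ids)))
                  # Build the pseudo-projection for the extended pattern (depth-first).
                  new_projection = []
                  for seq_id, start in projection:
                      seq = sequences[seq_id]
                      for pos in range(start, len(seq)):
                          if seq[pos] == symbol:
                              new_projection.append((seq_id, pos + 1))
                              break
                  grow(pattern, new_projection)

          grow([], [(i, 0) for i in range(len(sequences))])
          return results

      data = [list("abcb"), list("acbc"), list("abc"), list("bcb")]
      for pattern, support in sorted(mine(data, min_support=3), key=lambda r: (-r[1], r[0])):
          print("".join(pattern), support)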

  20. Rucio, the next-generation Data Management system in ATLAS

    NASA Astrophysics Data System (ADS)

    Serfon, C.; Barisits, M.; Beermann, T.; Garonne, V.; Goossens, L.; Lassnig, M.; Nairz, A.; Vigne, R.; ATLAS Collaboration

    2016-04-01

    Rucio is the next-generation Distributed Data Management (DDM) system, benefiting from recent advances in cloud and "Big Data" computing to address HEP experiments' scaling requirements. Rucio is an evolution of the ATLAS DDM system Don Quixote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 160 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio addresses these issues by relying on new technologies to ensure system scalability, cover new user requirements and employ a new automation framework to reduce operational overheads. This paper shows the key concepts of Rucio, details the Rucio design and the technology it employs, the tests that were conducted to validate it, and finally describes the migration steps that were conducted to move from DQ2 to Rucio.

  1. Do rational numbers play a role in selection for stochasticity?

    PubMed

    Sinclair, Robert

    2014-01-01

    When a given tissue must, to be able to perform its various functions, consist of different cell types, each fairly evenly distributed and with specific probabilities, then there are at least two quite different developmental mechanisms which might achieve the desired result. Let us begin with the case of two cell types, and first imagine that the proportion of numbers of cells of these types should be 1:3. Clearly, a regular structure composed of repeating units of four cells, three of which are of the dominant type, will easily satisfy the requirements, and a deterministic mechanism may lend itself to the task. What if, however, the proportion should be 10:33? The same simple, deterministic approach would now require a structure of repeating units of 43 cells, and this certainly seems to require a far more complex and potentially prohibitive deterministic developmental program. Stochastic development, replacing regular units with random distributions of given densities, might not be evolutionarily competitive in comparison with the deterministic program when the proportions should be 1:3, but it has the property that, whatever developmental mechanism underlies it, its complexity does not need to depend very much upon target cell densities at all. We are immediately led to speculate that proportions which correspond to fractions with large denominators (such as the 33 of 10/33) may be more easily achieved by stochastic developmental programs than by deterministic ones, and this is the core of our thesis: that stochastic development may tend to occur more often in cases involving rational numbers with large denominators. To be imprecise: that simple rationality and determinism belong together, as do irrationality and randomness.
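
    The arithmetic behind the examples above is simply that a target proportion p:q, once reduced to lowest terms, needs a deterministic repeating unit of p + q cells; a small sketch:

      from fractions import Fraction

      def deterministic_unit_size(p, q):
          """Smallest repeating unit that realizes a p:q mix of two cell types exactly."""
          f = Fraction(p, q)   # reduces the ratio to lowest terms
          return f.numerator + f.denominator

      print(deterministic_unit_size(1, 3))    # 4-cell unit, as in the 1:3 example
      print(deterministic_unit_size(10, 33))  # 43-cell unit, as in the 10:33 example
      print(deterministic_unit_size(2, 6))    # also 4: equivalent to 1:3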

  2. Modeling the impact of changing patient transportation systems on peri-operative process performance in a large hospital: insights from a computer simulation study.

    PubMed

    Segev, Danny; Levi, Retsef; Dunn, Peter F; Sandberg, Warren S

    2012-06-01

    Transportation of patients is a key hospital operational activity. During a large construction project, our patient admission and prep area will relocate from immediately adjacent to the operating room suite to another floor of a different building. Transportation will require extra distance and elevator trips to deliver patients and recycle transporters (specifically: personnel who transport patients). Management intuition suggested that starting all 52 first cases simultaneously would require many of the 18 available elevators. To test this, we developed a data-driven simulation tool to allow decision makers to simultaneously address planning and evaluation questions about patient transportation. We coded a stochastic simulation tool for a generalized model treating all factors contributing to the process as JAVA objects. The model includes elevator steps, explicitly accounting for transporter speed and distance to be covered. We used the model for sensitivity analyses of the number of dedicated elevators, dedicated transporters, transporter speed and the planned process start time on lateness of OR starts and the number of cases with serious delays (i.e., more than 15 min). Allocating two of the 18 elevators and 7 transporters reduced lateness and the number of cases with serious delays. Additional elevators and/or transporters yielded little additional benefit. If the admission process produced ready-for-transport patients 20 min earlier, almost all delays would be eliminated. Modeling results contradicted clinical managers' intuition that starting all first cases on time requires many dedicated elevators. This is explained by the principle of decreasing marginal returns for increasing capacity when there are other limiting constraints in the system.
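
    A toy discrete-event sketch of the transporter/elevator contention is shown below, written with SimPy; the authors' tool was a far more detailed Java model, so the capacities, travel times, and delay threshold here are placeholders.

      import simpy

      TRANSPORTERS, ELEVATORS, PATIENTS = 7, 2, 52
      WALK_MIN, ELEVATOR_MIN = 6.0, 2.0            # one-way times in minutes (placeholders)

      env = simpy.Environment()
      transporters = simpy.Resource(env, capacity=TRANSPORTERS)
      elevators = simpy.Resource(env, capacity=ELEVATORS)
      delays = []

      def transport(env):
          requested = env.now
          with transporters.request() as t:        # wait for a free transporter
              yield t
              yield env.timeout(WALK_MIN)          # collect patient and walk to the elevator bank
              with elevators.request() as e:       # wait for one of the dedicated elevators
                  yield e
                  yield env.timeout(ELEVATOR_MIN)
              yield env.timeout(WALK_MIN)          # deliver patient to the OR suite
              delays.append(env.now - requested)
              yield env.timeout(WALK_MIN + ELEVATOR_MIN)   # transporter recycles back to base

      for _ in range(PATIENTS):                    # all first cases requested at time zero
          env.process(transport(env))
      env.run()

      print(f"max delivery time: {max(delays):.1f} min")
      print(f"deliveries taking more than 15 min: {sum(d > 15 for d in delays)} of {PATIENTS}")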

  3. Challenges in Developing Models Describing Complex Soil Systems

    NASA Astrophysics Data System (ADS)

    Simunek, J.; Jacques, D.

    2014-12-01

    Quantitative mechanistic models that consider basic physical, mechanical, chemical, and biological processes have the potential to be powerful tools to integrate our understanding of complex soil systems, and the soil science community has often called for models that would include a large number of these diverse processes. However, once attempts have been made to develop such models, the response from the community has not always been overwhelming, especially after it was discovered that these models are consequently highly complex, requiring not only a large number of parameters, not all of which can be easily (or at all) measured and/or identified and which are often associated with large uncertainties, but also requiring from their users deep knowledge of most or all of the implemented physical, mechanical, chemical and biological processes. The real, or perceived, complexity of these models then discourages users from using them even for relatively simple applications, for which they would be perfectly adequate. Due to the nonlinear nature and chemical/biological complexity of soil systems, it is also virtually impossible to verify these types of models analytically, raising doubts about their applicability. Code inter-comparison, which is then likely the most suitable method to assess code capabilities and model performance, requires the existence of multiple models with similar or overlapping capabilities, which may not always exist. It is thus a challenge not only to develop models describing complex soil systems, but also to persuade the soil science community to use them. As a result, complex quantitative mechanistic models remain an underutilized tool in soil science research. We will demonstrate some of the challenges discussed above using our own efforts in developing quantitative mechanistic models (such as HP1/2) for complex soil systems.

  4. Hyponatremia and fractures: should hyponatremia be further studied as a potential biochemical risk factor to be included in FRAX algorithms?

    PubMed

    Ayus, J C; Bellido, T; Negri, A L

    2017-05-01

    The Fracture Risk Assessment Tool (FRAX®) was developed by the WHO Collaborating Centre for Metabolic Bone Diseases to evaluate the fracture risk of patients. It is based on patient models that integrate the risk associated with clinical variables and bone mineral density (BMD) at the femoral neck. The clinical risk factors included in FRAX were chosen to include only well-established and independent variables related to skeletal fracture risk. The FRAX tool has acquired worldwide acceptance despite having several limitations. FRAX models have not included biochemical derangements in the estimation of fracture risk due to the lack of validation in large prospective studies. Recently, there has been an increasing number of studies showing a relationship between hyponatremia and the occurrence of fractures. Hyponatremia is the most frequent electrolyte abnormality measured in the clinic, and serum sodium concentration is a very reproducible, affordable, and readily obtainable measurement. Thus, we think that hyponatremia should be further studied as a biochemical risk factor for predicting skeletal fractures, particularly those at the hip, which carry the greatest morbidity and mortality. Achieving this will require the collection of large numbers of patient cohorts from diverse geographical locations that include a measure of serum sodium in addition to the other FRAX variables, in both sexes, over a wide age range and with wide geographical representation. It would also require the inclusion of data on the duration and severity of hyponatremia. Information will be required both on the risk of fracture associated with the occurrence of, and length of exposure to, hyponatremia and on its relationship with the other risk variables included in FRAX, as well as on the independent effect on the occurrence of death, which is increased by hyponatremia.

  5. E-ELT requirements management

    NASA Astrophysics Data System (ADS)

    Schneller, D.

    2014-08-01

    The E-ELT has completed its design phase and is now entering construction. ESO is acting as prime contractor and usually procures subsystems, including their design, from industry. This, in turn, leads to a large number of requirements, whose validity, consistency and conformity with user needs require extensive management. Therefore E-ELT Systems Engineering has chosen to follow a systematic approach, based on a reasoned requirement architecture that follows the product breakdown structure of the observatory. The challenge ahead is the controlled flow-down of science user needs into engineering requirements, requirement specifications and system design documents. This paper shows how the E-ELT project manages this. The project has adopted IBM DOORS™ as a supporting requirements management tool. This paper deals with emerging problems and outlines potential solutions. It shows the trade-offs made to reach a proper balance between the effort put into this activity and potential overheads, and the benefit for the project.

  6. Exploration Planetary Surface Structural Systems: Design Requirements and Compliance

    NASA Technical Reports Server (NTRS)

    Dorsey, John T.

    2011-01-01

    The Lunar Surface Systems Project developed system concepts that would be necessary to establish and maintain a permanent human presence on the Lunar surface. A variety of specific system implementations were generated as a part of the scenarios, some level of system definition was completed, and masses estimated for each system. Because the architecture studies generally spawned a large number of system concepts and the studies were executed in a short amount of time, the resulting system definitions had very low design fidelity. This paper describes the development sequence required to field a particular structural system: 1) Define Requirements, 2) Develop the Design and 3) Demonstrate Compliance of the Design to all Requirements. This paper also outlines and describes in detail the information and data that are required to establish structural design requirements and outlines the information that would comprise a planetary surface system Structures Requirements document.

  7. Extremes in Otolaryngology Resident Surgical Case Numbers: An Update.

    PubMed

    Baugh, Tiffany P; Franzese, Christine B

    2017-06-01

    Objectives The purpose of this study is to examine the effect of minimum case numbers on otolaryngology resident case log data and to understand differences in minimum, mean, and maximum numbers among certain procedures, as a follow-up to a prior study. Study Design Cross-sectional survey using a national database. Setting Academic otolaryngology residency programs. Subjects and Methods Review of otolaryngology resident national data reports from the Accreditation Council for Graduate Medical Education (ACGME) resident case log system performed from 2004 to 2015. Minimum, mean, standard deviation, and maximum values for the total number of supervisor and resident surgeon cases and for specific surgical procedures were compared. Results The mean total number of resident surgeon cases for residents graduating from 2011 to 2015 ranged from 1833.3 ± 484 in 2011 to 2072.3 ± 548 in 2014. The minimum total number of cases ranged from 826 in 2014 to 1004 in 2015. The maximum total number of cases increased from 3545 in 2011 to 4580 in 2015. Multiple key indicator procedures had less than the required minimum reported in 2015. Conclusion Despite the ACGME instituting required minimums for key indicator procedures, residents have graduated without meeting these minimums. Furthermore, there continue to be large variations in the minimum, mean, and maximum numbers for many procedures. Variation among resident case numbers is likely multifactorial. Ensuring proper instruction on coding and case role, as well as emphasizing frequent logging by residents, will ensure programs have the most accurate data to evaluate their case volume.

  8. AIR Instrument Array

    NASA Technical Reports Server (NTRS)

    Jones, I. W.; Wilson, J. W.; Maiden, D. L.; Goldhagen, P.; Shinn, J. L.

    2003-01-01

    The large number of radiation types composing the atmospheric radiation requires a complicated combination of instrument types to fully characterize the environment. A completely satisfactory combination has not as yet been flown and would require a large capital outlay to develop. Because the funds of the current project were limited to essential integration costs, an international collaboration was formed with partners from six countries and fourteen different institutions, each with their own financial support for their participation. Instruments were chosen to cover sensitivity to all radiation types with enough differential sensitivity to separate individual components. Some instruments were chosen as important to specify the physical field components, and other instruments were chosen on the basis that they could be useful in dosimetric evaluation. In the present paper we will discuss the final experimental flight package for the ER-2 flight campaign.

  9. Traffic shaping and scheduling for OBS-based IP/WDM backbones

    NASA Astrophysics Data System (ADS)

    Elhaddad, Mahmoud S.; Melhem, Rami G.; Znati, Taieb; Basak, Debashis

    2003-10-01

    We introduce Proactive Reservation-based Switching (PRS) -- a switching architecture for IP/WDM networks based on Labeled Optical Burst Switching. PRS achieves packet delay and loss performance comparable to that of packet-switched networks, without requiring large buffering capacity, or burst scheduling across a large number of wavelengths at the core routers. PRS combines proactive channel reservation with periodic shaping of ingress-egress traffic aggregates to hide the offset latency and approximate the utilization/buffering characteristics of discrete-time queues with periodic arrival streams. A channel scheduling algorithm imposes constraints on burst departure times to ensure efficient utilization of wavelength channels and to maintain the distance between consecutive bursts through the network. Results obtained from simulation using TCP traffic over carefully designed topologies indicate that PRS consistently achieves channel utilization above 90% with modest buffering requirements.

  10. NMR methods for metabolomics of mammalian cell culture bioreactors.

    PubMed

    Aranibar, Nelly; Reily, Michael D

    2014-01-01

    Metabolomics has become an important tool for measuring pools of small molecules in mammalian cell cultures expressing therapeutic proteins. NMR spectroscopy has played an important role, largely because it requires minimal sample preparation, does not require chromatographic separation, and is quantitative. The concentrations of large numbers of small molecules in the extracellular media or within the cells themselves can be measured directly on the culture supernatant and on the supernatant of the lysed cells, respectively, and correlated with endpoints such as titer, cell viability, or glycosylation patterns. The observed changes can be used to generate hypotheses by which these parameters can be optimized. This chapter focuses on the sample preparation, data acquisition, and analysis to get the most out of NMR metabolomics data from CHO cell cultures but could easily be extended to other in vitro culture systems.

  11. A Primer on Infectious Disease Bacterial Genomics

    PubMed Central

    Petkau, Aaron; Knox, Natalie; Graham, Morag; Van Domselaar, Gary

    2016-01-01

    SUMMARY The number of large-scale genomics projects is increasing due to the availability of affordable high-throughput sequencing (HTS) technologies. The use of HTS for bacterial infectious disease research is attractive because one whole-genome sequencing (WGS) run can replace multiple assays for bacterial typing, molecular epidemiology investigations, and more in-depth pathogenomic studies. The computational resources and bioinformatics expertise required to accommodate and analyze the large amounts of data pose new challenges for researchers embarking on genomics projects for the first time. Here, we present a comprehensive overview of a bacterial genomics project from beginning to end, with a particular focus on the planning and computational requirements for HTS data, and provide a general understanding of the analytical concepts to develop a workflow that will meet the objectives and goals of HTS projects. PMID:28590251

  12. Effect of steady and time-harmonic magnetic fields on macrosegregation in alloy solidification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Incropera, F.P.; Prescott, P.J.

    Buoyancy-induced convection during the solidification of alloys can contribute significantly to the redistribution of alloy constituents, thereby creating large composition gradients in the final ingot. Termed macrosegregation, the condition diminishes the quality of the casting and, in the extreme, may require that the casting be remelted. The deleterious effects of buoyancy-driven flows may be suppressed through application of an external magnetic field, and in this study the effects of both steady and time-harmonic fields have been considered. For a steady magnetic field, extremely large field strengths would be required to effectively dampen convection patterns that contribute to macrosegregation. However, by reducing spatial variations in temperature and composition, turbulent mixing induced by a time-harmonic field reduces the number and severity of segregates in the final casting.

  13. Linear Approximation SAR Azimuth Processing Study

    NASA Technical Reports Server (NTRS)

    Lindquist, R. B.; Masnaghetti, R. K.; Belland, E.; Hance, H. V.; Weis, W. G.

    1979-01-01

    A segmented linear approximation of the quadratic phase function that is used to focus the synthetic antenna of a SAR was studied. Ideal focusing, using a quadratically varying phase focusing function during the time radar target histories are gathered, requires a large number of complex multiplications. These can be largely eliminated by using linear approximation techniques. The result is a reduced processor size and chip count relative to ideally focused processing and a correspondingly increased feasibility for spaceworthy implementation. A preliminary design and sizing for a spaceworthy linear approximation SAR azimuth processor meeting requirements similar to those of the SEASAT-A SAR was developed. The study resulted in a design with approximately 1500 ICs, 1.2 cubic feet of volume, and 350 watts of power for a single-look, 4000-range-cell azimuth processor with 25-meter resolution.
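
    The replacement of the quadratic focusing phase by straight-line segments can be illustrated numerically as follows. This is only a sketch of the general idea, not the processor design described above; the Doppler rate, aperture length, and segment count are hypothetical values chosen for illustration.

```python
import numpy as np

def quadratic_phase(t, k):
    """Ideal SAR azimuth focusing phase: phi(t) = pi * k * t**2 (k = Doppler rate)."""
    return np.pi * k * t**2

def segmented_linear_phase(t, k, n_segments):
    """Approximate the quadratic phase by straight-line segments joining its values
    at the segment boundaries; within a segment the phase advances at a constant
    rate, which is what removes most of the complex multiplications."""
    edges = np.linspace(t.min(), t.max(), n_segments + 1)
    phi_edges = quadratic_phase(edges, k)
    return np.interp(t, edges, phi_edges)

# Hypothetical aperture: 1024 azimuth samples, Doppler rate k = 500 Hz/s.
t = np.linspace(-0.5, 0.5, 1024)
ideal = quadratic_phase(t, 500.0)
approx = segmented_linear_phase(t, 500.0, n_segments=8)
print("max phase error (rad):", np.max(np.abs(ideal - approx)))
```

    Increasing the segment count trades a small amount of added bookkeeping for a smaller peak phase error, which is the basic sizing knob the study refers to.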

  14. Matlab Based LOCO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Portmann, Greg; /LBL, Berkeley; Safranek, James

    The LOCO algorithm has been used by many accelerators around the world. Although the uses for LOCO vary, the most common use has been to find calibration errors and correct the optics functions. The light source community in particular has made extensive use of the LOCO algorithms to tightly control the beta function and coupling. Maintaining high-quality beam parameters requires constant attention, so a relatively large effort was put into software development for the LOCO application. The LOCO code was originally written in FORTRAN. This code worked fine but was somewhat awkward to use. For instance, the FORTRAN code itself did not calculate the model response matrix; it required a separate modeling code such as MAD to calculate the model matrix, and the data then had to be loaded manually into the LOCO code. As the number of people interested in LOCO grew, it became necessary to make it easier to use. The decision to port LOCO to Matlab was relatively easy: it is best to use a matrix programming language with good graphics capability; Matlab was also being used for high-level machine control; and the accelerator modeling code AT [5] was already developed for Matlab. Since LOCO requires collecting and processing a relatively large amount of data, it is very helpful to have the LOCO code compatible with the high-level machine control [3]. A number of new features were added while porting the code from FORTRAN, and new methods continue to evolve [7][9]. Although Matlab LOCO was written with AT as the underlying tracking code, a mechanism to connect to other modeling codes has been provided.

  15. Using a framework to implement large-scale innovation in medical education with the intent of achieving sustainability.

    PubMed

    Hudson, Judith N; Farmer, Elizabeth A; Weston, Kathryn M; Bushnell, John A

    2015-01-16

    Particularly when undertaken on a large scale, implementing innovation in higher education poses many challenges. Sustaining the innovation requires early adoption of a coherent implementation strategy. Using an example from clinical education, this article describes a process used to implement a large-scale innovation with the intent of achieving sustainability. Desire to improve the effectiveness of undergraduate medical education has led to growing support for a longitudinal integrated clerkship (LIC) model. This involves a move away from the traditional clerkship of 'block rotations' with frequent changes in disciplines, to a focus upon clerkships with longer duration and opportunity for students to build sustained relationships with supervisors, mentors, colleagues and patients. A growing number of medical schools have adopted the LIC model for a small percentage of their students. At a time when increasing medical school numbers and class sizes are leading to competition for clinical supervisors it is however a daunting challenge to provide a longitudinal clerkship for an entire medical school class. This challenge is presented to illustrate the strategy used to implement sustainable large scale innovation. A strategy to implement and build a sustainable longitudinal integrated community-based clerkship experience for all students was derived from a framework arising from Roberto and Levesque's research in business. The framework's four core processes: chartering, learning, mobilising and realigning, provided guidance in preparing and rolling out the 'whole of class' innovation. Roberto and Levesque's framework proved useful for identifying the foundations of the implementation strategy, with special emphasis on the relationship building required to implement such an ambitious initiative. Although this was innovation in a new School it required change within the school, wider university and health community. Challenges encountered included some resistance to moving away from traditional hospital-centred education, initial student concern, resource limitations, workforce shortage and potential burnout of the innovators. Large-scale innovations in medical education may productively draw upon research from other disciplines for guidance on how to lay the foundations for successfully achieving sustainability.

  16. Global Performance Characterization of the Three Burn Trans-Earth Injection Maneuver Sequence over the Lunar Nodal Cycle

    NASA Technical Reports Server (NTRS)

    Williams, Jacob; Davis, Elizabeth C.; Lee, David E.; Condon, Gerald L.; Dawn, Tim

    2009-01-01

    The Orion spacecraft will be required to perform a three-burn trans-Earth injection (TEI) maneuver sequence to return to Earth from low lunar orbit. The origin of this approach lies in the Constellation Program requirements for access to any lunar landing site location combined with anytime lunar departure. This paper documents the development of optimized databases used to rapidly model the performance requirements of the TEI three-burn sequence for an extremely large number of mission cases. It also discusses performance results for lunar departures covering a complete 18.6 year lunar nodal cycle as well as general characteristics of the optimized three-burn TEI sequence.

  17. Biologically-Inspired Concepts for Self-Management of Complexity

    NASA Technical Reports Server (NTRS)

    Sterritt, Roy; Hinchey, G.

    2006-01-01

    Inherent complexity in large-scale applications may be impossible to eliminate or even ameliorate despite a number of promising advances. In such cases, the complexity must be tolerated and managed. Such management may be beyond the abilities of humans, or require such overhead as to make management by humans unrealistic. A number of initiatives inspired by concepts in biology have arisen for self-management of complex systems. We present some ideas and techniques we have been experimenting with, inspired by lesser-known concepts in biology that show promise in protecting complex systems and represent a step towards self-management of complexity.

  18. Practice does make perfect. A longitudinal look at repeated taste exposure.

    PubMed

    Williams, Keith E; Paul, Candace; Pizzo, Bianca; Riegel, Katherine

    2008-11-01

    Previous research has found that 10-15 exposures to a novel food can increase liking and consumption. This research has, however, been largely limited to cross-sectional studies in which participants are offered only one or a few novel foods. The goal of the current study is to use a small clinical sample to demonstrate that the number of exposures required for consumption of novel foods decreases as a greater number of foods are added to the diet. Evidence that fewer exposures are needed over time may make interventions based upon repeated exposure more acceptable to parents and clinicians.

  19. Environmental Requirements Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cusack, Laura J.; Bramson, Jeffrey E.; Archuleta, Jose A.

    2015-01-08

    CH2M HILL Plateau Remediation Company (CH2M HILL) is the U.S. Department of Energy (DOE) prime contractor responsible for the environmental cleanup of the Hanford Site Central Plateau. As part of this responsibility, CH2M HILL is faced with the task of complying with thousands of environmental requirements which originate from over 200 federal, state, and local laws and regulations, DOE Orders, waste management and effluent discharge permits, Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) response and Resource Conservation and Recovery Act (RCRA) corrective action documents, and official regulatory agency correspondence. The challenge is to manage this vast number of requirements to ensure they are appropriately and effectively integrated into CH2M HILL operations. Ensuring compliance with a large number of environmental requirements relies on an organization's ability to identify, evaluate, communicate, and verify those requirements. To ensure that compliance is maintained, all changes need to be tracked. CH2M HILL identified that the existing system used to manage environmental requirements was difficult to maintain and that improvements should be made to increase functionality. CH2M HILL established an environmental requirements management procedure and tools to assure that all environmental requirements are effectively and efficiently managed. Having a complete and accurate set of environmental requirements applicable to CH2M HILL operations will promote a more efficient approach to: • Communicating requirements • Planning work • Maintaining work controls • Maintaining compliance

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dragone, A; /SLAC; Pratte, J.F.

    An ASIC for the readout of signals from X-ray Active Matrix Pixel Sensor (XAMPS) detectors to be used at the Linac Coherent Light Source (LCLS) is presented. The X-ray Pump Probe (XPP) instrument, for which the ASIC has been designed, requires a large input dynamic range on the order of 10^4 photons at 8 keV with a resolution of half a photon FWHM. Due to the size of the pixel and the length of the readout line, a large input capacitance is expected, leading to stringent requirements on the noise optimization. Furthermore, the large number of pixels needed for good position resolution and the fixed LCLS beam period impose limitations on the time available for the single-pixel readout. Considering the periodic nature of the LCLS beam, the ASIC developed for this application is a time-variant system providing low-noise charge integration, filtering, and correlated double sampling. In order to cope with the large input dynamic range, a charge pump scheme implementing a zero-balance measurement method has been introduced. It provides an on-chip 3-bit coarse digital conversion of the integrated charge. The residual charge is sampled using correlated double sampling into analog memory and measured with the required resolution. The first 64-channel prototype of the ASIC has been fabricated in TSMC 0.25 µm CMOS technology. In this paper, the ASIC architecture and performance are presented.

  1. Using next-generation sequencing for high resolution multiplex analysis of copy number variation from nanogram quantities of DNA from formalin-fixed paraffin-embedded specimens.

    PubMed

    Wood, Henry M; Belvedere, Ornella; Conway, Caroline; Daly, Catherine; Chalkley, Rebecca; Bickerdike, Melissa; McKinley, Claire; Egan, Phil; Ross, Lisa; Hayward, Bruce; Morgan, Joanne; Davidson, Leslie; MacLennan, Ken; Ong, Thian K; Papagiannopoulos, Kostas; Cook, Ian; Adams, David J; Taylor, Graham R; Rabbitts, Pamela

    2010-08-01

    The use of next-generation sequencing technologies to produce genomic copy number data has recently been described. Most approaches, however, rely on optimal starting DNA, and are therefore unsuitable for the analysis of formalin-fixed paraffin-embedded (FFPE) samples, which largely precludes the analysis of many tumour series. We have sought to challenge the limits of this technique with regard to quality and quantity of starting material and the depth of sequencing required. We confirm that the technique can be used to interrogate DNA from cell lines, fresh frozen material and FFPE samples to assess copy number variation. We show that as little as 5 ng of DNA is needed to generate a copy number karyogram, and follow this up with data from a series of FFPE biopsies and surgical samples. We have used various levels of sample multiplexing to demonstrate the adjustable resolution of the methodology, depending on the number of samples and available resources. We also demonstrate reproducibility by use of replicate samples and comparison with microarray-based comparative genomic hybridization (aCGH) and digital PCR. This technique can be valuable in both the analysis of routine diagnostic samples and in examining large repositories of fixed archival material.

  2. Organizational structure and communication networks in a university environment

    NASA Astrophysics Data System (ADS)

    Mathiesen, Joachim; Jamtveit, Bjørn; Sneppen, Kim

    2010-07-01

    The “six degrees of separation” between any two individuals on Earth has become emblematic of the “small world” theme, even though the information conveyed via a chain of human encounters decays very rapidly with increasing chain length, and diffusion of information via this process may be very inefficient in large human organizations. The information flow on a communication network in a large organization, the University of Oslo, has been studied by analyzing email records. The records allow for quantification of communication intensity across organizational levels and between organizational units (referred to as “modules”). We find that the number of email messages within modules scales with module size to the power of 1.29 ± 0.06, and the frequency of communication between individuals decays exponentially with the number of links required upward in the organizational hierarchy before they are connected. Our data also indicate that the number of messages sent by administrative units is proportional to the number of individuals at lower levels in the administrative hierarchy, and the “divergence of information” within modules is associated with this linear relationship. The observed scaling is consistent with a hierarchical system in which individuals far apart in the organization interact little with each other and receive a disproportionate number of messages from higher levels in the administrative hierarchy.
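
    Stated compactly (the symbols below are introduced here only for illustration and are not taken from the paper), with M the number of messages within a module of size s and f the frequency of communication between two individuals separated by d hierarchical links, the reported relations are roughly:

```latex
M(s) \propto s^{1.29 \pm 0.06},
\qquad
f(d) \propto e^{-d/d_0},
```

    where d_0 is an unspecified decay constant.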

  3. Pulsed beam of extremely large helium droplets

    NASA Astrophysics Data System (ADS)

    Kuma, Susumu; Azuma, Toshiyuki

    2017-12-01

    We generated a pulsed helium droplet beam with average droplet diameters of up to 2 μm using a solenoid pulsed valve operated at temperatures as low as 7 K. The droplet diameter was controllable over two orders of magnitude, or six orders of magnitude in the number of atoms per droplet, by lowering the valve temperature from 21 to 7 K. A sudden droplet size change attributed to the so-called “supercritical expansion” was first observed in pulsed mode, which is necessary to obtain the micrometer-scale droplets. This beam source is beneficial for experiments that require extremely large helium droplets in intense, pulsed form.

  4. Upper bounds on asymmetric dark matter self annihilation cross sections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellwanger, Ulrich; Mitropoulos, Pantelis, E-mail: ulrich.ellwanger@th.u-psud.fr, E-mail: pantelis.mitropoulos@th.u-psud.fr

    2012-07-01

    Most models for asymmetric dark matter allow for dark matter self annihilation processes, which can wash out the asymmetry at temperatures near and below the dark matter mass. We study the coupled set of Boltzmann equations for the symmetric and antisymmetric dark matter number densities, and derive conditions applicable to a large class of models for the absence of a significant wash-out of an asymmetry. These constraints are applied to various existing scenarios. In the case of left- or right-handed sneutrinos, very large electroweak gaugino masses, or very small mixing angles are required.

  5. Methods comparison for microsatellite marker development: Different isolation methods, different yield efficiency

    NASA Astrophysics Data System (ADS)

    Zhan, Aibin; Bao, Zhenmin; Hu, Xiaoli; Lu, Wei; Hu, Jingjie

    2009-06-01

    Microsatellite markers have become one of the most important kinds of molecular tools used in a wide range of research. A large number of microsatellite markers are required for whole-genome surveys in the fields of molecular ecology, quantitative genetics, and genomics. Therefore, it is extremely necessary to select versatile, low-cost, efficient, and time- and labor-saving methods to develop a large panel of microsatellite markers. In this study, we used the Zhikong scallop (Chlamys farreri) as the target species to compare the efficiency of five methods derived from three strategies for microsatellite marker development. The results showed that the strategy of constructing a small-insert genomic DNA library resulted in poor efficiency, while the microsatellite-enriched strategy greatly improved the isolation efficiency. Although the public-database-mining strategy is time- and cost-saving, it is difficult to obtain a large number of microsatellite markers this way, mainly due to the limited sequence data of non-model species deposited in public databases. Based on the results of this study, we recommend two methods, the microsatellite-enriched library construction method and the FIASCO-colony hybridization method, for large-scale microsatellite marker development. Both methods were derived from the microsatellite-enriched strategy. The experimental results obtained from the Zhikong scallop also provide a reference for microsatellite marker development in other species with large genomes.

  6. A Genetic Algorithm for Flow Shop Scheduling with Assembly Operations to Minimize Makespan

    NASA Astrophysics Data System (ADS)

    Bhongade, A. S.; Khodke, P. M.

    2014-04-01

    Manufacturing systems in which several parts are processed through machining workstations and later assembled to form final products are common. Though scheduling problems of this kind are solved using heuristics, available solution approaches can handle only moderately sized problems because of the large computation time required. In this work, a scheduling approach is developed for such a flow-shop manufacturing system with machining workstations followed by assembly workstations. The initial schedule is generated using the disjunctive method, and a genetic algorithm (GA) is then applied to generate schedules for large problems. The GA is found to give near-optimal solutions based on the deviation of makespan from the lower bound. The lower bound of the makespan of such a problem is estimated, and the percent deviation of makespan from the lower bound is used as a performance measure to evaluate the schedules. Computational experiments are conducted on problems developed using a fractional factorial orthogonal array, varying the number of parts per product, the number of products, and the number of workstations (ranging up to 1,520 operations). A statistical analysis indicated the significance of all three factors considered. It is concluded that the GA method can obtain an optimal makespan.
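
    The permutation-GA loop referred to above can be sketched roughly as follows. This is a minimal illustration under simplifying assumptions (a plain permutation flow shop, hypothetical processing times, and generic crossover/mutation operators), not the scheduling scheme or parameter settings of the paper.

```python
import random

def makespan(perm, proc):
    """Completion time of the last job on the last machine for a permutation flow shop."""
    n_machines = len(proc[0])
    finish = [0.0] * n_machines
    for job in perm:
        for m in range(n_machines):
            ready = finish[m - 1] if m > 0 else 0.0   # job finished on previous machine
            finish[m] = max(finish[m], ready) + proc[job][m]
    return finish[-1]

def order_crossover(p1, p2):
    """Order crossover (OX): keep a slice of p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    fill = [j for j in p2 if j not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def genetic_flow_shop(proc, pop_size=50, generations=200, mutation_rate=0.2):
    n_jobs = len(proc)
    pop = [random.sample(range(n_jobs), n_jobs) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: makespan(p, proc))
        survivors = pop[: pop_size // 2]               # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            child = order_crossover(p1, p2)
            if random.random() < mutation_rate:        # swap mutation
                i, j = random.sample(range(n_jobs), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda p: makespan(p, proc))
    return best, makespan(best, proc)

if __name__ == "__main__":
    # Hypothetical processing times: proc[job][machine]
    proc = [[5, 3, 6], [2, 7, 4], [6, 2, 5], [4, 4, 3], [3, 6, 2]]
    best, cmax = genetic_flow_shop(proc)
    print("best sequence:", best, "makespan:", cmax)
```

    The makespan routine evaluates a job sequence machine by machine; the GA then evolves whole sequences toward lower makespan, which is the quantity compared against the estimated lower bound in the abstract.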

  7. The QSE-Reduced Nuclear Reaction Network for Silicon Burning

    NASA Astrophysics Data System (ADS)

    Hix, W. Raphael; Parete-Koon, Suzanne T.; Freiburghaus, Christian; Thielemann, Friedrich-Karl

    2007-09-01

    Iron and neighboring nuclei are formed in massive stars shortly before core collapse and during their supernova outbursts, as well as during thermonuclear supernovae. Complete and incomplete silicon burning are responsible for the production of a wide range of nuclei with atomic mass numbers from 28 to 64. Because of the large number of nuclei involved, accurate modeling of silicon burning is computationally expensive. However, examination of the physics of silicon burning has revealed that the nuclear evolution is dominated by large groups of nuclei in mutual equilibrium. We present a new hybrid equilibrium-network scheme which takes advantage of this quasi-equilibrium in order to reduce the number of independent variables calculated. This allows accurate prediction of the nuclear abundance evolution, deleptonization, and energy generation at a greatly reduced computational cost when compared to a conventional nuclear reaction network. During silicon burning, the resultant QSE-reduced network is approximately an order of magnitude faster than the full network it replaces and requires the tracking of less than a third as many abundance variables, without significant loss of accuracy. These reductions in computational cost and the number of species evolved make QSE-reduced networks well suited for inclusion within hydrodynamic simulations, particularly in multidimensional applications.

  8. A statistical approach to detection of copy number variations in PCR-enriched targeted sequencing data.

    PubMed

    Demidov, German; Simakova, Tamara; Vnuchkova, Julia; Bragin, Anton

    2016-10-22

    Multiplex polymerase chain reaction (PCR) is a common enrichment technique for targeted massively parallel sequencing (MPS) protocols. MPS is widely used in biomedical research and clinical diagnostics as a fast and accurate tool for the detection of short genetic variations. However, identification of larger variations such as structural variants and copy number variations (CNVs) is still a challenge for targeted MPS. Some approaches and tools for structural variant detection have been proposed, but they have limitations and often require datasets of a certain type, size, and expected number of amplicons affected by CNVs. In this paper, we describe a novel algorithm for high-resolution germline CNV detection in PCR-enriched targeted sequencing data and present the accompanying tool. We have developed a machine learning algorithm for the detection of large duplications and deletions in targeted sequencing data generated with a PCR-based enrichment step. We have performed verification studies and established the algorithm's sensitivity and specificity. We have compared the developed tool with other available methods applicable to the described data and found that it performs better. We showed that our method has high specificity and sensitivity for high-resolution copy number detection in targeted sequencing data, using a large cohort of samples.

  9. Weak-signal Phase Calibration Strategies for Large DSN Arrays

    NASA Technical Reports Server (NTRS)

    Jones, Dayton L.

    2005-01-01

    The NASA Deep Space Network (DSN) is studying arrays of large numbers of small, mass-produced radio antennas as a cost-effective way to increase downlink sensitivity and data rates for future missions. An important issue for the operation of large arrays is the accuracy with which signals from hundreds of small antennas can be combined. This is particularly true at Ka band (32 GHz) where atmospheric phase variations can be large and rapidly changing. A number of algorithms exist to correct the phases of signals from individual antennas in the case where a spacecraft signal provides a useful signal-to-noise ratio (SNR) on time scales shorter than the atmospheric coherence time. However, for very weak spacecraft signals it will be necessary to rely on background natural radio sources to maintain array phasing. Very weak signals could result from a spacecraft emergency or by design, such as direct-to-Earth data transmissions from distant planetary atmospheric or surface probes using only low gain antennas. This paper considers the parameter space where external real-time phase calibration will be necessary, and what this requires in terms of array configuration and signal processing. The inherent limitations of this technique are also discussed.

  10. Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel

    2004-01-01

    A new, high-order, conservative, and efficient discontinuous spectral finite difference (SD) method for conservation laws on unstructured grids is developed. The concept of discontinuous and high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) and the Spectral Volume (SV) methods, but while these methods are based on the integrated forms of the equations, the new method is based on the differential form to attain a simpler formulation and higher efficiency. Conventional unstructured finite-difference and finite-volume methods require data reconstruction based on the least-squares formulation using neighboring point or cell data. Since each unknown employs a different stencil, one must repeat the least-squares inversion for every point or cell at each time step, or to store the inversion coefficients. In a high-order, three-dimensional computation, the former would involve impractically large CPU time, while for the latter the memory requirement becomes prohibitive. In addition, the finite-difference method does not satisfy the integral conservation in general. By contrast, the DG and SV methods employ a local, universal reconstruction of a given order of accuracy in each cell in terms of internally defined conservative unknowns. Since the solution is discontinuous across cell boundaries, a Riemann solver is necessary to evaluate boundary flux terms and maintain conservation. In the DG method, a Galerkin finite-element method is employed to update the nodal unknowns within each cell. This requires the inversion of a mass matrix, and the use of quadratures of twice the order of accuracy of the reconstruction to evaluate the surface integrals and additional volume integrals for nonlinear flux functions. In the SV method, the integral conservation law is used to update volume averages over subcells defined by a geometrically similar partition of each grid cell. As the order of accuracy increases, the partitioning for 3D requires the introduction of a large number of parameters, whose optimization to achieve convergence becomes increasingly more difficult. Also, the number of interior facets required to subdivide non-planar faces, and the additional increase in the number of quadrature points for each facet, increases the computational cost greatly.
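
    For reference, the distinction drawn above between the integrated and differential starting points can be written in standard notation (not taken from the paper):

```latex
\frac{d}{dt}\int_{V} u \, dV \;+\; \oint_{\partial V} \mathbf{f}(u)\cdot\mathbf{n}\, dS = 0
\qquad \text{vs.} \qquad
\frac{\partial u}{\partial t} + \nabla\cdot\mathbf{f}(u) = 0,
```

    with the DG and SV methods built on the former and the SD method discretizing the latter directly.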

  11. Compact high-resolution spectrographs for large and extremely large telescopes: using the diffraction limit

    NASA Astrophysics Data System (ADS)

    Robertson, J. Gordon; Bland-Hawthorn, Joss

    2012-09-01

    As telescopes get larger, the size of a seeing-limited spectrograph for a given resolving power becomes larger also, and for ELTs the size will be so great that high resolution instruments of simple design will be infeasible. Solutions include adaptive optics (but not providing full correction for short wavelengths) or image slicers (which give feasible but still large instruments). Here we develop the solution proposed by Bland-Hawthorn and Horton: the use of diffraction-limited spectrographs which are compact even for high resolving power. Their use is made possible by the photonic lantern, which splits a multi-mode optical fiber into a number of single-mode fibers. We describe preliminary designs for such spectrographs, at a resolving power of R ~ 50,000. While they are small and use relatively simple optics, the challenges are to accommodate the longest possible fiber slit (hence maximum number of single-mode fibers in one spectrograph) and to accept the beam from each fiber at a focal ratio considerably faster than for most spectrograph collimators, while maintaining diffraction-limited imaging quality. It is possible to obtain excellent performance despite these challenges. We also briefly consider the number of such spectrographs required, which can be reduced by full or partial adaptive optics correction, and/or moving towards longer wavelengths.

  12. Fast Formal Analysis of Requirements via "Topoi Diagrams"

    NASA Technical Reports Server (NTRS)

    Menzies, Tim; Powell, John; Houle, Michael E.; Kelly, John C. (Technical Monitor)

    2001-01-01

    Early testing of requirements can decrease the cost of removing errors in software projects. However, unless done carefully, that testing process can significantly add to the cost of requirements analysis. We show here that requirements expressed as topoi diagrams can be built and tested cheaply using our SP2 algorithm: the formal temporal properties of a large class of topoi can be proven very quickly, in time nearly linear in the number of nodes and edges in the diagram. There are two limitations to our approach. Firstly, topoi diagrams cannot express certain complex concepts such as iteration and sub-routine calls. Hence, our approach is more useful for requirements engineering than for traditional model checking domains. Secondly, our approach is better for exploring the temporal occurrence of properties than the temporal ordering of properties. Within these restrictions, we can express a useful range of concepts currently seen in requirements engineering, and a wide range of interesting temporal properties.

  13. Parametric investigation of single-expansion-ramp nozzles at Mach numbers from 0.60 to 1.20

    NASA Technical Reports Server (NTRS)

    Capone, Francis J.; Re, Richard J.; Bare, E. Ann

    1992-01-01

    An investigation was conducted in the Langley 16-Foot Transonic Tunnel to determine the effects of varying six nozzle geometric parameters on the internal and aeropropulsive performance characteristics of single-expansion-ramp nozzles. This investigation was conducted at Mach numbers from 0.60 to 1.20, nozzle pressure ratios from 1.5 to 12, and angles of attack of 0 deg +/- 6 deg. Maximum aeropropulsive performance at a particular Mach number was highly dependent on the operating nozzle pressure ratio. For example, as the nozzle upper ramp length or angle increased, some nozzles had higher performance at a Mach number of 0.90 because the nozzle design pressure ratio was the same as the operating pressure ratio. Thus, selection of the various nozzle geometric parameters should be based on the mission requirements of the aircraft. A combination of large upper ramp and large lower flap boattail angles produced greater nozzle drag coefficients at Mach numbers greater than 0.80, primarily from shock-induced separation on the lower flap of the nozzle. At static conditions, the convergent nozzle had high and nearly constant values of resultant thrust ratio over the entire range of nozzle pressure ratios tested. However, these nozzles had much lower aeropropulsive performance than the convergent-divergent nozzles at Mach numbers greater than 0.60.

  14. Facilitating large-scale clinical trials: in Asia.

    PubMed

    Choi, Han Yong; Ko, Jae-Wook

    2010-01-01

    The number of clinical trials conducted in Asian countries has started to increase as a result of expansion of the pharmaceutical market in this area. There is a growing opportunity for large-scale clinical trials because of the large number of patients, significant market potential, good quality of data, and the cost effective and qualified medical infrastructure. However, for carrying out large-scale clinical trials in Asia, there are several major challenges, including the quality control of data, budget control, laboratory validation, monitoring capacity, authorship, staff training, and nonstandard treatment that need to be considered. There are also several difficulties in collaborating on international trials in Asia because Asia is an extremely diverse continent. The major challenges are language differences, diversity of patterns of disease, and current treatments, a large gap in the experience with performing multinational trials, and regulatory differences among the Asian countries. In addition, there are also differences in the understanding of global clinical trials, medical facilities, indemnity assurance, and culture, including food and religion. To make regional and local data provide evidence for efficacy through the standardization of these differences, unlimited effort is required. At this time, there are no large clinical trials led by urologists in Asia, but it is anticipated that the role of urologists in clinical trials will continue to increase. Copyright © 2010 Elsevier Inc. All rights reserved.

  15. A National Perspective: An Analysis of Factors That Influence Special Educators to Remain in the Field of Education

    ERIC Educational Resources Information Center

    Nickson, Lautrice M.; Kritsonis, William Allan

    2006-01-01

    The purpose of this article is to analyze factors that influence special educators to remain in the field of education. School administrators are perplexed by the large number of teachers who decide to leave the field of education after three years. The retention rates of special educators require school administrators to focus on developing a…

  16. Inactive Wells: Economic and Policy Issues

    NASA Astrophysics Data System (ADS)

    Krupnick, A.

    2016-12-01

    This paper examines the economic and policy issues associated with various types of inactive oil and gas wells. It covers the costs of decommissioning wells, and compares them to the bonding requirements on these wells, looking at a large number of states. It also reviews the detailed regulations governing treatment of inactive wells by states and the federal government and compares them according to their completeness and stringency.

  17. Meeting the Health Care Needs of Students with Severe Disabilities in the School Setting: Collaboration between School Nurses and Special Education Teachers

    ERIC Educational Resources Information Center

    Pufpaff, Lisa A.; Mcintosh, Constance E.; Thomas, Cynthia; Elam, Megan; Irwin, Mary Kay

    2015-01-01

    The number of students with special healthcare needs (SHCN) and severe disabilities in public schools in the United States has steadily increased in recent years, largely due to the changing landscape of public health relative to advances in medicine and medical technology. The specialized care required for these students often necessitates…

  18. Inundative release of Aphthona spp. flea beetles (Coleoptera: Chrysomelidae) as a biological "herbicide" on leafy spurge in riparian areas

    Treesearch

    R. A. Progar; G. Markin; J. Milan; T. Barbouletos; M. J. Rinella

    2010-01-01

    Inundative releases of beneficial insects are frequently used to suppress pest insects but not commonly attempted as a method of weed biological control because of the difficulty in obtaining the required large numbers of insects. The successful establishment of a flea beetle complex, mixed Aphthona lacertosa (Rosenhauer) and Aphthona nigriscutus Foundras (87 and 13%,...

  19. Using Consumer Preference Information to Increase the Reach and Impact of Media-Based Parenting Interventions in a Public Health Approach to Parenting Support

    ERIC Educational Resources Information Center

    Metzler, Carol W.; Sanders, Matthew R.; Rusby, Julie C.; Crowley, Ryann N.

    2012-01-01

    Within a public health approach to improving parenting, the mass media offer a potentially more efficient and affordable format for directly reaching a large number of parents with evidence-based parenting information than do traditional approaches to parenting interventions that require delivery by a practitioner. Little is known, however, about…

  20. Inundative release of Aphthona spp. flea beetles (Coleoptera: Chrysomelidae) as a biological "herbicide" on leafy spurge (Euphorbia esula) in riparian areas

    Treesearch

    R. A. Progar; G. P. Markin; J. Milan; T. Barbouletos; M. J. Rinella

    2013-01-01

    Inundative releases of beneficial insects are frequently used to suppress pest insects, but not commonly attempted as a method of weed biological control because of the difficulty in obtaining the required large numbers of insects. The successful establishment of a flea beetle complex, mixed Aphthona lacertosa Rosenhauer and A. nigriscutus Foudras (87% and 13%,...

  1. Management Teams and Teaching Staff: Do They Share the Same Beliefs about Obligatory CLIL Programmes and the Use of the L1?

    ERIC Educational Resources Information Center

    Doiz, Aintzane; Lasagabaster, David

    2017-01-01

    The popularity of CLIL (Content and Language Integrated Learning) continues to spread in education systems around the world. However, and despite the large number of studies recently published, we know little about how CLIL teachers and management teams feel regarding CLIL. In this paper, we analyse two contentious matters that require further…

  2. Entropic forces drive self-organization and membrane fusion by SNARE proteins

    PubMed Central

    Stratton, Benjamin S.; Warner, Jason M.; Rothman, James E.; O’Shaughnessy, Ben

    2017-01-01

    SNARE proteins are the core of the cell’s fusion machinery and mediate virtually all known intracellular membrane fusion reactions on which exocytosis and trafficking depend. Fusion is catalyzed when vesicle-associated v-SNAREs form trans-SNARE complexes (“SNAREpins”) with target membrane-associated t-SNAREs, a zippering-like process releasing ∼65 kT per SNAREpin. Fusion requires several SNAREpins, but how they cooperate is unknown and reports of the number required vary widely. To capture the collective behavior on the long timescales of fusion, we developed a highly coarse-grained model that retains key biophysical SNARE properties such as the zippering energy landscape and the surface charge distribution. In simulations the ∼65-kT zippering energy was almost entirely dissipated, with fully assembled SNARE motifs but uncomplexed linker domains. The SNAREpins self-organized into a circular cluster at the fusion site, driven by entropic forces that originate in steric–electrostatic interactions among SNAREpins and membranes. Cooperative entropic forces expanded the cluster and pulled the membranes together at the center point with high force. We find that there is no critical number of SNAREs required for fusion, but instead the fusion rate increases rapidly with the number of SNAREpins due to increasing entropic forces. We hypothesize that this principle finds physiological use to boost fusion rates to meet the demanding timescales of neurotransmission, exploiting the large number of v-SNAREs available in synaptic vesicles. Once in an unfettered cluster, we estimate ≥15 SNAREpins are required for fusion within the ∼1-ms timescale of neurotransmitter release. PMID:28490503

  3. Small- and Large-Effect Quantitative Trait Locus Interactions Underlie Variation in Yeast Sporulation Efficiency

    PubMed Central

    Lorenz, Kim; Cohen, Barak A.

    2012-01-01

    Quantitative trait loci (QTL) with small effects on phenotypic variation can be difficult to detect and analyze. Because of this a large fraction of the genetic architecture of many complex traits is not well understood. Here we use sporulation efficiency in Saccharomyces cerevisiae as a model complex trait to identify and study small-effect QTL. In crosses where the large-effect quantitative trait nucleotides (QTN) have been genetically fixed we identify small-effect QTL that explain approximately half of the remaining variation not explained by the major effects. We find that small-effect QTL are often physically linked to large-effect QTL and that there are extensive genetic interactions between small- and large-effect QTL. A more complete understanding of quantitative traits will require a better understanding of the numbers, effect sizes, and genetic interactions of small-effect QTL. PMID:22942125

  4. Comments on the MIT Assessment of the Mars One Plan

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2015-01-01

    The MIT assessment of the Mars One mission plan reveals design assumptions that would cause significant difficulties. Growing crops in the crew chamber produces excessive oxygen levels. The assumed in-situ resource utilization (ISRU) equipment has too low a Technology Readiness Level (TRL). The required spare parts cause a large and increasing launch mass logistics burden. The assumed International Space Station (ISS) Environmental Control and Life Support (ECLS) technologies were developed for microgravity and therefore are not suitable for Mars gravity. Growing food requires more mass than sending food from Earth. The large number of spares is due to the relatively low reliability of ECLS and the low TRL of ISRU. The Mars One habitat design is similar to past concepts but does not incorporate current knowledge. The MIT architecture analysis tool for long-term settlements on the Martian surface includes an ECLS system simulation, an ISRU sizing model, and an analysis of required spares. The MIT tool showed the need for separate crop and crew chambers, the large spare parts logistics, that crops require more mass than Earth food, and that more spares are needed if reliability is lower. That ISRU has low TRL and ISS ECLS was designed for microgravity are well known. Interestingly, the results produced by the architecture analysis tool - separate crop chamber, large spares mass, large crop chamber mass, and low reliability requiring more spares - were also well known. A common approach to ECLS architecture analysis is to build a complex model that is intended to be all-inclusive and is hoped will help solve all design problems. Such models can struggle to replicate obvious and well-known results and are often unable to answer unanticipated new questions. A better approach would be to survey the literature for background knowledge and then directly analyze the important problems.

  5. Using All-Sky Imaging to Improve Telescope Scheduling (Abstract)

    NASA Astrophysics Data System (ADS)

    Cole, G. M.

    2017-12-01

    (Abstract only) Automated scheduling makes it possible for a small telescope to observe a large number of targets in a single night. But when used in areas which have less-than-perfect sky conditions such automation can lead to large numbers of observations of clouds and haze. This paper describes the development of a "sky-aware" telescope automation system that integrates the data flow from an SBIG AllSky340c camera with an enhanced dispatch scheduler to make optimum use of the available observing conditions for two highly instrumented backyard telescopes. Using the minute-by-minute time series image stream and a self-maintained reference database, the software maintains a file of sky brightness, transparency, stability, and forecasted visibility at several hundred grid positions. The scheduling software uses this information in real time to exclude targets obscured by clouds and select the best observing task, taking into account the requirements and limits of each instrument.
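
    The target-selection step described above can be sketched as a simple dispatch rule. The sketch below is illustrative only; the grid representation, field names, scoring weights, and example targets are assumptions, not the author's software.

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    grid_cell: int       # index into the all-sky condition grid
    priority: float      # science priority set by the observer

@dataclass
class SkyCell:
    transparency: float  # 0 (opaque cloud) .. 1 (clear)
    brightness: float    # relative sky background
    stable: bool         # short-term stability from the image time series

def pick_next_target(targets, sky, min_transparency=0.7):
    """Dispatch-style choice: drop targets behind clouds or in unstable sky,
    then score the rest by priority and the conditions at their grid position."""
    def score(t):
        cell = sky[t.grid_cell]
        return t.priority * cell.transparency / (1.0 + cell.brightness)

    usable = [t for t in targets
              if sky[t.grid_cell].transparency >= min_transparency
              and sky[t.grid_cell].stable]
    return max(usable, key=score) if usable else None

# Hypothetical snapshot of conditions at three grid positions.
sky = [SkyCell(0.95, 0.2, True), SkyCell(0.40, 0.8, False), SkyCell(0.85, 0.3, True)]
targets = [Target("T1", 0, 0.9), Target("T2", 1, 1.0), Target("T3", 2, 0.7)]
best = pick_next_target(targets, sky)
print("observe next:", best.name if best else "nothing (sky unusable)")
```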

  6. Dynamic Responses of Flexible Cylinders with Low Mass Ratio

    NASA Astrophysics Data System (ADS)

    Olaoye, Abiodun; Wang, Zhicheng; Triantafyllou, Michael

    2017-11-01

    Flexible cylinders with low mass ratios, such as composite risers, are attractive in the offshore industry because they require lower top tension and are less likely to buckle under self-weight compared to steel risers. However, their relatively low stiffness makes them more vulnerable to vortex-induced vibrations. Additionally, numerical investigation of the dynamic responses of such structures under realistic conditions is limited by the challenges of high Reynolds number, complex sheared flow profiles, large aspect ratio, and low mass ratio. In the framework of the Fourier spectral/hp element method, the current technique employs an entropy-viscosity method (EVM) based large-eddy simulation approach for the flow solver and a fictitious added mass method for the structure solver. The combination of both methods can handle fluid-structure interaction problems at high Reynolds number with low mass ratio. A validation of the numerical approach is provided by comparison with experiments.

  7. Alternating Magnetic Field Forces for Satellite Formation Flying

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Nurge, Mark A.; Starr, Stanley O.

    2012-01-01

    Selected future space missions, such as large aperture telescopes and multi-component interferometers, will require the precise positioning of a number of isolated satellites, yet many of the suggested approaches for providing satellite positioning forces have serious limitations. In this paper we propose a new approach, capable of providing both position and orientation forces, that resolves or alleviates many of these problems. We show that by using alternating fields and currents, finely controlled forces can be induced on the satellites, which can be individually selected through frequency allocation. We also show, through analysis and experiment, that near field operation is feasible and can provide sufficient force and the necessary degrees of freedom to accurately position and orient small satellites relative to one another. In particular, the case of a telescope with a large number of free mirrors is developed to provide an example of the concept. We also discuss the far field extension of this concept.

  8. Solving very large, sparse linear systems on mesh-connected parallel computers

    NASA Technical Reports Server (NTRS)

    Opsahl, Torstein; Reif, John

    1987-01-01

    The implementation of Pan and Reif's Parallel Nested Dissection (PND) algorithm on mesh connected parallel computers is described. This is the first known algorithm that allows very large, sparse linear systems of equations to be solved efficiently in polylog time using a small number of processors. How the processor bound of PND can be matched to the number of processors available on a given parallel computer by slowing down the algorithm by constant factors is described. Also, for the important class of problems where G(A) is a grid graph, a unique memory mapping that reduces the inter-processor communication requirements of PND to those that can be executed on mesh connected parallel machines is detailed. A description of an implementation on the Goodyear Massively Parallel Processor (MPP), located at Goddard is given. Also, a detailed discussion of data mappings and performance issues is given.

  9. Forecasting drought risks for a water supply storage system using bootstrap position analysis

    USGS Publications Warehouse

    Tasker, Gary; Dunne, Paul

    1997-01-01

    Forecasting the likelihood of drought conditions is an integral part of managing a water supply storage and delivery system. Position analysis uses a large number of possible flow sequences as inputs to a simulation of a water supply storage and delivery system. For a given set of operating rules and water use requirements, water managers can use such a model to forecast the likelihood of specified outcomes, such as reservoir levels falling below a specified level or streamflows falling below statutory passing flows, a few months ahead, conditioned on the current reservoir levels and streamflows. The large number of possible flow sequences is generated using a stochastic streamflow model with random resampling of innovations. The advantages of this resampling scheme, called bootstrap position analysis, are that it does not rely on the unverifiable assumption of normality and it allows incorporation of long-range weather forecasts into the analysis.
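
    The resampling idea can be sketched as follows. This is a minimal illustration rather than the authors' implementation; the lag-1 autoregressive flow model, the parameter values, and the drought threshold are hypothetical.

```python
import numpy as np

def bootstrap_position_analysis(history, current_flow, months_ahead=6,
                                n_traces=1000, threshold=50.0, seed=0):
    """Generate many synthetic flow sequences by resampling model residuals
    (the bootstrap step) and estimate the probability that flow drops below
    a threshold within the forecast horizon."""
    rng = np.random.default_rng(seed)
    log_q = np.log(history)

    # Fit a simple lag-1 autoregressive model to log flows (illustrative only).
    mean = log_q.mean()
    lag1 = np.corrcoef(log_q[:-1], log_q[1:])[0, 1]
    resid = log_q[1:] - (mean + lag1 * (log_q[:-1] - mean))

    hits = 0
    for _ in range(n_traces):
        q = np.log(current_flow)
        below = False
        for _ in range(months_ahead):
            # Resample an observed innovation instead of assuming normality.
            q = mean + lag1 * (q - mean) + rng.choice(resid)
            if np.exp(q) < threshold:
                below = True
        hits += below
    return hits / n_traces

# Hypothetical monthly flows (same units as the threshold).
history = np.array([120, 95, 80, 70, 110, 140, 90, 75, 60, 85, 130, 100], float)
print("P(flow < 50 within 6 months) ~", bootstrap_position_analysis(history, 72.0))
```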

  10. An efficient approach for inverse kinematics and redundancy resolution scheme of hyper-redundant manipulators

    NASA Astrophysics Data System (ADS)

    Chembuly, V. V. M. J. Satish; Voruganti, Hari Kumar

    2018-04-01

    Hyper-redundant manipulators have a larger number of degrees of freedom (DOF) than required to perform a given task. The additional DOF provide the flexibility to work in highly cluttered environments and in constrained workspaces. Inverse kinematics (IK) of hyper-redundant manipulators is complicated due to the large number of DOF, and these manipulators have multiple IK solutions. The redundancy gives a choice of selecting the best solution out of multiple solutions based on certain criteria such as obstacle avoidance, singularity avoidance, joint limit avoidance, and joint torque minimization. This paper focuses on the IK solution and redundancy resolution of hyper-redundant manipulators using a classical optimization approach. Joint positions are computed by optimizing various criteria for serial hyper-redundant manipulators while traversing different paths in the workspace. Several cases are addressed using this scheme to obtain the inverse kinematic solution while optimizing criteria such as obstacle avoidance and joint limit avoidance.
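
    Redundancy resolution via classical optimization, roughly in the spirit described above, can be sketched for a planar arm as follows. The arm model, cost function, and solver choice are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def forward_kinematics(thetas, link_lengths):
    """End-effector (x, y) of a planar serial arm with given joint angles."""
    angles = np.cumsum(thetas)
    return np.array([np.sum(link_lengths * np.cos(angles)),
                     np.sum(link_lengths * np.sin(angles))])

def solve_ik(target, link_lengths, theta0, joint_limit=np.pi / 2):
    """Pick one IK solution for a redundant arm by minimizing a secondary
    criterion (here: stay away from joint limits) subject to reaching the target."""
    def cost(thetas):
        # Penalize configurations close to the joint limits.
        return np.sum((thetas / joint_limit) ** 2)

    constraints = {
        "type": "eq",
        "fun": lambda thetas: forward_kinematics(thetas, link_lengths) - target,
    }
    bounds = [(-joint_limit, joint_limit)] * len(theta0)
    result = minimize(cost, theta0, method="SLSQP",
                      bounds=bounds, constraints=constraints)
    return result.x, result.success

# 8-DOF planar arm (hyper-redundant for a 2-D positioning task).
links = np.ones(8) * 0.5
thetas, ok = solve_ik(np.array([2.0, 1.5]), links, theta0=np.full(8, 0.1))
print("converged:", ok, "joint angles:", np.round(thetas, 3))
```

    Swapping the cost function (for example, a distance-to-obstacle penalty instead of the joint-limit term) changes which of the infinitely many IK solutions the optimizer returns, which is the essence of redundancy resolution by optimization.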

  11. Negative exchange interactions in coupled few-electron quantum dots

    NASA Astrophysics Data System (ADS)

    Deng, Kuangyin; Calderon-Vargas, F. A.; Mayhall, Nicholas J.; Barnes, Edwin

    2018-06-01

    It has been experimentally shown that negative exchange interactions can arise in a linear three-dot system when a two-electron double quantum dot is exchange coupled to a larger quantum dot containing on the order of one hundred electrons. The origin of this negative exchange can be traced to the larger quantum dot exhibiting a spin tripletlike rather than singletlike ground state. Here we show using a microscopic model based on the configuration interaction (CI) method that both tripletlike and singletlike ground states are realized depending on the number of electrons. In the case of only four electrons, a full CI calculation reveals that tripletlike ground states occur for sufficiently large dots. These results hold for symmetric and asymmetric quantum dots in both Si and GaAs, showing that negative exchange interactions are robust in few-electron double quantum dots and do not require large numbers of electrons.

  12. An improved method for bivariate meta-analysis when within-study correlations are unknown.

    PubMed

    Hong, Chuan; D Riley, Richard; Chen, Yong

    2018-03-01

    Multivariate meta-analysis, which jointly analyzes multiple and possibly correlated outcomes in a single analysis, is becoming increasingly popular in recent years. An attractive feature of the multivariate meta-analysis is its ability to account for the dependence between multiple estimates from the same study. However, standard inference procedures for multivariate meta-analysis require the knowledge of within-study correlations, which are usually unavailable. This limits standard inference approaches in practice. Riley et al proposed a working model and an overall synthesis correlation parameter to account for the marginal correlation between outcomes, where the only data needed are those required for a separate univariate random-effects meta-analysis. As within-study correlations are not required, the Riley method is applicable to a wide variety of evidence synthesis situations. However, the standard variance estimator of the Riley method is not entirely correct under many important settings. As a consequence, the coverage of a function of pooled estimates may not reach the nominal level even when the number of studies in the multivariate meta-analysis is large. In this paper, we improve the Riley method by proposing a robust variance estimator, which is asymptotically correct even when the model is misspecified (ie, when the likelihood function is incorrect). Simulation studies of a bivariate meta-analysis, in a variety of settings, show a function of pooled estimates has improved performance when using the proposed robust variance estimator. In terms of individual pooled estimates themselves, the standard variance estimator and robust variance estimator give similar results to the original method, with appropriate coverage. The proposed robust variance estimator performs well when the number of studies is relatively large. Therefore, we recommend the use of the robust method for meta-analyses with a relatively large number of studies (eg, m≥50). When the sample size is relatively small, we recommend the use of the robust method under the working independence assumption. We illustrate the proposed method through 2 meta-analyses. Copyright © 2017 John Wiley & Sons, Ltd.

  13. Fast algorithm for radio propagation modeling in realistic 3-D urban environment

    NASA Astrophysics Data System (ADS)

    Rauch, A.; Lianghai, J.; Klein, A.; Schotten, H. D.

    2015-11-01

    Next-generation wireless communication systems will consist of a large number of mobile or static terminals and should be able to fulfill multiple requirements depending on the current situation. Low latency and high packet success rates should be mentioned in this context and can be summarized as ultra-reliable communications (URC). Especially in domains such as mobile gaming and mobile video services, but also in security-relevant scenarios such as traffic safety, traffic control systems, and emergency management, URC will increasingly be required to guarantee working communication between the terminals at all times.

  14. Pouching a draining duodenal cutaneous fistula: a case study.

    PubMed

    Zwanziger, P J

    1999-01-01

    Blockage of the mesenteric artery typically causes necrosis to the colon, requiring extensive surgical resection. In severe cases, the necrosis requires removal of the entire colon, creating numerous problems for the WOC nurse when pouching the opening created for effluent. This article describes the management of a draining duodenal fistula in a middle-aged woman, who survived surgery for a blocked mesenteric artery that necessitated the removal of the majority of the small and large intestine. Nutrition, skin management, and pouch options are described over a number of months as the fistula evolved and a stoma was created.

  15. Lightning mapper sensor design study

    NASA Technical Reports Server (NTRS)

    Eaton, L. R.; Poon, C. W.; Shelton, J. C.; Laverty, N. P.; Cook, R. D.

    1983-01-01

    World-wide continuous measurement of lightning location, intensity, and time during both day and night is to be provided by the Lightning Mapper (LITMAP) instrument. A technology assessment to determine if the LITMAP requirements can be met using existing sensor and electronic technologies is presented. The baseline concept discussed in this report is a compromise among a number of opposing requirements (e.g., ground resolution versus array size; large field of view versus narrow bandpass filter). The concept provides coverage for more than 80 percent of the lightning events as based on recent above-cloud NASA/U2 lightning measurements.

  16. Determination of ²⁴¹Am in soil using an automated nuclear radiation measurement laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engstrom, D.E.; White, M.G.; Dunaway, P.B.

    The recent completion of REECo's Automated Laboratory and associated software systems has provided a significant increase in capability while reducing manpower requirements. The system is designed to perform gamma spectrum analyses on the large numbers of samples required by the current Nevada Applied Ecology Group (NAEG) and Plutonium Distribution Inventory Program (PDIP) soil sampling programs while maintaining sufficient sensitivities as defined by earlier investigations of the same type. The hardware and systems are generally described in this paper, with emphasis being placed on spectrum reduction and the calibration procedures used for soil samples. (auth)

  17. ANTS/SARA: Future Observation of Saturn's Rings

    NASA Astrophysics Data System (ADS)

    Clark, P. E.; Rilee, M. L.; Curtis, S. A.; Cheung, C. Y.; Mumma, M. J.

    2004-05-01

    The Saturn Autonomous Ring Array (SARA) mission concept applies the Autonomous Nano-Technology Swarm (ANTS) architecture, a paradigm developed for exploration of high surface area and/or multi-body targets. The ANTS architecture involves large numbers of tiny, highly autonomous, yet socially interactive, craft in a small number of specialist classes. SARA will acquire in situ observations in the high-gravity environment of Saturn's rings. The high potential for collision represents an insurmountable challenge for previous mission designs. Each ANTS nanocraft weighs approximately a kilogram, and thus requires gossamer structures for all subsystems. Individual specialists include Workers, the vast majority, that acquire scientific measurements, as well as Messenger/Rulers that provide communication and coordination. The high-density distribution of particles combines with the high-intensity gravity and magnetic field environment to produce dynamic plasmas. Plasma, particle, wave, and field detectors will take measurements from the edge of the ring plane to observe the results of particle interactions. Imagers and spectrometers would measure variations in composition and dust/gas ratio among particles using a strategy of serial rendezvous with individual particles. The numbers and distances of these particles, as well as the anticipated high attrition rate, require hundreds of spacecraft to characterize thousands of particles and ring features over the course of the mission. The bimodal propulsion system would include a large solar sail carrier for transporting the swarm the long distance in low gravity between the deployment site and the target, and a nuclear system for each craft for maneuvering in the high-gravity regime of Saturn's rings.

  18. Beyond the Kepler/K2 bright limit: variability in the seven brightest members of the Pleiades

    NASA Astrophysics Data System (ADS)

    White, T. R.; Pope, B. J. S.; Antoci, V.; Pápics, P. I.; Aerts, C.; Gies, D. R.; Gordon, K.; Huber, D.; Schaefer, G. H.; Aigrain, S.; Albrecht, S.; Barclay, T.; Barentsen, G.; Beck, P. G.; Bedding, T. R.; Fredslund Andersen, M.; Grundahl, F.; Howell, S. B.; Ireland, M. J.; Murphy, S. J.; Nielsen, M. B.; Silva Aguirre, V.; Tuthill, P. G.

    2017-11-01

    The most powerful tests of stellar models come from the brightest stars in the sky, for which complementary techniques, such as astrometry, asteroseismology, spectroscopy and interferometry, can be combined. The K2 mission is providing a unique opportunity to obtain high-precision photometric time series for bright stars along the ecliptic. However, bright targets require a large number of pixels to capture the entirety of the stellar flux, and CCD saturation, as well as restrictions on data storage and bandwidth, limit the number and brightness of stars that can be observed. To overcome this, we have developed a new photometric technique, which we call halo photometry, to observe very bright stars using a limited number of pixels. Halo photometry is simple, fast and does not require extensive pixel allocation, and will allow us to use K2 and other photometric missions, such as TESS, to observe very bright stars for asteroseismology and to search for transiting exoplanets. We apply this method to the seven brightest stars in the Pleiades open cluster. Each star exhibits variability; six of the stars show what are most likely slowly pulsating B-star pulsations, with amplitudes ranging from 20 to 2000 ppm. For the star Maia, we demonstrate the utility of combining K2 photometry with spectroscopy and interferometry to show that it is not a `Maia variable', and to establish that its variability is caused by rotational modulation of a large chemical spot on a 10 d time-scale.

  19. Emergency contraception: Knowledge and practice among women and the spouses seeking termination of pregnancy.

    PubMed

    Kathpalia, S K

    2016-04-01

    India was one of the first countries to launch a formal family planning program. Initially, the main thrust of the program was on sterilization, but it has since evolved and the stress is now on bringing about awareness of contraception and enabling informed choices. Emergency contraception has been included in its armamentarium. This study was conducted to assess awareness of emergency contraception among cases reporting for induced abortion. A total of 784 willing cases were enrolled in the study; there were no exclusion criteria except unwillingness. A parallel group consisting of their spouses was also included. Information sought about Emergency Contraception (EC) included its knowledge, details of administration, and availability. Of the 784 cases, a large number, 742 (94.6%), underwent first trimester abortion and only 42 (5.3%) underwent second trimester abortion. 286 (36.4%) patients had not used any contraceptive. A large number (35.3%) had used natural methods, like lactation, abstinence, or coitus interruptus, and 25.7% had used barrier contraception inconsistently. A very small percentage in both groups knew about EC; more men than women knew about EC. Awareness about emergency contraception is low, as reported in many other studies, even though it has been available for many years. Awareness about contraceptives needs to be improved, and emergency contraception should be advocated as a backup method. More efforts are required to generate awareness about regular use of effective contraception and of emergency contraception when required.

  20. Need for surgical treatment of epilepsy and excision of tumors and post-traumatic epileptogenic lesions in Kinshasa, RDC.

    PubMed

    Ntsambi-Eba, G; Beltchika Kalubye, A; Kalala Okito, J P

    2017-11-01

    Surgery is a treatment to consider in epilepsy when the condition is refractory or epileptic events are related to a clearly identified brain abnormality. The tropical climate of the DRC explains the high risk of epilepsy and the potentially large number of refractory cases. The number of patients with epilepsy in Kinshasa is estimated to be at least 120 000, and almost one third may be refractory. Hence, the need to integrate the use of surgery in the treatment of this disease. Most neurosurgical techniques used for treating epilepsy are practiced with a neurosurgical microscope and neuronavigation. In most developing countries, neither the material conditions for optimum realization of these surgical techniques nor the equipment for epilepsy investigation are close to fully available. Nonetheless, the selection of a large number of patients for surgery often does not require the use of all these explorations. The current availability in Kinshasa of the equipment for the basic investigation of epilepsy, such as EEG and MRI instruments, and the experience of the local neurological/neurosurgical team together make it possible to diagnose this pathology and treat it surgically when necessary. The creation of a multidisciplinary team for epilepsy will enable the selection of candidates who can most effectively benefit from surgical treatment. This surgery should focus initially on well circumscribed lesions that do not require sophisticated methods of investigation and can be removed relatively easily, with a high probability of seizure suppression.

  1. Predicting top-of-atmosphere radiance for arbitrary viewing geometries from the visible to thermal infrared: generalization to arbitrary average scene temperatures

    NASA Astrophysics Data System (ADS)

    Florio, Christopher J.; Cota, Steve A.; Gaffney, Stephanie K.

    2010-08-01

    In a companion paper presented at this conference we described how The Aerospace Corporation's Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) may be used in conjunction with a limited number of runs of AFRL's MODTRAN4 radiative transfer code to quickly predict the top-of-atmosphere (TOA) radiance received in the visible through midwave IR (MWIR) by an earth-viewing sensor, for any arbitrary combination of solar and sensor elevation angles. The method is particularly useful for large-scale scene simulations where each pixel could have a unique value of reflectance/emissivity and temperature, making the run time required for direct prediction via MODTRAN4 prohibitive. In order to be self-consistent, the method requires an atmospheric model (defined, at a minimum, as a set of vertical temperature, pressure, and water vapor profiles) that is consistent with the average scene temperature. MODTRAN4 provides only six model atmospheres, ranging from sub-arctic winter to tropical conditions - too few to cover the full range of average scene temperatures of interest with sufficient temperature resolution. Model atmospheres consistent with intermediate temperature values can be difficult to come by, and in any event they would be too cumbersome to use in trade studies involving a large number of average scene temperatures. In this paper we describe and assess a method for predicting TOA radiance for any arbitrary average scene temperature, starting from only a limited number of model atmospheres.
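
    As a loose illustration of the kind of lookup such a parameterization enables (this is not PICASSO's actual method, and the anchor temperatures and radiances below are invented), TOA radiance from a handful of radiative-transfer runs can be tabulated and interpolated to an arbitrary average scene temperature.

```python
# Hedged sketch: interpolate band-integrated TOA radiance over average scene temperature
# from a small set of precomputed runs (values are placeholders, not MODTRAN4 output).
import numpy as np

anchor_temps = np.array([255.0, 270.0, 285.0, 300.0, 315.0])   # K
anchor_radiance = np.array([1.8, 2.6, 3.7, 5.1, 6.9])          # W m^-2 sr^-1 (assumed)

def toa_radiance(scene_temp_k):
    """Linear interpolation between the anchor runs (no extrapolation)."""
    return np.interp(scene_temp_k, anchor_temps, anchor_radiance)

print(toa_radiance(292.5))
```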

  2. An improved filter elution and cell culture assay procedure for evaluating public groundwater systems for culturable enteroviruses.

    PubMed

    Dahling, Daniel R

    2002-01-01

    Large-scale virus studies of groundwater systems require practical and sensitive procedures for both sample processing and viral assay. Filter adsorption-elution procedures have traditionally been used to process large-volume water samples for viruses. In this study, five filter elution procedures using cartridge filters were evaluated for their effectiveness in processing samples. Of the five procedures tested, the third method, which incorporated two separate beef extract elutions (one being an overnight filter immersion in beef extract), recovered 95% of seeded poliovirus compared with recoveries of 36 to 70% for the other methods. For viral enumeration, an expanded roller bottle quantal assay was evaluated using seeded poliovirus. This cytopathic-based method was considerably more sensitive than the standard plaque assay method. The roller bottle system was more economical than the plaque assay for the evaluation of comparable samples. Using roller bottles required less time and manipulation than the plaque procedure and greatly facilitated the examination of large numbers of samples. The combination of the improved filter elution procedure and the roller bottle assay for viral analysis makes large-scale virus studies of groundwater systems practical. This procedure was subsequently field tested during a groundwater study in which large-volume samples (exceeding 800 L) were processed through the filters.

  3. Improved argument-FFT frequency offset estimation for QPSK coherent optical Systems

    NASA Astrophysics Data System (ADS)

    Han, Jilong; Li, Wei; Yuan, Zhilin; Li, Haitao; Huang, Liyan; Hu, Qianggao

    2016-02-01

    A frequency offset estimation (FOE) algorithm based on the fast Fourier transform (FFT) of the signal's argument is investigated; it does not require removing the modulated data phase. In this paper, we analyze a flaw of the argument-FFT algorithm and propose a combined FOE algorithm in which the absolute value of the frequency offset (FO) is accurately calculated by the argument-FFT algorithm using a relatively large number of samples, and the sign of the FO is determined by an FFT-based interpolated discrete Fourier transform (DFT) algorithm using a relatively small number of samples. Compared with previous algorithms based on the argument-FFT, the proposed one has low complexity and can still work effectively with a relatively small number of samples.
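
    For context, a short sketch of FFT-based frequency offset estimation for QPSK is given below. It uses the common fourth-power method as a stand-in; it is not the argument-FFT algorithm analyzed in the paper, and the sample rate, offset, and FFT size are arbitrary choices.

```python
# Sketch: fourth-power FFT frequency offset estimation for QPSK (illustration only).
import numpy as np

def estimate_fo_qpsk(samples, sample_rate, nfft=4096):
    """Raising QPSK to the 4th power strips the data phase; the spectral peak of
    s^4 then sits at 4x the frequency offset."""
    s4 = samples ** 4
    spectrum = np.fft.fftshift(np.abs(np.fft.fft(s4, nfft)))
    freqs = np.fft.fftshift(np.fft.fftfreq(nfft, d=1.0 / sample_rate))
    return freqs[np.argmax(spectrum)] / 4.0

# Synthetic test: QPSK symbols with a 3 kHz offset at 1 MS/s.
rng = np.random.default_rng(0)
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 8192)))
n = np.arange(symbols.size)
rx = symbols * np.exp(2j * np.pi * 3e3 * n / 1e6)
print(estimate_fo_qpsk(rx, 1e6))   # close to 3000.0 (within one FFT bin / 4)
```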

  4. Space Fed Subarray Synthesis Using Displaced Feed Location

    NASA Astrophysics Data System (ADS)

    Mailloux, Robert J.

    2002-01-01

    Wideband space-fed subarray systems are often proposed for large airborne or spaceborne scanning array applications. These systems allow the introduction of time-delay devices at the subarray input terminals while using phase shifters in the array face. This can sometimes reduce the number of time-delayed controls by an order of magnitude or more. The implementation of this technology has been slowed because the feed network, usually a Rotman Lens or Butler Matrix, is bulky, heavy, and often has significant RF loss. In addition, the large lens aperture is necessarily filled with phase shifters, and so it introduces further loss, weight, and perhaps unacceptable phase shifter control power. These systems are currently viewed with increased interest because the combination of low-loss, low-power MEMS phase shifters in the main aperture and solid-state T/R modules in the feed might lead to large scanning arrays with much higher efficiency than previously realizable. Unfortunately, the conventional system design imposes an extremely large dynamic range requirement when used in the transmit mode and requires very high output power from the T/R modules. This paper presents one possible solution to this problem using a modified feed geometry.

  5. Time simulation of flutter with large stiffness changes

    NASA Technical Reports Server (NTRS)

    Karpel, M.; Wieseman, C. D.

    1992-01-01

    Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.

  6. Time simulation of flutter with large stiffness changes

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay; Wieseman, Carol D.

    1992-01-01

    Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.
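
    A toy sketch of the simulation pattern described above, assuming a single-mode stand-in rather than the paper's aeroelastic model: the state-space matrix is switched when the structural change (here, a decoupling mechanism firing at a fixed time) occurs, and the transient overshoot can be read off the response.

```python
# Hedged sketch: time-marching a linear state-space model and switching the
# stiffness/damping terms mid-run to mimic a structural change.
import numpy as np
from scipy.integrate import solve_ivp

def make_A(omega, zeta):
    """State-space matrix of a single mode: x = [displacement, velocity]."""
    return np.array([[0.0, 1.0],
                     [-omega**2, -2.0 * zeta * omega]])

A_flutter = make_A(omega=10.0, zeta=-0.02)   # negative damping: growing oscillation
A_stable = make_A(omega=6.0, zeta=0.05)      # after the structural change

def rhs(t, x):
    # Switch the system matrix when the "decoupling mechanism" fires at t = 2 s.
    A = A_flutter if t < 2.0 else A_stable
    return A @ x

sol = solve_ivp(rhs, (0.0, 6.0), [1e-3, 0.0], max_step=1e-3)
print(np.max(np.abs(sol.y[0])))   # peak displacement, including the transient overshoot
```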

  7. A global probabilistic tsunami hazard assessment from earthquake sources

    USGS Publications Warehouse

    Davies, Gareth; Griffin, Jonathan; Lovholt, Finn; Glimsdal, Sylfest; Harbitz, Carl; Thio, Hong Kie; Lorito, Stefano; Basili, Roberto; Selva, Jacopo; Geist, Eric L.; Baptista, Maria Ana

    2017-01-01

    Large tsunamis occur infrequently but have the capacity to cause enormous numbers of casualties, damage to the built environment and critical infrastructure, and economic losses. A sound understanding of tsunami hazard is required to underpin management of these risks, and while tsunami hazard assessments are typically conducted at regional or local scales, globally consistent assessments are required to support international disaster risk reduction efforts, and can serve as a reference for local and regional studies. This study presents a global-scale probabilistic tsunami hazard assessment (PTHA), extending previous global-scale assessments based largely on scenario analysis. Only earthquake sources are considered, as they represent about 80% of the recorded damaging tsunami events. Globally extensive estimates of tsunami run-up height are derived at various exceedance rates, and the associated uncertainties are quantified. Epistemic uncertainties in the exceedance rates of large earthquakes often lead to large uncertainties in tsunami run-up. Deviations between modelled tsunami run-up and event observations are quantified, and found to be larger than suggested in previous studies. Accounting for these deviations in PTHA is important, as it leads to a pronounced increase in predicted tsunami run-up for a given exceedance rate.

  8. Next-generation prognostic assessment for diffuse large B-cell lymphoma

    PubMed Central

    Staton, Ashley D; Koff, Jean L; Chen, Qiushi; Ayer, Turgay; Flowers, Christopher R

    2015-01-01

    Current standard of care therapy for diffuse large B-cell lymphoma (DLBCL) cures a majority of patients with additional benefit in salvage therapy and autologous stem cell transplant for patients who relapse. The next generation of prognostic models for DLBCL aims to more accurately stratify patients for novel therapies and risk-adapted treatment strategies. This review discusses the significance of host genetic and tumor genomic alterations seen in DLBCL, clinical and epidemiologic factors, and how each can be integrated into risk stratification algorithms. In the future, treatment prediction and prognostic model development and subsequent validation will require data from a large number of DLBCL patients to establish sufficient statistical power to correctly predict outcome. Novel modeling approaches can augment these efforts. PMID:26289217

  9. Next-generation prognostic assessment for diffuse large B-cell lymphoma.

    PubMed

    Staton, Ashley D; Koff, Jean L; Chen, Qiushi; Ayer, Turgay; Flowers, Christopher R

    2015-01-01

    Current standard of care therapy for diffuse large B-cell lymphoma (DLBCL) cures a majority of patients with additional benefit in salvage therapy and autologous stem cell transplant for patients who relapse. The next generation of prognostic models for DLBCL aims to more accurately stratify patients for novel therapies and risk-adapted treatment strategies. This review discusses the significance of host genetic and tumor genomic alterations seen in DLBCL, clinical and epidemiologic factors, and how each can be integrated into risk stratification algorithms. In the future, treatment prediction and prognostic model development and subsequent validation will require data from a large number of DLBCL patients to establish sufficient statistical power to correctly predict outcome. Novel modeling approaches can augment these efforts.

  10. Data-driven indexing mechanism for the recognition of polyhedral objects

    NASA Astrophysics Data System (ADS)

    McLean, Stewart; Horan, Peter; Caelli, Terry M.

    1992-02-01

    This paper is concerned with the problem of searching large model databases. To date, most object recognition systems have concentrated on the problem of matching using simple searching algorithms. This is quite acceptable when the number of object models is small. However, in the future, general purpose computer vision systems will be required to recognize hundreds or perhaps thousands of objects and, in such circumstances, efficient searching algorithms will be needed. The problem of searching a large model database is one which must be addressed if future computer vision systems are to be at all effective. In this paper we present a method we call data-driven feature-indexed hypothesis generation as one solution to the problem of searching large model databases.
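
    A hypothetical sketch of the feature-indexing idea (the quantization step, feature values, and class names below are illustrative assumptions, not the paper's implementation): quantized local features index directly into the model database, and models sharing features with the scene are retrieved and ranked without scanning every model.

```python
# Hedged sketch: a data-driven, feature-indexed model database for hypothesis generation.
from collections import defaultdict

def quantize(feature, step=5.0):
    """Coarsely bucket a numeric feature (e.g., a dihedral angle in degrees)."""
    return round(feature / step)

class ModelIndex:
    def __init__(self):
        self.table = defaultdict(set)

    def add_model(self, model_name, features):
        for f in features:
            self.table[quantize(f)].add(model_name)

    def hypotheses(self, scene_features):
        """Vote for models that share quantized features with the scene."""
        votes = defaultdict(int)
        for f in scene_features:
            for model in self.table.get(quantize(f), ()):
                votes[model] += 1
        return sorted(votes, key=votes.get, reverse=True)

index = ModelIndex()
index.add_model("cube", [90.0, 90.0, 90.0])
index.add_model("wedge", [90.0, 45.0, 45.0])
print(index.hypotheses([44.0, 91.0]))   # 'wedge' ranked first
```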

  11. Quantitative semi-automated analysis of morphogenesis with single-cell resolution in complex embryos.

    PubMed

    Giurumescu, Claudiu A; Kang, Sukryool; Planchon, Thomas A; Betzig, Eric; Bloomekatz, Joshua; Yelon, Deborah; Cosman, Pamela; Chisholm, Andrew D

    2012-11-01

    A quantitative understanding of tissue morphogenesis requires description of the movements of individual cells in space and over time. In transparent embryos, such as C. elegans, fluorescently labeled nuclei can be imaged in three-dimensional time-lapse (4D) movies and automatically tracked through early cleavage divisions up to ~350 nuclei. A similar analysis of later stages of C. elegans development has been challenging owing to the increased error rates of automated tracking of large numbers of densely packed nuclei. We present Nucleitracker4D, a freely available software solution for tracking nuclei in complex embryos that integrates automated tracking of nuclei in local searches with manual curation. Using these methods, we have been able to track >99% of all nuclei generated in the C. elegans embryo. Our analysis reveals that ventral enclosure of the epidermis is accompanied by complex coordinated migration of the neuronal substrate. We can efficiently track large numbers of migrating nuclei in 4D movies of zebrafish cardiac morphogenesis, suggesting that this approach is generally useful in situations in which the number, packing or dynamics of nuclei present challenges for automated tracking.
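
    As a rough illustration of the local-search linking step (not the Nucleitracker4D algorithm itself; the distance threshold and coordinates below are made up), nuclei in consecutive 3-D frames can be linked by greedy nearest-neighbor matching before manual curation corrects the remaining errors.

```python
# Hedged sketch: greedy nearest-neighbor linking of nuclei between consecutive frames.
import numpy as np
from scipy.spatial import cKDTree

def link_frames(prev_xyz, next_xyz, max_dist=3.0):
    """Return (prev_index, next_index) links within max_dist (e.g., microns)."""
    tree = cKDTree(next_xyz)
    links, taken = [], set()
    for i, p in enumerate(prev_xyz):
        dist, j = tree.query(p)
        if dist <= max_dist and int(j) not in taken:
            links.append((i, int(j)))
            taken.add(int(j))
    return links

prev_xyz = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
next_xyz = np.array([[0.5, 0.2, 0.0], [10.4, -0.1, 0.3], [20.0, 5.0, 1.0]])
print(link_frames(prev_xyz, next_xyz))   # [(0, 0), (1, 1)]
```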

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bromberger, Seth A.; Klymko, Christine F.; Henderson, Keith A.

    Betweenness centrality is a graph statistic used to find vertices that participate in a large number of shortest paths in a graph. This centrality measure is commonly used in path and network interdiction problems, and its complete form requires the calculation of all-pairs shortest paths for each vertex. This leads to a time complexity of O(|V||E|), which is impractical for large graphs. Estimation of betweenness centrality has focused on performing shortest-path calculations from a subset of randomly selected vertices. This reduces the complexity of the centrality estimation to O(|S||E|), |S| < |V|, which can be scaled appropriately based on the computing resources available. An estimation strategy that uses random selection of vertices for seed selection is fast and simple to implement, but may not provide optimal estimation of betweenness centrality when the number of samples is constrained. Our experimentation has identified a number of alternate seed-selection strategies that provide lower error than random selection in common scale-free graphs. These strategies are discussed and experimental results are presented.
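
    A small sketch of seed-sampled estimation using NetworkX's built-in source sampling (uniform random seeds; the alternate seed-selection strategies studied here would replace that sampling step). The graph and sample size are arbitrary choices.

```python
# Hedged sketch: approximate betweenness centrality from a sampled set of source vertices.
import networkx as nx

G = nx.barabasi_albert_graph(n=2000, m=3, seed=1)     # a common scale-free test graph

exact = nx.betweenness_centrality(G)                  # O(|V||E|): all vertices as sources
approx = nx.betweenness_centrality(G, k=100, seed=1)  # O(|S||E|): 100 sampled sources

# Mean absolute estimation error over all vertices.
err = sum(abs(exact[v] - approx[v]) for v in G) / G.number_of_nodes()
print(f"mean abs error with 100 random seeds: {err:.5f}")
```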

  13. Characterization of Sound Radiation by Unresolved Scales of Motion in Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert; Zhou, Ye

    1999-01-01

    Evaluation of the sound sources in a high Reynolds number turbulent flow requires time-accurate resolution of an extremely large number of scales of motion. Direct numerical simulations will therefore remain infeasible for the foreseeable future: although current large eddy simulation methods can resolve the largest scales of motion accurately, they must leave some scales of motion unresolved. A priori studies show that acoustic power can be underestimated significantly if the contribution of these unresolved scales is simply neglected. In this paper, the problem of evaluating the sound radiation properties of the unresolved, subgrid-scale motions is approached in the spirit of the simplest subgrid stress models: the unresolved velocity field is treated as isotropic turbulence with statistical descriptors evaluated from the resolved field. The theory of isotropic turbulence is applied to derive formulas for the total power and the power spectral density of the sound radiated by a filtered velocity field. These quantities are compared with the corresponding quantities for the unfiltered field for a range of filter widths and Reynolds numbers.

  14. A rapid method to increase the number of F₁ plants in pea (Pisum sativum) breeding programs.

    PubMed

    Espósito, M A; Almirón, P; Gatti, I; Cravero, V P; Anido, F S L; Cointry, E L

    2012-08-16

    In breeding programs, a large number of F₂ individuals are required to perform the selection process properly, but often few such plants are available. In order to obtain more F₂ seeds, it is necessary to multiply the F₁ plants. We developed a rapid, efficient and reproducible protocol for in vitro shoot regeneration and rooting of seeds using 6-benzylaminopurine. To optimize shoot regeneration, basic medium contained Murashige and Skoog (MS) salts with or without B5 Gamborg vitamins and different concentrations of 6-benzylaminopurine (25, 50 and 75 μM) using five genotypes. We found that modified MS (B5 vitamins + 25 μM 6-benzylaminopurine) is suitable for in vitro shoot regeneration of pea. Thirty-eight hybrid combinations were transferred onto selected medium to produce shoots that were used for root induction on MS medium supplemented with α-naphthalene-acetic acid. Elongated shoots were developed from all hybrid genotypes. This procedure can be used in pea breeding programs and will allow working with a large number of plants even when the F₁ plants produce few seeds.

  15. Stochastic Template Bank for Gravitational Wave Searches for Precessing Neutron Star-Black Hole Coalescence Events

    NASA Technical Reports Server (NTRS)

    Indik, Nathaniel; Haris, K.; Dal Canton, Tito; Fehrmann, Henning; Krishnan, Badri; Lundgren, Andrew; Nielsen, Alex B.; Pai, Archana

    2017-01-01

    Gravitational wave searches to date have largely focused on non-precessing systems. Including precession effects greatly increases the number of templates to be searched over. This leads to a corresponding increase in the computational cost and can increase the false alarm rate of a realistic search. On the other hand, there might be astrophysical systems that are entirely missed by non-precessing searches. In this paper we consider the problem of constructing a template bank using stochastic methods for neutron star-black hole binaries allowing for precession, but with the restrictions that the total angular momentum of the binary is pointing toward the detector and that the neutron star spin is negligible relative to that of the black hole. We quantify the number of templates required for the search, and we explicitly construct the template bank. We show that despite the large number of templates, stochastic methods can be adapted to solve the problem. We quantify the parameter space region over which the non-precessing search might miss signals.
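
    A toy sketch of stochastic placement under stated assumptions (a flat two-dimensional parameter space with a Euclidean distance standing in for the mismatch metric, which is not the precessing NSBH metric used in the paper): random proposals are accepted only if they lie sufficiently far from every template already in the bank.

```python
# Hedged sketch: stochastic template bank construction by rejection of close proposals.
import numpy as np

def stochastic_bank(bounds, min_dist, n_proposals=20000, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    bank = []
    for _ in range(n_proposals):
        p = lo + (hi - lo) * rng.random(len(lo))
        # Accept only if farther than min_dist from every accepted template.
        if all(np.linalg.norm(p - t) >= min_dist for t in bank):
            bank.append(p)
    return np.array(bank)

# e.g. chirp mass in [1, 10] (arbitrary units) and effective spin in [-1, 1].
bank = stochastic_bank(bounds=[(1.0, 10.0), (-1.0, 1.0)], min_dist=0.3)
print(len(bank), "templates accepted")
```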

  16. Targeting SMN to Cajal bodies and nuclear gems during neuritogenesis

    PubMed Central

    Navascues, Joaquin; Berciano, Maria T.; Tucker, Karen E.

    2006-01-01

    Neurite outgrowth is a central feature of neuronal differentiation. PC12 cells are a good model system for studying the peripheral nervous system and the outgrowth of neurites. In addition to the dramatic changes observed in the cytoplasm, neuronal differentiation is also accompanied by striking changes in nuclear morphology. The large and sustained increase in nuclear transcription during neuronal differentiation requires synthesis of a large number of factors involved in pre-mRNA processing. We show that the number and composition of the nuclear subdomains called Cajal bodies and gems changes during the course of N-ras-induced neuritogenesis in the PC12-derived cell line UR61. The Cajal bodies found in undifferentiated cells are largely devoid of the survival of motor neurons (SMN) protein product. As cells shift to a differentiated state, SMN is not only globally upregulated, but is progressively recruited to Cajal bodies. Additional SMN foci (also known as Gemini bodies, gems) can also be detected. Using dual-immunogold labeling electron microscopy and mouse embryonic fibroblasts lacking the coilin protein, we show that gems clearly represent a distinct category of nuclear body. PMID:15164213

  17. Population attribute compression

    DOEpatents

    White, James M.; Faber, Vance; Saltzman, Jeffrey S.

    1995-01-01

    An image population having a large number of attributes is processed to form a display population with a predetermined smaller number of attributes that represent the larger number of attributes. In a particular application, the color values in an image are compressed for storage in a discrete look-up table (LUT). Color space containing the LUT color values is successively subdivided into smaller volumes until a plurality of volumes are formed, each having no more than a preselected maximum number of color values. Image pixel color values can then be rapidly placed in a volume with only a relatively few LUT values from which a nearest neighbor is selected. Image color values are assigned 8 bit pointers to their closest LUT value whereby data processing requires only the 8 bit pointer value to provide 24 bit color values from the LUT.
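
    A rough sketch of the idea under assumptions (median-cut-style splitting on the widest channel; the patent's exact subdivision rule is not reproduced here): color space is successively subdivided, each final volume contributes one LUT entry, and each pixel stores an 8-bit pointer to its nearest LUT color.

```python
# Hedged sketch: color LUT construction by recursive subdivision plus 8-bit pixel pointers.
import numpy as np

def split_boxes(colors, levels=8):
    """Successively subdivide: split each box at the median of its widest channel,
    'levels' times, giving 2**levels final volumes."""
    boxes = [colors]
    for _ in range(levels):
        new_boxes = []
        for box in boxes:
            if len(box) < 2:
                new_boxes.append(box)
                continue
            axis = np.argmax(box.max(axis=0) - box.min(axis=0))
            box = box[box[:, axis].argsort()]
            mid = len(box) // 2
            new_boxes.extend([box[:mid], box[mid:]])
        boxes = new_boxes
    return boxes

def build_lut(colors, levels=8):
    """One LUT entry (the mean color) per final volume: 2**levels entries."""
    return np.array([b.mean(axis=0) for b in split_boxes(colors, levels)])

def index_pixels(pixels, lut):
    """Store an 8-bit pointer to the nearest LUT color for every pixel."""
    d = np.linalg.norm(pixels[:, None, :] - lut[None, :, :], axis=2)
    return d.argmin(axis=1).astype(np.uint8)

rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=(10000, 3)).astype(float)
lut = build_lut(pixels)                  # 256-entry look-up table
indices = index_pixels(pixels, lut)      # one 8-bit pointer per pixel
print(lut.shape, indices.dtype)
```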

  18. Extreme-phenotype genome-wide association study (XP-GWAS): a method for identifying trait-associated variants by sequencing pools of individuals selected from a diversity panel.

    PubMed

    Yang, Jinliang; Jiang, Haiying; Yeh, Cheng-Ting; Yu, Jianming; Jeddeloh, Jeffrey A; Nettleton, Dan; Schnable, Patrick S

    2015-11-01

    Although approaches for performing genome-wide association studies (GWAS) are well developed, conventional GWAS requires high-density genotyping of large numbers of individuals from a diversity panel. Here we report a method for performing GWAS that does not require genotyping of large numbers of individuals. Instead XP-GWAS (extreme-phenotype GWAS) relies on genotyping pools of individuals from a diversity panel that have extreme phenotypes. This analysis measures allele frequencies in the extreme pools, enabling discovery of associations between genetic variants and traits of interest. This method was evaluated in maize (Zea mays) using the well-characterized kernel row number trait, which was selected to enable comparisons between the results of XP-GWAS and conventional GWAS. An exome-sequencing strategy was used to focus sequencing resources on genes and their flanking regions. A total of 0.94 million variants were identified and served as evaluation markers; comparisons among pools showed that 145 of these variants were statistically associated with the kernel row number phenotype. These trait-associated variants were significantly enriched in regions identified by conventional GWAS. XP-GWAS was able to resolve several linked QTL and detect trait-associated variants within a single gene under a QTL peak. XP-GWAS is expected to be particularly valuable for detecting genes or alleles responsible for quantitative variation in species for which extensive genotyping resources are not available, such as wild progenitors of crops, orphan crops, and other poorly characterized species such as those of ecological interest. © 2015 The Authors The Plant Journal published by Society for Experimental Biology and John Wiley & Sons Ltd.
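
    As a conceptual illustration only (not the authors' statistical model; the read counts below are invented), the pool-based association signal for a single variant can be screened by contrasting allele counts between the two extreme-phenotype pools.

```python
# Hedged sketch: test whether a variant's allele frequency differs between extreme pools.
import numpy as np
from scipy.stats import chi2_contingency

def pool_association(alt_high, depth_high, alt_low, depth_low):
    """2x2 chi-square test of alt vs. ref allele counts in the two extreme pools."""
    table = np.array([[alt_high, depth_high - alt_high],
                      [alt_low, depth_low - alt_low]])
    _, p, _, _ = chi2_contingency(table)
    return p

# Example: 60/200 alt reads in the high pool vs. 25/210 in the low pool.
print(pool_association(60, 200, 25, 210))   # small p-value -> candidate association
```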

  19. Project Dawdler: a Proposal in Response to a Low Reynolds Number Station Keeping Mission

    NASA Technical Reports Server (NTRS)

    Bartilotti, Rich; Coakley, Jill; Golla, Warren; Scamman, Glenn; Tran, Hoa T.; Trippel, Chris

    1990-01-01

    In direct response to the Request for Proposals "Flight at very low Reynolds numbers - a station keeping mission", the members of Design Squad E present Project Dawdler: a remotely piloted airplane supported by an independently controlled take-off cart. A brief introduction to Project Dawdler's overall mission and design is given. The Dawdler is a remotely piloted airplane designed to fly in an environmentally controlled closed course at a Reynolds number of 10(exp 5) and at a cruise velocity of 25 ft/s. The two primary goals were to minimize the flight Reynolds number and to maximize the loiter time. With this in mind, the general design of the airplane was guided by the belief that a relatively light aircraft producing a fairly large amount of lift would be the best approach. For this reason the Dawdler utilizes a canard rather than a conventional tail for longitudinal control, primarily because the canard contributes a positive lift component. The Dawdler also has a single vertical tail mounted behind the wing for lateral stability, half of which is used as a rudder for yaw control. Because the power required to take off and climb to altitude is much greater than that required for cruise flight and simple turning maneuvers, it was decided that a take-off cart be used. Based on the current design, there are two unknowns that could threaten the success of Project Dawdler: first, the effect of the fully movable canard, with its large share of the total lift, on the performance of the plane; and second, whether the take-off procedure will go as planned. These are questions that can only be answered by a prototype.
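
    A back-of-envelope check of the design point, assuming a standard sea-level kinematic viscosity for air (a value not stated in the proposal): the chord length implied by Re = Vc/ν at Re = 10^5 and V = 25 ft/s.

```python
# Hedged arithmetic check: chord implied by the stated Reynolds number and cruise speed.
NU_AIR = 1.6e-4      # kinematic viscosity of sea-level air, ft^2/s (assumption)
RE_TARGET = 1.0e5
V_CRUISE = 25.0      # ft/s

chord_ft = RE_TARGET * NU_AIR / V_CRUISE
print(f"implied mean chord: {chord_ft:.2f} ft ({chord_ft * 12:.1f} in)")
```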

  20. Improving breeding efficiency in potato using molecular and quantitative genetics.

    PubMed

    Slater, Anthony T; Cogan, Noel O I; Hayes, Benjamin J; Schultz, Lee; Dale, M Finlay B; Bryan, Glenn J; Forster, John W

    2014-11-01

    Potatoes are highly heterozygous and the conventional breeding of superior germplasm is challenging, but use of a combination of MAS and EBVs can accelerate genetic gain. Cultivated potatoes are highly heterozygous due to their outbreeding nature, and suffer acute inbreeding depression. Modern potato cultivars also exhibit tetrasomic inheritance. Due to this genetic heterogeneity, the large number of target traits and the specific requirements of commercial cultivars, potato breeding is challenging. A conventional breeding strategy applies phenotypic recurrent selection over a number of generations, a process which can take over 10 years. Recently, major advances in genetics and molecular biology have provided breeders with molecular tools to accelerate gains for some traits. Marker-assisted selection (MAS) can be effectively used for the identification of major genes and quantitative trait loci that exhibit large effects. There are also a number of complex traits of interest, such as yield, that are influenced by a large number of genes of individual small effect where MAS will be difficult to deploy. Progeny testing and the use of pedigree in the analysis can provide effective identification of the superior genetic factors that underpin these complex traits. Recently, it has been shown that estimated breeding values (EBVs) can be developed for complex potato traits. Using a combination of MAS and EBVs for simple and complex traits can lead to a significant reduction in the length of the breeding cycle for the identification of superior germplasm.
