Sample records for techniques require large

  1. The role of artificial intelligence techniques in scheduling systems

    NASA Technical Reports Server (NTRS)

    Geoffroy, Amy L.; Britt, Daniel L.; Gohring, John R.

    1990-01-01

    Artificial Intelligence (AI) techniques provide good solutions for many of the problems which are characteristic of scheduling applications. However, scheduling is a large, complex heterogeneous problem. Different applications will require different solutions. Any individual application will require the use of a variety of techniques, including both AI and conventional software methods. The operational context of the scheduling system will also play a large role in design considerations. The key is to identify those places where a specific AI technique is in fact the preferable solution, and to integrate that technique into the overall architecture.

  2. Large space systems technology electronics: Data and power distribution

    NASA Technical Reports Server (NTRS)

    Dunbar, W. G.

    1980-01-01

    The development of hardware technology and manufacturing techniques required to meet space platform and antenna system needs in the 1980s is discussed. Preliminary designs for manned and automatically assembled space power system cables, connectors, and grounding and bonding materials and techniques are reviewed. Connector concepts, grounding design requirements, and bonding requirements are discussed. The problem of particulate debris contamination for large structure spacecraft is addressed.

  3. Choosing the Most Effective Pattern Classification Model under Learning-Time Constraint.

    PubMed

    Saito, Priscila T M; Nakamura, Rodrigo Y M; Amorim, Willian P; Papa, João P; de Rezende, Pedro J; Falcão, Alexandre X

    2015-01-01

    Nowadays, large datasets are common and demand faster and more effective pattern analysis techniques. However, methodologies to compare classifiers usually do not take into account the learning-time constraints required by applications. This work presents a methodology to compare classifiers with respect to their ability to learn from classification errors on a large learning set, within a given time limit. Faster techniques may acquire more training samples, but only when they are more effective will they achieve higher performance on unseen testing sets. We demonstrate this result using several techniques, multiple datasets, and typical learning-time limits required by applications.
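
    The comparison protocol the abstract describes can be pictured with a minimal sketch: give every classifier the same wall-clock budget, let faster learners consume more training samples, then score all of them on the same held-out set. The classifiers, dataset, batch size, and one-second budget below are illustrative assumptions, not the authors' experimental setup; scikit-learn is assumed to be available.

        # Minimal sketch: train each classifier only while a shared time budget
        # lasts, then compare accuracy on a held-out test set. Hypothetical
        # choices throughout; not the protocol of Saito et al.
        import time
        from sklearn.datasets import make_classification
        from sklearn.linear_model import SGDClassifier
        from sklearn.naive_bayes import GaussianNB
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        classes = sorted(set(y_train))

        def accuracy_within_budget(clf, budget_s=1.0, batch=500):
            """Feed training batches until the time budget expires."""
            start, i = time.perf_counter(), 0
            while time.perf_counter() - start < budget_s and i < len(X_train):
                clf.partial_fit(X_train[i:i + batch], y_train[i:i + batch], classes=classes)
                i += batch
            return clf.score(X_test, y_test), i   # accuracy and samples consumed

        for clf in (SGDClassifier(random_state=0), GaussianNB()):
            acc, n = accuracy_within_budget(clf)
            print(type(clf).__name__, f"accuracy={acc:.3f} after {n} samples")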

  4. Techniques for increasing the efficiency of Earth gravity calculations for precision orbit determination

    NASA Technical Reports Server (NTRS)

    Smith, R. L.; Lyubomirsky, A. S.

    1981-01-01

    Two techniques were analyzed. The first is a representation using Chebyshev expansions in three-dimensional cells. The second technique employs a temporary file for storing the components of the nonspherical gravity force. Computer storage requirements and relative CPU time requirements are presented. The Chebyshev gravity representation can provide a significant reduction in CPU time in precision orbit calculations, but at the cost of a large amount of direct-access storage space, which is required for a global model.
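
    The trade-off the abstract reports, CPU time saved in exchange for stored coefficients, can be illustrated in one dimension with NumPy: fit a Chebyshev expansion to an expensive field once per cell, then evaluate the cheap polynomial during propagation. The function, cell bounds, and expansion degree below are assumptions for illustration; the actual representation is three-dimensional and far larger.

        # Sketch: approximate an expensive force component over one cell with a
        # Chebyshev expansion, trading storage (coefficients) for CPU time.
        import numpy as np
        from numpy.polynomial import chebyshev as C

        def expensive_force(r):              # stand-in for a nonspherical gravity term
            return np.sin(5.0 * r) / r**2

        # Fit once per cell (offline cost, plus storage for the coefficients) ...
        r_nodes = np.linspace(1.0, 2.0, 200)       # cell spans r in [1, 2]
        coeffs = C.chebfit(2.0 * (r_nodes - 1.5), expensive_force(r_nodes), deg=15)

        # ... then evaluate cheaply many times during orbit propagation.
        r = 1.37
        approx = C.chebval(2.0 * (r - 1.5), coeffs)   # map the cell onto [-1, 1]
        print(approx, expensive_force(r))             # agree to high accuracy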

  5. Program Design for Retrospective Searches on Large Data Bases

    ERIC Educational Resources Information Center

    Thiel, L. H.; Heaps, H. S.

    1972-01-01

    Retrospective search of large data bases requires development of special techniques for automatic compression of data and minimization of the number of input-output operations to the computer files. The computer program should require a relatively small amount of internal memory. This paper describes the structure of such a program. (9 references)…

  6. Generating unstructured nuclear reactor core meshes in parallel

    DOE PAGES

    Jain, Rajeev; Tautges, Timothy J.

    2014-10-24

    Recent advances in supercomputers and parallel solver techniques have enabled users to run large simulation problems using millions of processors. Techniques for multiphysics nuclear reactor core simulations are under active development in several countries. Most of these techniques require large unstructured meshes that can be hard to generate on standalone desktop computers because of high memory requirements, limited processing power, and other complexities. We have previously reported on a hierarchical lattice-based approach for generating reactor core meshes. Here, we describe efforts to exploit coarse-grained parallelism during reactor assembly and reactor core mesh generation processes. We highlight several reactor core examples including a very high temperature reactor, a full-core model of the Japanese MONJU reactor, a ¼ pressurized water reactor core, the fast reactor Experimental Breeder Reactor-II core with a XX09 assembly, and an advanced breeder test reactor core. The times required to generate large mesh models, along with speedups obtained from running these problems in parallel, are reported. A graphical user interface to the tools described here has also been developed.
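
    The coarse-grained parallelism described here, with each assembly meshed independently, can be sketched generically with Python's multiprocessing; the function and worker count below are illustrative stand-ins, not the meshing tools the paper describes.

        # Sketch of coarse-grained parallelism for assembly-by-assembly mesh
        # generation: workers mesh assemblies independently and the results
        # are merged into a core model. Generic stand-in, not the paper's tools.
        from multiprocessing import Pool

        def mesh_assembly(assembly_id):
            # stand-in for an expensive, independent meshing job
            return {"assembly": assembly_id, "cells": 10000 + assembly_id}

        if __name__ == "__main__":
            with Pool(processes=4) as pool:
                assembly_meshes = pool.map(mesh_assembly, range(8))
            core_mesh = {m["assembly"]: m["cells"] for m in assembly_meshes}
            print(len(core_mesh), "assemblies meshed")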

  7. Large Animal Models of an In Vivo Bioreactor for Engineering Vascularized Bone.

    PubMed

    Akar, Banu; Tatara, Alexander M; Sutradhar, Alok; Hsiao, Hui-Yi; Miller, Michael; Cheng, Ming-Huei; Mikos, Antonios G; Brey, Eric M

    2018-04-12

    Reconstruction of large skeletal defects is challenging due to the requirement for large volumes of donor tissue and the often complex surgical procedures. Tissue engineering has the potential to serve as a new source of tissue for bone reconstruction, but current techniques are often limited in regard to the size and complexity of tissue that can be formed. Building tissue using an in vivo bioreactor approach may enable the production of appropriate amounts of specialized tissue, while reducing issues of donor site morbidity and infection. Large animals are required to screen and optimize new strategies for growing clinically appropriate volumes of tissues in vivo. In this article, we review both ovine and porcine models of the technique proposed for clinical engineering of bone tissue in vivo. Recent findings with these systems are discussed, as well as a description of the next steps required for using these models to develop clinically applicable tissue engineering applications.

  8. Assessing estimation techniques for missing plot observations in the U.S. forest inventory

    Treesearch

    Grant M. Domke; Christopher W. Woodall; Ronald E. McRoberts; James E. Smith; Mark A. Hatfield

    2012-01-01

    The U.S. Forest Service, Forest Inventory and Analysis Program made a transition from state-by-state periodic forest inventories--with reporting standards largely tailored to regional requirements--to a nationally consistent, annual inventory tailored to large-scale strategic requirements. Lack of measurements on all forest land during the periodic inventory, along...

  9. Hydraulic hoisting and backfilling

    NASA Astrophysics Data System (ADS)

    Sauermann, H. B.

    In a country such as South Africa, with its large deep level mining industry, improvements in mining and hoisting techniques could result in substantial savings. Hoisting techniques, for example, may be improved by the introduction of hydraulic hoisting. The following are some of the advantages of hydraulic hoisting as against conventional skip hoisting: (1) smaller shafts are required because the pipes to hoist the same quantity of ore hydraulically require less space in the shaft than does skip hoisting equipment; (2) the hoisting capacity of a mine can easily be increased without the necessity of sinking new shafts. Large savings in capital costs can thus be made; (3) fully automatic control is possible with hydraulic hoisting and therefore less manpower is required; and (4) health and safety conditions will be improved.

  10. Signature extension: An approach to operational multispectral surveys

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F.; Morgenstern, J. P.

    1973-01-01

    Two data processing techniques were suggested as applicable to the large area survey problem. One approach was to use unsupervised classification (clustering) techniques. Investigation showed that since this method did nothing to reduce the signal variability, its use would be very time consuming and possibly inaccurate as well. The conclusion is that unsupervised classification techniques by themselves are not a solution to the large area survey problem. The other method investigated was the use of signature extension techniques. Such techniques function by normalizing the data to some reference condition. Thus signatures from an isolated area could be used to process large quantities of data. In this manner, ground information requirements and computer training are minimized. Several signature extension techniques were tested. The best of these allowed signatures to be extended between data sets collected four days and 80 miles apart with an average accuracy of better than 90%.
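
    The normalization idea behind signature extension can be pictured with a toy NumPy sketch: statistics of a reference scene are used to rescale a new scene so that signatures trained on the reference still apply. Per-band mean/variance matching is a hypothetical stand-in here; the report's actual transforms are not reproduced.

        # Toy sketch of signature extension: normalize a new data set to a
        # reference condition so reference-trained signatures transfer.
        import numpy as np

        rng = np.random.default_rng(0)
        reference = rng.normal(loc=100.0, scale=10.0, size=(1000, 4))   # 4 bands
        new_scene = 1.3 * reference + 25.0     # same scene, changed conditions

        mu_ref, sd_ref = reference.mean(axis=0), reference.std(axis=0)
        mu_new, sd_new = new_scene.mean(axis=0), new_scene.std(axis=0)

        normalized = (new_scene - mu_new) / sd_new * sd_ref + mu_ref
        print(np.allclose(normalized.mean(axis=0), mu_ref))   # True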

  11. Microwash or macrowash technique to maintain a clear cornea during cataract surgery.

    PubMed

    Amjadi, Shahriar; Roufas, Athena; Figueira, Edwin C; Bhardwaj, Gaurav; Francis, Katherine E; Masselos, Katherine; Francis, Ian C

    2010-09-01

    We describe a technique of irrigating and thereby rapidly and effectively clearing the cornea of relatively large amounts of surface contaminants that reduce surgical visibility and may contribute to endophthalmitis. This technique is referred to as "macrowash." If the technique is required, it is usually at the commencement of cataract surgery, immediately after placement of the surgical drape. The technique not only saves time, but also reduces the volume of irrigating solution required by the "microwash" technique, which is traditionally carried out by the scrub nurse/surgical assistant using a Rycroft cannula attached to a 15 mL container of irrigating solution. Copyright (c) 2010 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  12. Space Construction Automated Fabrication Experiment Definition Study (SCAFEDS), part 3. Volume 3: Requirements

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The performance, design, and verification requirements for the Space Construction Automated Fabrication Experiment (SCAFE) are defined. The SCAFE program defines, develops, and demonstrates the techniques, processes, and equipment required for the automatic fabrication of structural elements in space and for the assembly of such elements into a large, lightweight structure. The program defines a large structural platform to be constructed in orbit using the space shuttle as a launch vehicle and construction base.

  13. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, Roy H.; Beckman-Davies, C. S.; Benzinger, L.; Beshers, G.; Laliberte, D.; Render, H.; Sum, R.; Smith, W.; Terwilliger, R.

    1986-01-01

    Research into software development is required to reduce its production cost and to improve its quality. Modern software systems, such as the embedded software required for NASA's space station initiative, stretch current software engineering techniques. The requirements to build large, reliable, and maintainable software systems increase with time. Much theoretical and practical research is in progress to improve software engineering techniques. One such technique is to build a software system or environment which directly supports the software engineering process. This is the aim of the SAGA project, which comprises the research necessary to design and build a software development environment that automates the software engineering process. Progress under SAGA is described.

  14. A Case of Pediatric Abdominal Wall Reconstruction: Components Separation within the Austere War Environment

    PubMed Central

    Sabino, Jennifer; Kumar, Anand

    2014-01-01

    Summary: Reconstructive surgeons supporting military operations are required to definitively treat severe pediatric abdominal injuries in austere environments. The safety and efficacy of using a components separation technique to treat large ventral hernias in pediatric patients in this setting remains understudied. Components separation technique was required to achieve definitive closure in a 12-month-old pediatric patient in Kandahar, Afghanistan. Her course was complicated by an anastomotic leak after small bowel resection. Her abdomen was successfully reopened, the leak repaired, and the abdomen closed primarily without incident on postinjury day 9. Abdominal trauma with a large ventral hernia requiring components separation is extremely rare. A pediatric patient treated with components separation demonstrated minimal complications, avoidance of abdominal compartment syndrome, and no mortality. PMID:25426363

  15. Electromagnetic Counter-Counter Measure (ECCM) Techniques of the Digital Microwave Radio.

    DTIC Science & Technology

    1982-05-01

    Frequency hopping requires special synthesizers and filter banks. Large bandwidth expansion in a microwave radio relay application can best be achieved with… processing gain; performance as a function of jammer modulation type; pulse jammer performance; emission bandwidth and spectral shaping;… spectral efficiency, implementation complexity, and suitability for ECCM techniques will be considered. A summary of the requirements and characteristics of…

  16. LSI/VLSI design for testability analysis and general approach

    NASA Technical Reports Server (NTRS)

    Lam, A. Y.

    1982-01-01

    The incorporation of testability characteristics into large-scale digital designs is both necessary and pertinent to effective device testing and enhancement of device reliability. There are at least three major DFT techniques, namely, the self-checking, the LSSD, and the partitioning techniques, each of which can be incorporated into a logic design to achieve a specific set of testability and reliability requirements. A detailed analysis of the design theory, implementation, fault coverage, hardware requirements, application limitations, etc., of each of these techniques is also presented.

  17. Memory efficient solution of the primitive equations for numerical weather prediction on the CYBER 205

    NASA Technical Reports Server (NTRS)

    Tuccillo, J. J.

    1984-01-01

    Numerical Weather Prediction (NWP), for both operational and research purposes, requires not only fast computational speed but also large memory. A technique for solving the Primitive Equations for atmospheric motion on the CYBER 205, as implemented in the Mesoscale Atmospheric Simulation System, which is fully vectorized and requires substantially less memory than other techniques such as the leapfrog or Adams-Bashforth schemes, is discussed. The technique presented uses the Euler-backward time marching scheme. Also discussed are several techniques for reducing the computational time of the model by replacing slow intrinsic routines with faster algorithms which use only hardware vector instructions.
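
    The memory argument can be made concrete: leapfrog must hold two previous time levels, while the Euler-backward update y_{n+1} = y_n + Δt f(y_{n+1}) needs only one. A scalar toy sketch, with the implicit equation solved by fixed-point iteration (illustrative only, not the Mesoscale Atmospheric Simulation System):

        # Euler-backward time marching keeps a single time level in memory;
        # leapfrog needs two. The implicit update is solved by fixed-point
        # iteration on a scalar test equation y' = -2y.
        def f(y):
            return -2.0 * y

        y, dt = 1.0, 0.1               # one stored level, versus two for leapfrog
        for _ in range(50):
            y_next = y                 # initial guess for the implicit equation
            for _ in range(20):        # iterate y_next = y + dt * f(y_next)
                y_next = y + dt * f(y_next)
            y = y_next
        print(y)                       # ~ exp(-2 * 5.0), up to scheme error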

  18. Fuel containment, lightning protection and damage tolerance in large composite primary aircraft structures

    NASA Technical Reports Server (NTRS)

    Griffin, Charles F.; James, Arthur M.

    1985-01-01

    The damage-tolerance characteristics of high strain-to-failure graphite fibers and toughened resins were evaluated. Test results show that conventional fuel tank sealing techniques are applicable to composite structures. Techniques were developed to prevent fuel leaks due to low-energy impact damage. For wing panels subjected to swept stroke lightning strikes, a surface protection of graphite/aluminum wire fabric and a fastener treatment proved effective in eliminating internal sparking and reducing structural damage. The technology features developed were incorporated and demonstrated in a test panel designed to meet the strength, stiffness, and damage tolerance requirements of a large commercial transport aircraft. The panel test results exceeded design requirements for all test conditions. Wing surfaces constructed with composites offer large weight savings if design allowable strains for compression can be increased from current levels.

  19. Solving the shrinkage-induced PDMS alignment registration issue in multilayer soft lithography

    NASA Astrophysics Data System (ADS)

    Moraes, Christopher; Sun, Yu; Simmons, Craig A.

    2009-06-01

    Shrinkage of polydimethylsiloxane (PDMS) complicates alignment registration between layers during multilayer soft lithography fabrication. This often hinders the development of large-scale microfabricated arrayed devices. Here we report a rapid method to construct large-area, multilayered devices with stringent alignment requirements. This technique, which exploits a previously unrecognized aspect of sandwich mold fabrication, improves device yield, enables highly accurate alignment over large areas of multilayered devices and does not require strict regulation of fabrication conditions or extensive calibration processes. To demonstrate this technique, a microfabricated Braille display was developed and characterized. High device yield and accurate alignment within 15 µm were achieved over three layers for an array of 108 Braille units spread over a 6.5 cm2 area, demonstrating the fabrication of well-aligned devices with greater ease and efficiency than previously possible.

  20. Requirements and principles for the implementation and construction of large-scale geographic information systems

    NASA Technical Reports Server (NTRS)

    Smith, Terence R.; Menon, Sudhakar; Star, Jeffrey L.; Estes, John E.

    1987-01-01

    This paper provides a brief survey of the history, structure and functions of 'traditional' geographic information systems (GIS), and then suggests a set of requirements that large-scale GIS should satisfy, together with a set of principles for their satisfaction. These principles, which include the systematic application of techniques from several subfields of computer science to the design and implementation of GIS and the integration of techniques from computer vision and image processing into standard GIS technology, are discussed in some detail. In particular, the paper provides a detailed discussion of questions relating to appropriate data models, data structures and computational procedures for the efficient storage, retrieval and analysis of spatially-indexed data.

  1. Deriving Earth Science Data Analytics Tools/Techniques Requirements

    NASA Astrophysics Data System (ADS)

    Kempler, S. J.

    2015-12-01

    Data Analytics applications have made successful strides in the business world, where co-analyzing extremely large sets of independent variables has proven profitable. Today, most data analytics tools and techniques, sometimes applicable to Earth science, have targeted the business industry. In fact, the literature is nearly absent of discussion about Earth science data analytics. Earth science data analytics (ESDA) is the process of examining large amounts of data from a variety of sources to uncover hidden patterns, unknown correlations, and other useful information. ESDA is most often applied to data preparation, data reduction, and data analysis. Co-analysis of the increasing number and volume of Earth science data sources has become more prevalent, ushered in by the plethora of Earth science data generated by US programs, international programs, field experiments, ground stations, and citizen scientists. Through work associated with the Earth Science Information Partners (ESIP) Federation, ESDA types have been defined in terms of data analytics end goals, which are very different from those in business and require different tools and techniques. A sampling of use cases has been collected and analyzed in terms of data analytics end goal types, volume, specialized processing, and other attributes. The goal of collecting these use cases is to better understand and specify requirements for data analytics tools and techniques yet to be implemented. This presentation will describe the attributes and preliminary findings of ESDA use cases, as well as provide early analysis of data analytics tools/techniques requirements that would support specific ESDA type goals. Representative existing data analytics tools/techniques relevant to ESDA will also be addressed.

  2. Surgical technique for reconstruction of the nasal septum: the pericranial flap.

    PubMed

    Paloma, V; Samper, A; Cervera-Paz, F J

    2000-01-01

    We describe a new technique for the surgical reconstruction of large-sized anterior septal perforations based on the pericranial flap. The technique requires a standard open rhinoplasty combined with a pericranial flap harvested after a bicoronal approach and tunnelled to the nasal cavity. We present the case of a man with complete destruction of the nasal septum as a result of chronic cocaine abuse. Surgery resulted in a permanent and complete closure of the perforation. The main advantage of this technique is the use of well-vascularized autogenous tissue and the minimal donor site morbidity. This technique provides a new method to close large nasal perforations. Copyright 2000 John Wiley & Sons, Inc. Head Neck 22: 90-94, 2000.

  3. Chronic Subdural Hematoma Treated by Small or Large Craniotomy with Membranectomy as the Initial Treatment

    PubMed Central

    Kim, Jae-Hong; Kim, Jung-Hee; Kong, Min-Ho; Song, Kwan-Young

    2011-01-01

    Objective There are few studies comparing small and large craniotomies for the initial treatment of chronic subdural hematoma (CSDH) which had non-liquefied hematoma, multilayer intrahematomal loculations, or organization/calcification on computed tomography and magnetic resonance imaging. These procedures were compared to determine which would produce superior postoperative results. Methods Between 2001 and 2009, 317 consecutive patients were surgically treated for CSDH at our institution. Of these, 16 patients underwent a small craniotomy with partial membranectomy and 42 patients underwent a large craniotomy with extended membranectomy as the initial treatment. A retrospective review was performed to compare the postoperative outcomes of these two techniques, focusing on improvement of neurological status, complications, reoperation rate, and days of post-operative hospitalization. Results The mean ages were 69.4±12.1 and 55.6±9.3 years in the small and large craniotomy groups, respectively. The recurrence of hematomas requiring reoperation occurred in 50% and 10% of the small and large craniotomy patients, respectively (p<0.001). There were no significant differences in postoperative neurological status, complications, or days of hospital stay between these two groups. Conclusion Among the cases of CSDH initially requiring craniotomy, the large craniotomy with extended membranectomy technique reduced the reoperation rate, compared to that of the small craniotomy with partial membranectomy technique. PMID:22053228

  4. Space construction system analysis. Part 2: Space construction experiments concepts

    NASA Technical Reports Server (NTRS)

    Boddy, J. A.; Wiley, L. F.; Gimlich, G. W.; Greenberg, H. S.; Hart, R. J.; Lefever, A. E.; Lillenas, A. N.; Totah, R. S.

    1980-01-01

    Technology areas in the orbital assembly of large space structures are addressed. The areas included structures, remotely operated assembly techniques, and control and stabilization. Various large space structure design concepts are reviewed and their construction procedures and requirements are identified.

  5. Real-time monitoring of CO2 storage sites: Application to Illinois Basin-Decatur Project

    USGS Publications Warehouse

    Picard, G.; Berard, T.; Chabora, E.; Marsteller, S.; Greenberg, S.; Finley, R.J.; Rinck, U.; Greenaway, R.; Champagnon, C.; Davard, J.

    2011-01-01

    Optimization of carbon dioxide (CO2) storage operations for efficiency and safety requires use of monitoring techniques and implementation of control protocols. The monitoring techniques consist of permanent sensors and tools deployed for measurement campaigns. Large amounts of data are thus generated. These data must be managed and integrated for interpretation at different time scales. A fast interpretation loop involves combining continuous measurements from permanent sensors as they are collected to enable a rapid response to detected events; a slower loop requires combining large datasets gathered over longer operational periods from all techniques. The purpose of this paper is twofold. First, it presents an analysis of the monitoring objectives to be performed in the slow and fast interpretation loops. Second, it describes the implementation of the fast interpretation loop with a real-time monitoring system at the Illinois Basin-Decatur Project (IBDP) in Illinois, USA. © 2011 Published by Elsevier Ltd.

  6. Successful Instructional Strategies for the Community College Student.

    ERIC Educational Resources Information Center

    Franse, Stephen R.

    1992-01-01

    Describes methods which have proven effective in motivating largely nontraditional and bilingual community college students to succeed. Suggests that teachers require writing, use graphs/illustrations creatively, use primary sources as required readings, provide supplementary readings, use brainstorming and quality circle techniques, and prepare…

  7. Avoiding and tolerating latency in large-scale next-generation shared-memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Probst, David K.

    1993-01-01

    A scalable solution to the memory-latency problem is necessary to prevent the large latencies of synchronization and memory operations inherent in large-scale shared-memory multiprocessors from degrading performance. We distinguish latency avoidance and latency tolerance. Latency is avoided when data is brought to nearby locales for future reference. Latency is tolerated when references are overlapped with other computation. Latency-avoiding locales include: processor registers, data caches used temporally, and nearby memory modules. Tolerating communication latency requires parallelism, allowing the overlap of communication and computation. Latency-tolerating techniques include: vector pipelining, data caches used spatially, prefetching in various forms, and multithreading in various forms. Relaxing the consistency model permits increased use of avoidance and tolerance techniques. Each model is a mapping from the program text to sets of partial orders on program operations; it is a convention about which temporal precedences among program operations are necessary. Information about temporal locality and parallelism constrains the use of avoidance and tolerance techniques. Suitable architectural primitives and compiler technology are required to exploit the increased freedom to reorder and overlap operations in relaxed models.
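
    Prefetching, one of the latency-tolerating techniques listed above, amounts to overlapping the fetch of the next data block with computation on the current one. A generic Python sketch using threads as a stand-in for a multiprocessor's communication/computation overlap (the fetch delay and block contents are illustrative assumptions):

        # Latency tolerance by prefetching: issue the next fetch, then compute
        # on the current block while the fetch is in flight.
        import time
        from concurrent.futures import ThreadPoolExecutor

        def remote_fetch(block_id):
            time.sleep(0.1)                  # stand-in for memory/network latency
            return [block_id] * 1000

        def compute(block):
            return sum(block)

        with ThreadPoolExecutor(max_workers=1) as pool:
            future = pool.submit(remote_fetch, 0)        # prefetch the first block
            total = 0
            for nxt in range(1, 4):
                block = future.result()                  # waits only if not ready
                future = pool.submit(remote_fetch, nxt)  # prefetch the next block...
                total += compute(block)                  # ...while computing on this one
            total += compute(future.result())
        print(total)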

  8. Adaptive optical system for writing large holographic optical elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tyutchev, M.V.; Kalyashov, E.V.; Pavlov, A.P.

    1994-11-01

    This paper formulates the requirements imposed on systems for correcting the phase-difference distribution of recording waves over the field of a large-diameter photographic plate (≤1.5 m) when writing holographic optical elements (HOEs). A technique is proposed for writing large HOEs, based on the use of an adaptive phase-correction optical system of the first type, controlled by the self-diffraction signal from a latent image. The technique is implemented by writing HOEs on photographic plates with an effective diameter of 0.7 m on As₂S₃ layers. 13 refs., 4 figs.

  9. Adaptive parametric model order reduction technique for optimization of vibro-acoustic models: Application to hearing aid design

    NASA Astrophysics Data System (ADS)

    Creixell-Mediante, Ester; Jensen, Jakob S.; Naets, Frank; Brunskog, Jonas; Larsen, Martin

    2018-06-01

    Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system performance and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built at an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time consuming. In this work, we present an adaptive pMOR technique where the construction of the projection basis is embedded in the optimization process and requires fewer full system analyses, while the accuracy of the reduced system is monitored by a cheap error indicator. The performance of the proposed method is evaluated for a 4-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.
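
    The projection step at the heart of pMOR can be sketched in a few NumPy lines: the full system K(p)x = f is projected onto a basis V, and each new parameter value then requires only a tiny reduced solve Vᵀ K(p) V x_r = Vᵀ f. This is a generic Galerkin/snapshot sketch under assumed matrices, not the paper's adaptive basis construction or its error indicator.

        # Generic projection-based model order reduction sketch: build a basis
        # from a few full solves (offline), then solve only small reduced
        # systems for new parameter values (online).
        import numpy as np

        n = 500
        rng = np.random.default_rng(1)

        def K(p):                            # parameter-dependent full operator
            return np.diag(np.arange(1.0, n + 1.0)) + p * np.eye(n)

        f = rng.normal(size=n)

        # Offline: snapshots from full solves at a few sampled parameter values.
        snapshots = np.column_stack([np.linalg.solve(K(p), f) for p in (0.1, 1.0, 5.0)])
        V, _, _ = np.linalg.svd(snapshots, full_matrices=False)   # orthonormal basis

        # Online: a new parameter value needs only a 3 x 3 reduced solve.
        p = 2.3
        x_r = np.linalg.solve(V.T @ K(p) @ V, V.T @ f)
        err = np.linalg.norm(V @ x_r - np.linalg.solve(K(p), f))
        print(err)                           # small reduction error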

  10. Comparison of different estimation techniques for biomass concentration in large scale yeast fermentation.

    PubMed

    Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U

    2011-04-01

    In this study, five previously developed state estimation methods are examined and compared for the estimation of biomass concentrations in a production-scale fed-batch bioprocess. These methods are (i) estimation based on a kinetic model of overflow metabolism; (ii) estimation based on a metabolic black-box model; (iii) observer-based estimation; (iv) estimation based on an artificial neural network; and (v) estimation based on differential evolution. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large-scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, the number of primary measurements required, and adaptation to different working conditions. Among the various techniques, the metabolic black-box method seems to have advantages, although it requires more measurements than the other methods. However, the required extra measurements are based on instruments commonly employed in an industrial environment. This method is used for developing a model-based control of fed-batch yeast fermentations. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
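
    As a flavor of how biomass can be inferred from routine measurements, a textbook-style sketch: integrate a measured uptake rate through an assumed yield coefficient, X(t) = X0 + Y ∫ OUR dt. This generic estimator and its numbers are illustrative assumptions, not one of the five methods compared in the paper.

        # Textbook-style biomass estimator: integrate a measured oxygen uptake
        # rate (OUR) through an assumed yield coefficient. Illustrative only.
        import numpy as np

        dt = 0.1                                    # h
        t = np.arange(0.0, 10.0, dt)
        true_X = 0.5 * np.exp(0.25 * t)             # g/L, simulated "true" biomass
        Y_xo = 1.2                                  # g biomass per g O2 (assumed)
        OUR = np.gradient(true_X, dt) / Y_xo        # simulated measurement...
        OUR += np.random.default_rng(0).normal(0.0, 0.005, OUR.shape)  # ...with noise

        X_hat = 0.5 + Y_xo * np.cumsum(OUR) * dt    # integrate the measured rate
        print(abs(X_hat[-1] - true_X[-1]) / true_X[-1])   # small relative error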

  11. Attenuated phase-shift mask (PSM) blanks for flat panel display

    NASA Astrophysics Data System (ADS)

    Kageyama, Kagehiro; Mochizuki, Satoru; Yamakawa, Hiroyuki; Uchida, Shigeru

    2015-10-01

    Fine-pattern exposure techniques have recently become necessary for flat panel display applications such as smartphones and tablet PCs. Attenuated phase-shift masks (PSM) are used in ArF and KrF photomask lithography for high-end semiconductor patterning. We developed CrOx-based large-size PSM blanks with good uniformity of optical characteristics for FPD applications. We report the basic optical characteristics, uniformity, and stability data of the large-sized CrOx PSM blanks.

  12. Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering

    NASA Astrophysics Data System (ADS)

    Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki

    2018-03-01

    We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
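
    The combination rule behind multiple importance sampling can be shown on a scalar toy integral: samples from each strategy are weighted with the balance heuristic w_i(x) = p_i(x) / Σ_j p_j(x), so neither strategy's weaknesses dominate. The integrand and the two pdfs below are illustrative assumptions, not the paper's SSAO renderer.

        # Multiple importance sampling with the balance heuristic on a toy
        # 1-D integral; combines a uniform and a linear sampling strategy.
        import numpy as np

        rng = np.random.default_rng(0)

        def f(x):                                  # integrand on [0, 1]
            return np.exp(-x) * np.sin(np.pi * x)

        n = 5000
        x1 = rng.uniform(0.0, 1.0, n)              # strategy 1: pdf p1(x) = 1
        x2 = np.sqrt(rng.uniform(0.0, 1.0, n))     # strategy 2: pdf p2(x) = 2x
        p1 = lambda x: np.ones_like(x)
        p2 = lambda x: 2.0 * x

        def mis_term(x, p_own):
            w = p_own(x) / (p1(x) + p2(x))         # balance heuristic weights
            return np.mean(w * f(x) / p_own(x))

        estimate = mis_term(x1, p1) + mis_term(x2, p2)
        print(estimate)                            # ~ 0.395, the true integral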

  13. Applying Standard Independent Verification and Validation (IVV) Techniques Within an Agile Framework: Is There a Compatibility Issue?

    NASA Technical Reports Server (NTRS)

    Dabney, James B.; Arthur, James Douglas

    2017-01-01

    Agile methods have gained wide acceptance over the past several years, to the point that they are now a standard management and execution approach for small-scale software development projects. While conventional Agile methods are not generally applicable to large multi-year and mission-critical systems, Agile hybrids are now being developed (such as SAFe) to exploit the productivity improvements of Agile while retaining the necessary process rigor and coordination needs of these projects. From the perspective of Independent Verification and Validation (IVV), however, the adoption of these hybrid Agile frameworks is becoming somewhat problematic. Hence, we find it prudent to question the compatibility of conventional IVV techniques with (hybrid) Agile practices. This paper documents our investigation of (a) relevant literature, (b) the modification and adoption of Agile frameworks to accommodate the development of large-scale, mission-critical systems, and (c) the compatibility of standard IVV techniques within hybrid Agile development frameworks. Specific to the latter, we found that the IVV methods employed within a hybrid Agile process can be divided into three groups: (1) early-lifecycle IVV techniques that are fully compatible with the hybrid lifecycles, (2) IVV techniques that focus on tracing requirements, test objectives, etc., which are somewhat incompatible but can be tailored with a modest effort, and (3) IVV techniques involving assessments that require artifact completeness, which are simply not compatible with hybrid Agile processes, e.g., those that assume complete requirement specification early in the development lifecycle.

  14. Large mirror surface control by corrective coating

    NASA Astrophysics Data System (ADS)

    Bonnand, Romain; Degallaix, Jerome; Flaminio, Raffaele; Giacobone, Laurent; Lagrange, Bernard; Marion, Fréderique; Michel, Christophe; Mours, Benoit; Mugnier, Pierre; Pacaud, Emmanuel; Pinard, Laurent

    2013-08-01

    The Advanced Virgo gravitational wave detector aims at a sensitivity ten times better than the initial LIGO and Virgo detectors. This implies very stringent requirements on the optical losses in the interferometer arm cavities. In this paper we focus on the mirrors which form the interferometer arm cavities and that require a surface figure error well below one nanometre over a diameter of 150 mm. This 'sub-nanometric flatness' is not achievable by classical polishing on such a large diameter. We therefore present the corrective coating technique which has been developed to reach this requirement. Its principle is to add a non-uniform thin film on top of the substrate in order to flatten its surface. In this paper we introduce the Advanced Virgo requirements and present the basic principle of the corrective coating technique. We then show the results obtained experimentally on an initial Virgo substrate. Finally we provide an evaluation of the round-trip losses in the Fabry-Perot arm cavities once the corrected surface is used.

  15. Industrial metrology as applied to large physics experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veal, D.

    1993-05-01

    A physics experiment is a large, complex 3-D object (typically 1200 m³, 35,000 tonnes) with sub-millimetric alignment requirements. Two generic survey alignment tasks can be identified: first, an iterative positioning of the apparatus subsystems in space and, second, a quantification of as-built parameters. The most convenient measurement technique is industrial triangulation, but the complexity of the measured object and measurement environment constraints frequently require a more sophisticated approach. To enlarge the "survey alignment toolbox," measurement techniques commonly associated with other disciplines such as geodesy, applied geodesy for accelerator alignment, and mechanical engineering are also used. Disparate observables require a heavy reliance on least squares programs for campaign pre-analysis and calculation. This paper will offer an introduction to the alignment of physics experiments and will identify trends for the next generation of SSC experiments.

  16. Performance of pile-up mitigation techniques for jets in pp collisions at √{s}=8 TeV using the ATLAS detector

    NASA Astrophysics Data System (ADS)

    Aad, G.; Abbott, B.; Abdallah, J.; Abdinov, O.; et al. (ATLAS Collaboration)
T.; Poulard, G.; Poveda, J.; Pozdnyakov, V.; Pralavorio, P.; Pranko, A.; Prasad, S.; Prell, S.; Price, D.; Price, L. E.; Primavera, M.; Prince, S.; Proissl, M.; Prokofiev, K.; Prokoshin, F.; Protopapadaki, E.; Protopopescu, S.; Proudfoot, J.; Przybycien, M.; Ptacek, E.; Puddu, D.; Pueschel, E.; Puldon, D.; Purohit, M.; Puzo, P.; Qian, J.; Qin, G.; Qin, Y.; Quadt, A.; Quarrie, D. R.; Quayle, W. B.; Queitsch-Maitland, M.; Quilty, D.; Raddum, S.; Radeka, V.; Radescu, V.; Radhakrishnan, S. K.; Radloff, P.; Rados, P.; Ragusa, F.; Rahal, G.; Rajagopalan, S.; Rammensee, M.; Rangel-Smith, C.; Rauscher, F.; Rave, S.; Ravenscroft, T.; Raymond, M.; Read, A. L.; Readioff, N. P.; Rebuzzi, D. M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reeves, K.; Rehnisch, L.; Reichert, J.; Reisin, H.; Relich, M.; Rembser, C.; Ren, H.; Renaud, A.; Rescigno, M.; Resconi, S.; Rezanova, O. L.; Reznicek, P.; Rezvani, R.; Richter, R.; Richter, S.; Richter-Was, E.; Ricken, O.; Ridel, M.; Rieck, P.; Riegel, C. J.; Rieger, J.; Rijssenbeek, M.; Rimoldi, A.; Rinaldi, L.; Ristić, B.; Ritsch, E.; Riu, I.; Rizatdinova, F.; Rizvi, E.; Robertson, S. H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, J. E. M.; Robson, A.; Roda, C.; Roe, S.; Røhne, O.; Rolli, S.; Romaniouk, A.; Romano, M.; Saez, S. M. Romano; Adam, E. Romero; Rompotis, N.; Ronzani, M.; Roos, L.; Ros, E.; Rosati, S.; Rosbach, K.; Rose, P.; Rosendahl, P. L.; Rosenthal, O.; Rossetti, V.; Rossi, E.; Rossi, L. P.; Rosten, R.; Rotaru, M.; Roth, I.; Rothberg, J.; Rousseau, D.; Royon, C. R.; Rozanov, A.; Rozen, Y.; Ruan, X.; Rubbo, F.; Rubinskiy, I.; Rud, V. I.; Rudolph, C.; Rudolph, M. S.; Rühr, F.; Ruiz-Martinez, A.; Rurikova, Z.; Rusakovich, N. A.; Ruschke, A.; Russell, H. L.; Rutherfoord, J. P.; Ruthmann, N.; Ryabov, Y. F.; Rybar, M.; Rybkin, G.; Ryder, N. C.; Saavedra, A. F.; Sabato, G.; Sacerdoti, S.; Saddique, A.; Sadrozinski, H. F.-W.; Sadykov, R.; Tehrani, F. Safai; Sahinsoy, M.; Saimpert, M.; Saito, T.; Sakamoto, H.; Sakurai, Y.; Salamanna, G.; Salamon, A.; Saleem, M.; Salek, D.; De Bruin, P. H. Sales; Salihagic, D.; Salnikov, A.; Salt, J.; Salvatore, D.; Salvatore, F.; Salvucci, A.; Salzburger, A.; Sammel, D.; Sampsonidis, D.; Sanchez, A.; Sánchez, J.; Martinez, V. Sanchez; Sandaker, H.; Sandbach, R. L.; Sander, H. G.; Sanders, M. P.; Sandhoff, M.; Sandoval, C.; Sandstroem, R.; Sankey, D. P. C.; Sannino, M.; Sansoni, A.; Santoni, C.; Santonico, R.; Santos, H.; Castillo, I. Santoyo; Sapp, K.; Sapronov, A.; Saraiva, J. G.; Sarrazin, B.; Sasaki, O.; Sasaki, Y.; Sato, K.; Sauvage, G.; Sauvan, E.; Savage, G.; Savard, P.; Sawyer, C.; Sawyer, L.; Saxon, J.; Sbarra, C.; Sbrizzi, A.; Scanlon, T.; Scannicchio, D. A.; Scarcella, M.; Scarfone, V.; Schaarschmidt, J.; Schacht, P.; Schaefer, D.; Schaefer, R.; Schaeffer, J.; Schaepe, S.; Schaetzel, S.; Schäfer, U.; Schaffer, A. C.; Schaile, D.; Schamberger, R. D.; Scharf, V.; Schegelsky, V. A.; Scheirich, D.; Schernau, M.; Schiavi, C.; Schillo, C.; Schioppa, M.; Schlenker, S.; Schmidt, E.; Schmieden, K.; Schmitt, C.; Schmitt, S.; Schmitt, S.; Schneider, B.; Schnellbach, Y. J.; Schnoor, U.; Schoeffel, L.; Schoening, A.; Schoenrock, B. D.; Schopf, E.; Schorlemmer, A. L. S.; Schott, M.; Schouten, D.; Schovancova, J.; Schramm, S.; Schreyer, M.; Schroeder, C.; Schuh, N.; Schultens, M. J.; Schultz-Coulon, H.-C.; Schulz, H.; Schumacher, M.; Schumm, B. A.; Schune, Ph.; Schwanenberger, C.; Schwartzman, A.; Schwarz, T. A.; Schwegler, Ph.; Schweiger, H.; Schwemling, Ph.; Schwienhorst, R.; Schwindling, J.; Schwindt, T.; Sciacca, F. 
G.; Scifo, E.; Sciolla, G.; Scuri, F.; Scutti, F.; Searcy, J.; Sedov, G.; Sedykh, E.; Seema, P.; Seidel, S. C.; Seiden, A.; Seifert, F.; Seixas, J. M.; Sekhniaidze, G.; Sekhon, K.; Sekula, S. J.; Seliverstov, D. M.; Semprini-Cesari, N.; Serfon, C.; Serin, L.; Serkin, L.; Serre, T.; Sessa, M.; Seuster, R.; Severini, H.; Sfiligoj, T.; Sforza, F.; Sfyrla, A.; Shabalina, E.; Shamim, M.; Shan, L. Y.; Shang, R.; Shank, J. T.; Shapiro, M.; Shatalov, P. B.; Shaw, K.; Shaw, S. M.; Shcherbakova, A.; Shehu, C. Y.; Sherwood, P.; Shi, L.; Shimizu, S.; Shimmin, C. O.; Shimojima, M.; Shiyakova, M.; Shmeleva, A.; Saadi, D. Shoaleh; Shochet, M. J.; Shojaii, S.; Shrestha, S.; Shulga, E.; Shupe, M. A.; Shushkevich, S.; Sicho, P.; Sidebo, P. E.; Sidiropoulou, O.; Sidorov, D.; Sidoti, A.; Siegert, F.; Sijacki, Dj.; Silva, J.; Silver, Y.; Silverstein, S. B.; Simak, V.; Simard, O.; Simic, Lj.; Simion, S.; Simioni, E.; Simmons, B.; Simon, D.; Simoniello, R.; Sinervo, P.; Sinev, N. B.; Sioli, M.; Siragusa, G.; Sisakyan, A. N.; Sivoklokov, S. Yu.; Sjölin, J.; Sjursen, T. B.; Skinner, M. B.; Skottowe, H. P.; Skubic, P.; Slater, M.; Slavicek, T.; Slawinska, M.; Sliwa, K.; Smakhtin, V.; Smart, B. H.; Smestad, L.; Smirnov, S. Yu.; Smirnov, Y.; Smirnova, L. N.; Smirnova, O.; Smith, M. N. K.; Smith, R. W.; Smizanska, M.; Smolek, K.; Snesarev, A. A.; Snidero, G.; Snyder, S.; Sobie, R.; Socher, F.; Soffer, A.; Soh, D. A.; Solans, C. A.; Solar, M.; Solc, J.; Soldatov, E. Yu.; Soldevila, U.; Solodkov, A. A.; Soloshenko, A.; Solovyanov, O. V.; Solovyev, V.; Sommer, P.; Song, H. Y.; Soni, N.; Sood, A.; Sopczak, A.; Sopko, B.; Sopko, V.; Sorin, V.; Sosa, D.; Sosebee, M.; Sotiropoulou, C. L.; Soualah, R.; Soukharev, A. M.; South, D.; Sowden, B. C.; Spagnolo, S.; Spalla, M.; Spanò, F.; Spearman, W. R.; Sperlich, D.; Spettel, F.; Spighi, R.; Spigo, G.; Spiller, L. A.; Spousta, M.; Spreitzer, T.; St. Denis, R. D.; Staerz, S.; Stahlman, J.; Stamen, R.; Stamm, S.; Stanecka, E.; Stanescu, C.; Stanescu-Bellu, M.; Stanitzki, M. M.; Stapnes, S.; Starchenko, E. A.; Stark, J.; Staroba, P.; Starovoitov, P.; Staszewski, R.; Stavina, P.; Steinberg, P.; Stelzer, B.; Stelzer, H. J.; Stelzer-Chilton, O.; Stenzel, H.; Stewart, G. A.; Stillings, J. A.; Stockton, M. C.; Stoebe, M.; Stoicea, G.; Stolte, P.; Stonjek, S.; Stradling, A. R.; Straessner, A.; Stramaglia, M. E.; Strandberg, J.; Strandberg, S.; Strandlie, A.; Strauss, E.; Strauss, M.; Strizenec, P.; Ströhmer, R.; Strom, D. M.; Stroynowski, R.; Strubig, A.; Stucci, S. A.; Stugu, B.; Styles, N. A.; Su, D.; Su, J.; Subramaniam, R.; Succurro, A.; Sugaya, Y.; Suhr, C.; Suk, M.; Sulin, V. V.; Sultansoy, S.; Sumida, T.; Sun, S.; Sun, X.; Sundermann, J. E.; Suruliz, K.; Susinno, G.; Sutton, M. R.; Suzuki, S.; Svatos, M.; Swedish, S.; Swiatlowski, M.; Sykora, I.; Sykora, T.; Ta, D.; Taccini, C.; Tackmann, K.; Taenzer, J.; Taffard, A.; Tafirout, R.; Taiblum, N.; Takai, H.; Takashima, R.; Takeda, H.; Takeshita, T.; Takubo, Y.; Talby, M.; Talyshev, A. A.; Tam, J. Y. C.; Tan, K. G.; Tanaka, J.; Tanaka, R.; Tanaka, S.; Tannenwald, B. B.; Tannoury, N.; Tapprogge, S.; Tarem, S.; Tarrade, F.; Tartarelli, G. F.; Tas, P.; Tasevsky, M.; Tashiro, T.; Tassi, E.; Delgado, A. Tavares; Tayalati, Y.; Taylor, F. E.; Taylor, G. N.; Taylor, W.; Teischinger, F. A.; Castanheira, M. Teixeira Dias; Teixeira-Dias, P.; Temming, K. K.; Kate, H. Ten; Teng, P. K.; Teoh, J. J.; Tepel, F.; Terada, S.; Terashi, K.; Terron, J.; Terzo, S.; Testa, M.; Teuscher, R. J.; Theveneaux-Pelzer, T.; Thomas, J. 
P.; Thomas-Wilsker, J.; Thompson, E. N.; Thompson, P. D.; Thompson, R. J.; Thompson, A. S.; Thomsen, L. A.; Thomson, E.; Thomson, M.; Thun, R. P.; Tibbetts, M. J.; Torres, R. E. Ticse; Tikhomirov, V. O.; Tikhonov, Yu. A.; Timoshenko, S.; Tiouchichine, E.; Tipton, P.; Tisserant, S.; Todome, K.; Todorov, T.; Todorova-Nova, S.; Tojo, J.; Tokár, S.; Tokushuku, K.; Tollefson, K.; Tolley, E.; Tomlinson, L.; Tomoto, M.; Tompkins, L.; Toms, K.; Torrence, E.; Torres, H.; Pastor, E. Torró; Toth, J.; Touchard, F.; Tovey, D. R.; Trefzger, T.; Tremblet, L.; Tricoli, A.; Trigger, I. M.; Trincaz-Duvoid, S.; Tripiana, M. F.; Trischuk, W.; Trocmé, B.; Troncon, C.; Trottier-McDonald, M.; Trovatelli, M.; True, P.; Truong, L.; Trzebinski, M.; Trzupek, A.; Tsarouchas, C.; Tseng, J. C.-L.; Tsiareshka, P. V.; Tsionou, D.; Tsipolitis, G.; Tsirintanis, N.; Tsiskaridze, S.; Tsiskaridze, V.; Tskhadadze, E. G.; Tsukerman, I. I.; Tsulaia, V.; Tsuno, S.; Tsybychev, D.; Tudorache, A.; Tudorache, V.; Tuna, A. N.; Tupputi, S. A.; Turchikhin, S.; Turecek, D.; Turra, R.; Turvey, A. J.; Tuts, P. M.; Tykhonov, A.; Tylmad, M.; Tyndel, M.; Ueda, I.; Ueno, R.; Ughetto, M.; Ugland, M.; Uhlenbrock, M.; Ukegawa, F.; Unal, G.; Undrus, A.; Unel, G.; Ungaro, F. C.; Unno, Y.; Unverdorben, C.; Urban, J.; Urquijo, P.; Urrejola, P.; Usai, G.; Usanova, A.; Vacavant, L.; Vacek, V.; Vachon, B.; Valderanis, C.; Valencic, N.; Valentinetti, S.; Valero, A.; Valery, L.; Valkar, S.; Gallego, E. Valladolid; Vallecorsa, S.; Ferrer, J. A. Valls; Van Den Wollenberg, W.; Van Der Deijl, P. C.; van der Geer, R.; van der Graaf, H.; Van Der Leeuw, R.; van Eldik, N.; van Gemmeren, P.; Van Nieuwkoop, J.; van Vulpen, I.; van Woerden, M. C.; Vanadia, M.; Vandelli, W.; Vanguri, R.; Vaniachine, A.; Vannucci, F.; Vardanyan, G.; Vari, R.; Varnes, E. W.; Varol, T.; Varouchas, D.; Vartapetian, A.; Varvell, K. E.; Vassilakopoulos, V. I.; Vazeille, F.; Schroeder, T. Vazquez; Veatch, J.; Veloce, L. M.; Veloso, F.; Velz, T.; Veneziano, S.; Ventura, A.; Ventura, D.; Venturi, M.; Venturi, N.; Venturini, A.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, J. C.; Vest, A.; Vetterli, M. C.; Viazlo, O.; Vichou, I.; Vickey, T.; Boeriu, O. E. Vickey; Viehhauser, G. H. A.; Viel, S.; Vigne, R.; Villa, M.; Perez, M. Villaplana; Vilucchi, E.; Vincter, M. G.; Vinogradov, V. B.; Vivarelli, I.; Vaque, F. Vives; Vlachos, S.; Vladoiu, D.; Vlasak, M.; Vogel, M.; Vokac, P.; Volpi, G.; Volpi, M.; von der Schmitt, H.; von Radziewski, H.; von Toerne, E.; Vorobel, V.; Vorobev, K.; Vos, M.; Voss, R.; Vossebeld, J. H.; Vranjes, N.; Milosavljevic, M. Vranjes; Vrba, V.; Vreeswijk, M.; Vuillermet, R.; Vukotic, I.; Vykydal, Z.; Wagner, P.; Wagner, W.; Wahlberg, H.; Wahrmund, S.; Wakabayashi, J.; Walder, J.; Walker, R.; Walkowiak, W.; Wang, C.; Wang, F.; Wang, H.; Wang, H.; Wang, J.; Wang, J.; Wang, K.; Wang, R.; Wang, S. M.; Wang, T.; Wang, T.; Wang, X.; Wanotayaroj, C.; Warburton, A.; Ward, C. P.; Wardrope, D. R.; Warsinsky, M.; Washbrook, A.; Wasicki, C.; Watkins, P. M.; Watson, A. T.; Watson, I. J.; Watson, M. F.; Watts, G.; Watts, S.; Waugh, B. M.; Webb, S.; Weber, M. S.; Weber, S. W.; Webster, J. S.; Weidberg, A. R.; Weinert, B.; Weingarten, J.; Weiser, C.; Weits, H.; Wells, P. S.; Wenaus, T.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M.; Werner, P.; Wessels, M.; Wetter, J.; Whalen, K.; Wharton, A. M.; White, A.; White, M. J.; White, R.; White, S.; Whiteson, D.; Wickens, F. J.; Wiedenmann, W.; Wielers, M.; Wienemann, P.; Wiglesworth, C.; Wiik-Fuchs, L. A. M.; Wildauer, A.; Wilkens, H. 
G.; Williams, H. H.; Williams, S.; Willis, C.; Willocq, S.; Wilson, A.; Wilson, J. A.; Wingerter-Seez, I.; Winklmeier, F.; Winter, B. T.; Wittgen, M.; Wittkowski, J.; Wollstadt, S. J.; Wolter, M. W.; Wolters, H.; Wosiek, B. K.; Wotschack, J.; Woudstra, M. J.; Wozniak, K. W.; Wu, M.; Wu, M.; Wu, S. L.; Wu, X.; Wu, Y.; Wyatt, T. R.; Wynne, B. M.; Xella, S.; Xu, D.; Xu, L.; Yabsley, B.; Yacoob, S.; Yakabe, R.; Yamada, M.; Yamaguchi, Y.; Yamamoto, A.; Yamamoto, S.; Yamanaka, T.; Yamauchi, K.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, H.; Yang, Y.; Yao, W.-M.; Yasu, Y.; Yatsenko, E.; Wong, K. H. Yau; Ye, J.; Ye, S.; Yeletskikh, I.; Yen, A. L.; Yildirim, E.; Yorita, K.; Yoshida, R.; Yoshihara, K.; Young, C.; Young, C. J. S.; Youssef, S.; Yu, D. R.; Yu, J.; Yu, J. M.; Yu, J.; Yuan, L.; Yuen, S. P. Y.; Yurkewicz, A.; Yusuff, I.; Zabinski, B.; Zaidan, R.; Zaitsev, A. M.; Zalieckas, J.; Zaman, A.; Zambito, S.; Zanello, L.; Zanzi, D.; Zeitnitz, C.; Zeman, M.; Zemla, A.; Zengel, K.; Zenin, O.; Ženiš, T.; Zerwas, D.; Zhang, D.; Zhang, F.; Zhang, H.; Zhang, J.; Zhang, L.; Zhang, R.; Zhang, X.; Zhang, Z.; Zhao, X.; Zhao, Y.; Zhao, Z.; Zhemchugov, A.; Zhong, J.; Zhou, B.; Zhou, C.; Zhou, L.; Zhou, L.; Zhou, N.; Zhu, C. G.; Zhu, H.; Zhu, J.; Zhu, Y.; Zhuang, X.; Zhukov, K.; Zibell, A.; Zieminska, D.; Zimine, N. I.; Zimmermann, C.; Zimmermann, S.; Zinonos, Z.; Zinser, M.; Ziolkowski, M.; Živković, L.; Zobernig, G.; Zoccoli, A.; Nedden, M. zur; Zurzolo, G.; Zwalinski, L.

    2016-11-01

    The large rate of multiple simultaneous proton-proton interactions, or pile-up, generated by the Large Hadron Collider in Run 1 required the development of many new techniques to mitigate the adverse effects of these conditions. This paper describes the methods employed in the ATLAS experiment to correct for the impact of pile-up on jet energy and jet shapes, and for the presence of spurious additional jets, with a primary focus on the large 20.3 fb^{-1} data sample collected at a centre-of-mass energy of √s = 8 TeV. The energy correction techniques that incorporate sophisticated estimates of the average pile-up energy density and tracking information are presented. Jet-to-vertex association techniques are discussed and projections of performance for the future are considered. Lastly, the extension of these techniques to mitigate the effect of pile-up on jet shapes using subtraction and grooming procedures is presented.
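
    The abstract above summarizes the techniques at a high level; the paper itself specifies the ATLAS implementation. As a rough illustration of the area-based idea behind the "average pile-up energy density" correction, the Python sketch below applies the generic subtraction pT_corr = pT - ρ·A, with ρ estimated as the median pT density over soft patches. The patch grid, jets, and numbers are toy assumptions for illustration, not values from the paper.

      import numpy as np

      def estimate_rho(patch_pt, patch_area):
          # Median pT/area over many soft patches approximates the pile-up density rho.
          return np.median(np.asarray(patch_pt) / np.asarray(patch_area))

      def subtract_pileup(jet_pt, jet_area, rho):
          # Generic area-based correction pT_corr = pT - rho * A, clipped at zero so a
          # jet whose pT is exhausted by the subtraction is treated as pure pile-up.
          return np.maximum(np.asarray(jet_pt) - rho * np.asarray(jet_area), 0.0)

      # Toy event: diffuse pile-up of roughly 10 GeV per unit area plus three jets,
      # the last of which is soft enough to be absorbed by the correction.
      rng = np.random.default_rng(0)
      patch_area = np.full(50, 0.4)
      patch_pt = rng.normal(4.0, 0.5, size=50)
      rho = estimate_rho(patch_pt, patch_area)
      jet_pt = np.array([60.0, 45.0, 5.0])
      jet_area = np.array([0.5, 0.5, 0.6])
      print(rho, subtract_pileup(jet_pt, jet_area, rho))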

  17. Performance of pile-up mitigation techniques for jets in pp collisions at √s=8 TeV using the ATLAS detector

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2016-10-27

    The large rate of multiple simultaneous proton–proton interactions, or pile-up, generated by the Large Hadron Collider in Run 1 required the development of many new techniques to mitigate the adverse effects of these conditions. This paper describes the methods employed in the ATLAS experiment to correct for the impact of pile-up on jet energy and jet shapes, and for the presence of spurious additional jets, with a primary focus on the large 20.3 fb^{-1} data sample collected at a centre-of-mass energy of √s = 8 TeV. The energy correction techniques that incorporate sophisticated estimates of the average pile-up energy density and tracking information are presented. Jet-to-vertex association techniques are discussed and projections of performance for the future are considered. Lastly, the extension of these techniques to mitigate the effect of pile-up on jet shapes using subtraction and grooming procedures is presented.
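
    As a companion to the "subtraction and grooming procedures" mentioned above, here is a schematic trimming-style filter in the same toy spirit: constituents are binned into small (η, φ) cells that stand in for subjets, and cells carrying less than a fraction f_cut of the jet pT are discarded as likely pile-up. Real trimming reclusters constituents with a jet algorithm; the grid, cell size, and f_cut value here are illustrative assumptions only.

      import numpy as np

      def trim(eta, phi, pt, cell=0.2, f_cut=0.05):
          # Bin constituents into (eta, phi) cells; keep cells with >= f_cut of jet pT.
          keys = list(zip(np.floor(np.asarray(eta) / cell), np.floor(np.asarray(phi) / cell)))
          cell_pt = {}
          for key, p in zip(keys, pt):
              cell_pt[key] = cell_pt.get(key, 0.0) + p
          jet_pt = float(np.sum(pt))
          return np.array([cell_pt[key] >= f_cut * jet_pt for key in keys])

      # Toy jet: a hard core near (0, 0) plus soft, diffuse pile-up constituents.
      rng = np.random.default_rng(1)
      eta = np.concatenate([rng.normal(0.0, 0.05, 20), rng.uniform(-0.8, 0.8, 40)])
      phi = np.concatenate([rng.normal(0.0, 0.05, 20), rng.uniform(-0.8, 0.8, 40)])
      pt = np.concatenate([rng.uniform(2.0, 6.0, 20), rng.uniform(0.1, 0.4, 40)])
      keep = trim(eta, phi, pt)
      print(pt.sum(), pt[keep].sum())  # groomed pT falls back toward the hard core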

  18. Performance of pile-up mitigation techniques for jets in pp collisions at √s=8 TeV using the ATLAS detector.

    PubMed

    Aad, G; Abbott, B; Abdallah, J; ...

    2016-01-01

The large rate of multiple simultaneous proton-proton interactions, or pile-up, generated by the Large Hadron Collider in Run 1 required the development of many new techniques to mitigate the adverse effects of these conditions. This paper describes the methods employed in the ATLAS experiment to correct for the impact of pile-up on jet energy and jet shapes, and for the presence of spurious additional jets, with a primary focus on the large 20.3 fb⁻¹ data sample collected at a centre-of-mass energy of 8 TeV. The energy correction techniques that incorporate sophisticated estimates of the average pile-up energy density and tracking information are presented. Jet-to-vertex association techniques are discussed and projections of performance for the future are considered. Lastly, the extension of these techniques to mitigate the effect of pile-up on jet shapes using subtraction and grooming procedures is presented.

  19. Quantitative measurement of thin phase objects: comparison of speckle deflectometry and defocus-variant lateral shear interferometry.

    PubMed

    Sjodahl, Mikael; Amer, Eynas

    2018-05-10

The two techniques of lateral shear interferometry and speckle deflectometry are analyzed in a common optical system for their ability to measure phase gradient fields of a thin phase object. The optical system is designed to introduce a shear in the frequency domain of a telecentric imaging system, which gives both techniques a sensitivity proportional to the defocus introduced. In this implementation, both techniques successfully measure the horizontal component of the phase gradient field. The response of both techniques scales linearly with the defocus distance, and their precision is comparable, with a random error on the order of a few rad/mm. It is further concluded that the precision of the two techniques relates to the transverse speckle size in opposite ways. While a large spatial coherence width, and correspondingly a large lateral speckle size, makes lateral shear interferometry less susceptible to defocus, a large lateral speckle size is detrimental to speckle correlation. The susceptibility to the magnitude of the defocus is greater for the lateral shear interferometry technique than for the speckle deflectometry technique. The two techniques provide the same type of information; however, there are a few fundamental differences. Lateral shear interferometry relies on a special hardware configuration in which the shear angle is intrinsically integrated into the system. The design of a system sensitive to both in-plane phase gradient components requires a more complex configuration and is not considered in this paper. Speckle deflectometry, on the other hand, requires no special hardware, and both components of the phase gradient field are given directly from the measured speckle deformation field.

  20. The application of artificial intelligence techniques to large distributed networks

    NASA Technical Reports Server (NTRS)

    Dubyah, R.; Smith, T. R.; Star, J. L.

    1985-01-01

Data accessibility and information transfer efforts, including the land resources information system pilot, are structured as large computer information networks. These pilot efforts aim to reduce the difficulty of finding and using data, reduce processing costs, and minimize incompatibility between data sources. Artificial Intelligence (AI) techniques have been suggested as a means of achieving these goals. The applicability of certain AI techniques is explored in the context of distributed problem solving systems and the pilot land data system (PLDS). The topics discussed include: PLDS and its data processing requirements, expert systems and PLDS, distributed problem solving systems, AI problem solving paradigms, query processing, and distributed data bases.

  1. A data compression technique for synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Minden, G. J.

    1986-01-01

A data compression technique is developed for synthetic aperture radar (SAR) imagery. The technique is based on an SAR image model and is designed to preserve the local statistics in the image by an adaptive, variable-rate modification of block truncation coding (BTC). A data rate of approximately 1.6 bit/pixel is achieved with the technique while maintaining image quality and preserving cultural (pointlike) targets. The algorithm requires no large data storage and is computationally simple.
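
    The adaptive variable-rate scheme above builds on classical block truncation coding. For orientation, here is a minimal sketch of basic fixed-rate BTC on a single image block in Python/NumPy; the 4x4 block size and mean-threshold rule are the textbook defaults, not the paper's adaptive SAR variant.

```python
import numpy as np

def btc_encode(block):
    """Basic block truncation coding: keep a bitmap plus two levels (a, b)
    chosen so that the block mean and variance are preserved."""
    mean, std = block.mean(), block.std()
    bitmap = block >= mean                    # 1 bit per pixel
    q = int(bitmap.sum())                     # pixels at or above the mean
    m = block.size
    if q in (0, m):                           # flat block: one level suffices
        return bitmap, mean, mean
    a = mean - std * np.sqrt(q / (m - q))     # reconstruction level for zeros
    b = mean + std * np.sqrt((m - q) / q)     # reconstruction level for ones
    return bitmap, a, b

def btc_decode(bitmap, a, b):
    return np.where(bitmap, b, a)

# 4x4 block: 16 bitmap bits + two 8-bit levels ~ 2 bit/pixel at this size
block = np.random.default_rng(0).integers(0, 256, (4, 4)).astype(float)
rec = btc_decode(*btc_encode(block))
print(np.abs(block - rec).mean())             # reconstruction error
```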

  2. Application of the shaped electrode technique to a large area rectangular capacitively coupled plasma reactor to suppress standing wave nonuniformity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sansonnens, L.; Schmidt, H.; Howling, A.A.

The electromagnetic standing wave effect can become the main source of nonuniformity limiting the use of very high frequency in large area reactors exceeding the 1 m² required for industrial applications. Recently, it has been proposed and shown experimentally in a cylindrical reactor that a shaped electrode in place of the conventional flat electrode can be used in order to suppress the electromagnetic standing wave nonuniformity. In this study, we show experimental measurements demonstrating that the shaped electrode technique can also be applied in large area rectangular reactors. We also present results of electromagnetic screening by a conducting substrate which has important consequences for industrial application of the shaped electrode technique.

  3. Software Reliability Issues Concerning Large and Safety Critical Software Systems

    NASA Technical Reports Server (NTRS)

    Kamel, Khaled; Brown, Barbara

    1996-01-01

This research was undertaken to provide NASA with a survey of state-of-the-art techniques used in industry and academia to provide safe, reliable, and maintainable software for large systems. Such systems must match the complexity and strict safety requirements of NASA's shuttle system. In particular, the Launch Processing System (LPS) is being considered for replacement. The LPS is responsible for monitoring and commanding the shuttle during test, repair, and launch phases. NASA built this system in the 1970s using mostly hardware techniques to provide for increased reliability, but it did so often using custom-built equipment, which has not been able to keep up with current technologies. This report surveys the major techniques used in industry and academia to ensure reliability in large and critical computer systems.

  4. Using cognitive task analysis to develop simulation-based training for medical tasks.

    PubMed

    Cannon-Bowers, Jan; Bowers, Clint; Stout, Renee; Ricci, Katrina; Hildabrand, Annette

    2013-10-01

Pressures to increase the efficacy and effectiveness of medical training are causing the Department of Defense to investigate the use of simulation technologies. This article describes a comprehensive cognitive task analysis technique that can be used to simultaneously generate training requirements, performance metrics, scenario requirements, and simulator/simulation requirements for medical tasks. On the basis of a variety of existing techniques, we developed a scenario-based approach that asks experts to perform the targeted task multiple times, with each pass probing a different dimension of the training development process. In contrast to many cognitive task analysis approaches, we argue that our technique can be highly cost effective because it is designed to accomplish multiple goals. The technique was pilot tested with expert instructors from a large military medical training command. These instructors were employed to generate requirements for two selected combat casualty care tasks: cricothyroidotomy and hemorrhage control. Results indicated that the technique is feasible to use and generates usable data to inform simulation-based training system design. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.

  5. Fourier transform infrared microspectroscopy for the analysis of the biochemical composition of C. elegans worms.

    PubMed

    Sheng, Ming; Gorzsás, András; Tuck, Simon

    2016-01-01

    Changes in intermediary metabolism have profound effects on many aspects of C. elegans biology including growth, development and behavior. However, many traditional biochemical techniques for analyzing chemical composition require relatively large amounts of starting material precluding the analysis of mutants that cannot be grown in large amounts as homozygotes. Here we describe a technique for detecting changes in the chemical compositions of C. elegans worms by Fourier transform infrared microspectroscopy. We demonstrate that the technique can be used to detect changes in the relative levels of carbohydrates, proteins and lipids in one and the same worm. We suggest that Fourier transform infrared microspectroscopy represents a useful addition to the arsenal of techniques for metabolic studies of C. elegans worms.

  6. Structural performance analysis and redesign

    NASA Technical Reports Server (NTRS)

    Whetstone, W. D.

    1978-01-01

The program performs stress, buckling, and vibrational analysis of large, linear finite-element systems in excess of 50,000 degrees of freedom. Cost, execution time, and storage requirements are kept reasonable through the use of sparse matrix solution techniques and other computational and data management procedures designed for problems of very large size.
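
    The abstract credits sparse matrix techniques for the modest cost and storage at this scale. As a generic illustration (not the program's actual solver), the sketch below assembles a toy spring-chain stiffness matrix with 50,000 degrees of freedom and solves it with SciPy's sparse direct solver, storing only the nonzero entries.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Tridiagonal stiffness matrix of a fixed-free chain of unit springs:
# a stand-in for a large finite-element system stored in sparse form
n = 50_000                                  # degrees of freedom
main = np.full(n, 2.0)
main[-1] = 1.0                              # last node is a free end
off = np.full(n - 1, -1.0)
K = sp.diags([off, main, off], [-1, 0, 1], format="csc")

f = np.zeros(n)
f[-1] = 1.0                                 # unit load at the free end

u = spla.spsolve(K, f)                      # sparse direct solve
print(u[-1])                                # tip displacement (= n here)
```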

  7. Using Technology to Facilitate Grading Consistency in Large Classes

    ERIC Educational Resources Information Center

    Cathcart, Abby; Neale, Larry

    2012-01-01

    University classes in marketing are often large and therefore require teams of teachers to cover all of the necessary activities. A major problem with teaching teams is the inconsistency that results from myriad individuals offering subjective opinions (Preston 1997). This innovation uses the latest moderation techniques along with Audience…

  8. Low frequency radio synthesis imaging of the galactic center region

    NASA Astrophysics Data System (ADS)

    Nord, Michael Evans

    2005-11-01

The Very Large Array radio interferometer has been equipped with new receivers to allow observations at 330 and 74 MHz, frequencies much lower than were previously possible with this instrument. Though the VLA dishes are not optimal for working at these frequencies, the system is successful and regular observations are now taken at these frequencies. However, new data analysis techniques are required to work at these frequencies. The technique of self-calibration, used to remove small atmospheric effects at higher frequencies, has been adapted to compensate for ionospheric turbulence in much the same way that adaptive optics is used in the optical regime. Faceted imaging techniques are required to compensate for the noncoplanar image distortion that affects the system due to the wide fields of view at these frequencies (~2.3° at 330 MHz and ~11° at 74 MHz). Furthermore, radio frequency interference is a much larger problem at these frequencies than at higher frequencies, and novel approaches to its mitigation are required. These new techniques and this new system allow imaging of the radio sky at sensitivities and resolutions orders of magnitude higher than were possible with the low frequency systems of decades past. In this work I discuss the advancements in low frequency data techniques required to make high resolution, high sensitivity, large field of view measurements with the new Very Large Array low frequency system, and then detail the results of turning this new system and these techniques on the center of our Milky Way Galaxy. At 330 MHz I image the Galactic center region with roughly 10 arcsecond resolution and 1.6 mJy beam⁻¹ sensitivity. New Galactic center nonthermal filaments, new pulsar candidates, and the lowest frequency detection to date of the radio source associated with our Galaxy's central massive black hole result. At 74 MHz I image a region of the sky roughly 40° x 6° with ~10 arcminute resolution. I use the high opacity of H II regions at 74 MHz to extract three-dimensional data on the distribution of Galactic cosmic ray emissivity, a measurement possible only at low radio frequencies.

  9. Inquiry-Based Experiments for Large-Scale Introduction to PCR and Restriction Enzyme Digests

    ERIC Educational Resources Information Center

    Johanson, Kelly E.; Watt, Terry J.

    2015-01-01

    Polymerase chain reaction and restriction endonuclease digest are important techniques that should be included in all Biochemistry and Molecular Biology laboratory curriculums. These techniques are frequently taught at an advanced level, requiring many hours of student and faculty time. Here we present two inquiry-based experiments that are…

  10. For Mole Problems, Call Avogadro: 602-1023.

    ERIC Educational Resources Information Center

    Uthe, R. E.

    2002-01-01

    Describes techniques to help introductory students become familiar with Avogadro's number and mole calculations. Techniques involve estimating numbers of common objects then calculating the length of time needed to count large numbers of them. For example, the immense amount of time required to count a mole of sand grains at one grain per second…

  11. Videometric Applications in Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Radeztsky, R. H.; Liu, Tian-Shu

    1997-01-01

    Videometric measurements in wind tunnels can be very challenging due to the limited optical access, model dynamics, optical path variability during testing, large range of temperature and pressure, hostile environment, and the requirements for high productivity and large amounts of data on a daily basis. Other complications for wind tunnel testing include the model support mechanism and stringent surface finish requirements for the models in order to maintain aerodynamic fidelity. For these reasons nontraditional photogrammetric techniques and procedures sometimes must be employed. In this paper several such applications are discussed for wind tunnels which include test conditions with Mach number from low speed to hypersonic, pressures from less than an atmosphere to nearly seven atmospheres, and temperatures from cryogenic to above room temperature. Several of the wind tunnel facilities are continuous flow while one is a short duration blowdown facility. Videometric techniques and calibration procedures developed to measure angle of attack, the change in wing twist and bending induced by aerodynamic load, and the effects of varying model injection rates are described. Some advantages and disadvantages of these techniques are given and comparisons are made with non-optical and more traditional video photogrammetric techniques.

  12. Large-area metallic photonic lattices for military applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luk, Ting Shan

    2007-11-01

In this project we developed photonic crystal modeling capability and fabrication technology that is scaleable to large area. An intelligent optimization code was developed to find the optimal structure for the desired spectral response. In terms of fabrication, an exhaustive survey of fabrication techniques that would meet the large area requirement was reduced to Deep X-ray Lithography (DXRL) and nano-imprint. Using DXRL, we fabricated a gold logpile photonic crystal in the <100> plane. For the nano-imprint technique, we fabricated a cubic array of gold squares. These two examples also represent two classes of metallic photonic crystal topologies, the connected network and cermet arrangement.

  13. Advanced Hypervelocity Aerophysics Facility Workshop

    NASA Technical Reports Server (NTRS)

    Witcofski, Robert D. (Compiler); Scallion, William I. (Compiler)

    1989-01-01

    The primary objective of the workshop was to obtain a critical assessment of a concept for a large, advanced hypervelocity ballistic range test facility powered by an electromagnetic launcher, which was proposed by the Langley Research Center. It was concluded that the subject large-scale facility was feasible and would provide the required ground-based capability for performing tests at entry flight conditions (velocity and density) on large, complex, instrumented models. It was also concluded that advances in remote measurement techniques and particularly onboard model instrumentation, light-weight model construction techniques, and model electromagnetic launcher (EML) systems must be made before any commitment for the construction of such a facility can be made.

  14. Use of high-order spectral moments in Doppler weather radar

    NASA Astrophysics Data System (ADS)

    di Vito, A.; Galati, G.; Veredice, A.

Three techniques to estimate the skewness and kurtosis of measured precipitation spectra are evaluated. These are: (1) an extension of the pulse-pair technique, (2) fitting the autocorrelation function (ACF) with a least-squares polynomial and differentiating it, and (3) autoregressive spectral estimation. The third technique provides the best results but has an exceedingly large computational burden. The first technique does not supply any useful results due to the crude approximation of the derivatives of the ACF. The second technique requires further study to reduce its variance.
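
    As background for technique (1), the sketch below implements the classical pulse-pair estimator, recovering power, mean Doppler velocity, and spectrum width from the lag-0 and lag-1 autocorrelations of complex (I/Q) radar samples. The Gaussian-spectrum width formula and the synthetic test signal are standard textbook choices, not details from the paper.

```python
import numpy as np

def pulse_pair(iq, prt, wavelength):
    """Classical pulse-pair estimates from complex radar samples."""
    r0 = np.mean(np.abs(iq) ** 2)               # lag-0 autocorrelation (power)
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))     # lag-1 autocorrelation
    v = -wavelength / (4 * np.pi * prt) * np.angle(r1)   # mean velocity
    ratio = np.clip(np.abs(r1) / r0, 1e-12, 1.0)
    # Spectrum width assuming a Gaussian Doppler spectrum
    w = wavelength / (2 * np.pi * prt * np.sqrt(2)) * np.sqrt(np.log(1 / ratio))
    return r0, v, w

# Synthetic test: a single Doppler line (12 m/s) plus a little noise
rng = np.random.default_rng(1)
prt, wavelength = 1e-3, 0.1                      # seconds, metres
t = np.arange(128) * prt
iq = np.exp(-4j * np.pi * 12.0 * t / wavelength)
iq += 0.1 * (rng.standard_normal(128) + 1j * rng.standard_normal(128))
print(pulse_pair(iq, prt, wavelength))           # v comes out near 12 m/s
```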

  15. 4D Hyperspherical Harmonic (HyperSPHARM) Representation of Multiple Disconnected Brain Subcortical Structures

    PubMed Central

    Hosseinbor, A. Pasha; Chung, Moo K.; Schaefer, Stacey M.; van Reekum, Carien M.; Peschke-Schmitz, Lara; Sutterer, Matt; Alexander, Andrew L.; Davidson, Richard J.

    2014-01-01

We present a novel surface parameterization technique using hyperspherical harmonics (HSH) to represent compact, multiple, disconnected brain subcortical structures as a single analytic function. The proposed hyperspherical harmonic representation (HyperSPHARM) has many advantages over the widely used spherical harmonic (SPHARM) parameterization technique. SPHARM requires flattening a 3D surface to a 3D sphere, which can be time consuming for large surface meshes, and it cannot represent multiple disconnected objects with a single parameterization. On the other hand, HyperSPHARM treats a 3D object, via a simple stereographic projection, as the surface of a 4D hypersphere with an extremely large radius, hence avoiding the computationally demanding flattening process. HyperSPHARM is shown to achieve a better reconstruction with only 5 basis functions, compared to SPHARM, which requires more than 441. PMID:24505716
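
    The stereographic step can be made concrete with a short sketch: the inverse stereographic projection below maps 3D points onto a 4D hypersphere of a chosen (large) radius. The mapping constants are a common textbook form and may differ from the paper's exact convention.

```python
import numpy as np

def inverse_stereographic(points, radius):
    """Map 3D points onto a 4D hypersphere of the given radius (centered at
    the origin) via inverse stereographic projection."""
    sq = np.sum(points ** 2, axis=1, keepdims=True)      # |x|^2 per point
    denom = sq + radius ** 2
    xyz = 2 * radius ** 2 * points / denom               # first three coords
    w = radius * (sq - radius ** 2) / denom              # fourth coordinate
    return np.hstack([xyz, w])

pts = np.random.default_rng(2).standard_normal((5, 3))   # toy surface points
u = inverse_stereographic(pts, radius=1e3)               # very large radius
print(np.linalg.norm(u, axis=1))                         # all equal the radius
```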

  16. Compression technique for large statistical data bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eggers, S.J.; Olken, F.; Shoshani, A.

    1981-03-01

The compression of large statistical databases is explored, and techniques are proposed for organizing the compressed data such that the time required to access the data is logarithmic. The techniques exploit special characteristics of statistical databases, namely, variation in the space required for the natural encoding of integer attributes, a prevalence of a few repeating values or constants, and the clustering of both data of the same length and constants in long, separate series. The techniques are variations of run-length encoding, in which modified run-lengths for the series are extracted from the data stream and stored in a header, which is used to form the base level of a B-tree index into the database. The run-lengths are cumulative, and therefore the access time of the data is logarithmic in the size of the header. The details of the compression scheme and its implementation are discussed, several special cases are presented, and an analysis is given of the relative performance of the various versions.
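
    A simplified variant of the scheme is easy to sketch: exceptions to a repeating constant are stored with cumulative counts in a header, and a binary search over the header (standing in for the B-tree base level) locates any logical row in logarithmic time. The class and field names below are illustrative only.

```python
import bisect

class RunLengthColumn:
    """Store a column dominated by one repeating constant as a header of
    cumulative positions plus the list of exception values."""

    def __init__(self, values, constant):
        self.constant = constant
        self.cum = []            # 1-based positions at which exceptions occur
        self.exceptions = []     # the non-constant values, in order
        for pos, v in enumerate(values, start=1):
            if v != constant:
                self.cum.append(pos)
                self.exceptions.append(v)
        self.n = len(values)

    def __getitem__(self, i):
        """Random access in O(log h), h = header size."""
        if not 0 <= i < self.n:
            raise IndexError(i)
        j = bisect.bisect_left(self.cum, i + 1)
        if j < len(self.cum) and self.cum[j] == i + 1:
            return self.exceptions[j]
        return self.constant

col = RunLengthColumn([0, 0, 0, 7, 0, 0, 9, 0], constant=0)
print([col[i] for i in range(8)])        # [0, 0, 0, 7, 0, 0, 9, 0]
```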

  17. Large Composite Structures Processing Technologies for Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Clinton, R. G., Jr.; Vickers, J. H.; McMahon, W. M.; Hulcher, A. B.; Johnston, N. J.; Cano, R. J.; Belvin, H. L.; McIver, K.; Franklin, W.; Sidwell, D.

    2001-01-01

Significant efforts have been devoted to establishing the technology foundation needed to enable the progression to large-scale composite structures fabrication. We are not capable today of fabricating many of the composite structures envisioned for the second generation reusable launch vehicle (RLV). Conventional 'aerospace' manufacturing and processing methodologies (fiber placement, autoclave, tooling) will require substantial investment and lead time to scale up. Out-of-autoclave process techniques will require aggressive efforts to mature the selected technologies and to scale up. Focused composite processing technology development and demonstration programs utilizing the building block approach are required to enable the envisioned second generation RLV large composite structures applications. Government/industry partnerships have demonstrated success in this area and represent the best combination of skills and capabilities to achieve this goal.

  18. Improved technique that allows the performance of large-scale SNP genotyping on DNA immobilized by FTA technology.

    PubMed

    He, Hongbin; Argiro, Laurent; Dessein, Helia; Chevillard, Christophe

    2007-01-01

FTA technology is a novel method designed to simplify the collection, shipment, archiving and purification of nucleic acids from a wide variety of biological sources. The number of punches that can normally be obtained from a single specimen card is often, however, insufficient for testing the large numbers of loci required to identify genetic factors that control human susceptibility or resistance to multifactorial diseases. In this study, we propose an improved technique for performing large-scale SNP genotyping. We applied a whole genome amplification method to amplify DNA from buccal cell samples stabilized using FTA technology. The results show that using the improved technique it is possible to perform up to 15,000 genotypes from one buccal cell sample. Furthermore, the procedure is simple. We consider this improved technique to be a promising method for performing large-scale SNP genotyping, because the FTA technology simplifies the collection, shipment, archiving and purification of DNA, while whole genome amplification of FTA card-bound DNA produces sufficient material for the determination of thousands of SNP genotypes.

  19. UMEL: a new regression tool to identify measurement peaks in LIDAR/DIAL systems for environmental physics applications.

    PubMed

    Gelfusa, M; Gaudio, P; Malizia, A; Murari, A; Vega, J; Richetta, M; Gonzalez, S

    2014-06-01

Recently, surveying large areas in an automatic way, for early detection of both harmful chemical agents and forest fires, has become a strategic objective of defence and public health organisations. The LIDAR and DIAL techniques are widely recognized as a cost-effective alternative for monitoring large portions of the atmosphere. To maximize the effectiveness of the measurements and to guarantee reliable monitoring of large areas, new data analysis techniques are required. In this paper, an original tool, the Universal Multi Event Locator, is applied to the problem of automatically identifying the time location of peaks in LIDAR and DIAL measurements for environmental physics applications. This analysis technique improves various aspects of the measurements, ranging from resilience to drift in the laser sources to an increase in the system sensitivity. The method is also fully general and purely software-based, and can therefore be applied to a large variety of problems without any additional cost. The potential of the proposed technique is exemplified with the help of data from various instruments acquired during several experimental campaigns in the field.
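
    UMEL itself is a regression-based locator; as a generic baseline that illustrates the peak-location task (and is plainly not UMEL), the sketch below finds echo peaks in a synthetic lidar-like return using SciPy's prominence-based peak detection, which tolerates a drifting baseline better than a fixed amplitude threshold.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic lidar-like return: decaying background plus two echo peaks
t = np.linspace(0.0, 10.0, 2000)                 # time axis (arbitrary units)
signal = np.exp(-t / 4.0)                        # smooth background
for t0, amp in [(3.0, 0.8), (7.0, 0.5)]:         # two echoes
    signal += amp * np.exp(-((t - t0) ** 2) / (2 * 0.05 ** 2))
signal += 0.01 * np.random.default_rng(3).standard_normal(t.size)

# Prominence separates genuine echoes from the sloping baseline
peaks, _ = find_peaks(signal, prominence=0.2)
print(t[peaks])                                  # near 3.0 and 7.0
```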

  20. Scalable Manufacturing of Plasmonic Nanodisk Dimers and Cusp Nanostructures using Salting-out Quenching Method and Colloidal Lithography

    PubMed Central

    Juluri, Bala Krishna; Chaturvedi, Neetu; Hao, Qingzhen; Lu, Mengqian; Velegol, Darrell; Jensen, Lasse; Huang, Tony Jun

    2014-01-01

    Localization of large electric fields in plasmonic nanostructures enables various processes such as single molecule detection, higher harmonic light generation, and control of molecular fluorescence and absorption. High-throughput, simple nanofabrication techniques are essential for implementing plasmonic nanostructures with large electric fields for practical applications. In this article we demonstrate a scalable, rapid, and inexpensive fabrication method based on the salting-out quenching technique and colloidal lithography for the fabrication of two types of nanostructures with large electric field: nanodisk dimers and cusp nanostructures. Our technique relies on fabricating polystyrene doublets from single beads by controlled aggregation and later using them as soft masks to fabricate metal nanodisk dimers and nanocusp structures. Both of these structures have a well-defined geometry for the localization of large electric fields comparable to structures fabricated by conventional nanofabrication techniques. We also show that various parameters in the fabrication process can be adjusted to tune the geometry of the final structures and control their plasmonic properties. With advantages in throughput, cost, and geometric tunability, our fabrication method can be valuable in many applications that require plasmonic nanostructures with large electric fields. PMID:21692473

  1. Pseudo-point transport technique: a new method for solving the Boltzmann transport equation in media with highly fluctuating cross sections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakhai, B.

A new method for solving radiation transport problems is presented. The heart of the technique is a new cross section processing procedure for the calculation of group-to-point and point-to-group cross section sets. The method is ideally suited for problems which involve media with highly fluctuating cross sections, where the results of traditional multigroup calculations are beclouded by the group averaging procedures employed. Extensive computational efforts, which would be required to evaluate double integrals in the multigroup treatment numerically, prohibit iteration to optimize the energy boundaries. On the other hand, use of point-to-point techniques (as in the stochastic technique) is often prohibitively expensive due to the large computer storage requirement. The pseudo-point code is a hybrid of the two aforementioned methods (group-to-group and point-to-point) - hence the name pseudo-point - that reduces the computational efforts of the former and the large core requirements of the latter. The pseudo-point code generates the group-to-point or the point-to-group transfer matrices, and can be coupled with existing transport codes to calculate pointwise energy-dependent fluxes. This approach yields much more detail than is available from conventional energy-group treatments. Due to the speed of this code, several iterations can be performed (with affordable computing effort) to optimize the energy boundaries and the weighting functions. The pseudo-point technique is demonstrated by solving six problems, each depicting a certain aspect of the technique. The results are presented as flux vs. energy at various spatial intervals. The sensitivity of the technique to the energy grid and the savings in computational effort are clearly demonstrated.

  2. Improving IT Portfolio Management Decision Confidence Using Multi-Criteria Decision Making and Hypervariate Display Techniques

    ERIC Educational Resources Information Center

    Landmesser, John Andrew

    2014-01-01

    Information technology (IT) investment decision makers are required to process large volumes of complex data. An existing body of knowledge relevant to IT portfolio management (PfM), decision analysis, visual comprehension of large volumes of information, and IT investment decision making suggest Multi-Criteria Decision Making (MCDM) and…

  3. Enabling PBPK model development through the application of freely available techniques for the creation of a chemically-annotated collection of literature

    EPA Science Inventory

    The creation of Physiologically Based Pharmacokinetic (PBPK) models for a new chemical requires the selection of an appropriate model structure and the collection of a large amount of data for parameterization. Commonly, a large proportion of the needed information is collected ...

  4. User Oriented Techniques to Support Interaction and Decision Making with Large Educational Databases

    ERIC Educational Resources Information Center

    Hartley, Roger; Almuhaidib, Saud M. Y.

    2007-01-01

    Information Technology is developing rapidly and providing policy/decision makers with large amounts of information that require processing and analysis. Decision support systems (DSS) aim to provide tools that not only help such analyses, but enable the decision maker to experiment and simulate the effects of different policies and selection…

  5. Watermarking techniques for electronic delivery of remote sensing images

    NASA Astrophysics Data System (ADS)

    Barni, Mauro; Bartolini, Franco; Magli, Enrico; Olmo, Gabriella

    2002-09-01

    Earth observation missions have recently attracted a growing interest, mainly due to the large number of possible applications capable of exploiting remotely sensed data and images. Along with the increase of market potential, the need arises for the protection of the image products. Such a need is a very crucial one, because the Internet and other public/private networks have become preferred means of data exchange. A critical issue arising when dealing with digital image distribution is copyright protection. Such a problem has been largely addressed by resorting to watermarking technology. A question that obviously arises is whether the requirements imposed by remote sensing imagery are compatible with existing watermarking techniques. On the basis of these motivations, the contribution of this work is twofold: assessment of the requirements imposed by remote sensing applications on watermark-based copyright protection, and modification of two well-established digital watermarking techniques to meet such constraints. More specifically, the concept of near-lossless watermarking is introduced and two possible algorithms matching such a requirement are presented. Experimental results are shown to measure the impact of watermark introduction on a typical remote sensing application, i.e., unsupervised image classification.

  6. Copyright protection of remote sensing imagery by means of digital watermarking

    NASA Astrophysics Data System (ADS)

    Barni, Mauro; Bartolini, Franco; Cappellini, Vito; Magli, Enrico; Olmo, Gabriella; Zanini, R.

    2001-12-01

    The demand for remote sensing data has increased dramatically mainly due to the large number of possible applications capable to exploit remotely sensed data and images. As in many other fields, along with the increase of market potential and product diffusion, the need arises for some sort of protection of the image products from unauthorized use. Such a need is a very crucial one even because the Internet and other public/private networks have become preferred and effective means of data exchange. An important issue arising when dealing with digital image distribution is copyright protection. Such a problem has been largely addressed by resorting to watermarking technology. Before applying watermarking techniques developed for multimedia applications to remote sensing applications, it is important that the requirements imposed by remote sensing imagery are carefully analyzed to investigate whether they are compatible with existing watermarking techniques. On the basis of these motivations, the contribution of this work is twofold: (1) assessment of the requirements imposed by the characteristics of remotely sensed images on watermark-based copyright protection; (2) discussion of a case study where the performance of two popular, state-of-the-art watermarking techniques are evaluated by the light of the requirements at the previous point.

  7. Insulating board, hardboard, and other structural fiberboards

    Treesearch

    W. C. Lewis; S. L. Schwartz

    1965-01-01

    The wood-base fiber panel materials are a part of the rapidly evolving technology based on converting lignocellulose to fiber and reconstituting the fiber into large sheets and panels. While some equipment and techniques used are the same as for producing paper, there are enough differences in techniques used and other requirements for manufacture that a separate...

  8. A minimally invasive surgical approach for large cyst-like periapical lesions: a case series.

    PubMed

    Shah, Naseem; Logani, Ajay; Kumar, Vijay

    2014-01-01

Various conservative approaches have been utilized to manage large periapical lesions. This article presents a relatively new, very conservative technique known as surgical fenestration, which is both diagnostic and curative. The technique involves partially excising the cystic lining, gently curetting the cystic cavity, performing copious irrigation, and closing the surgical site. This technique allows for decompression and gives the clinician the freedom to take a biopsy of the lesion, as well as to perform other procedures such as root resection and retrograde sealing, if required. As the procedure does not involve complete excision of the cystic lining, it is both minimally invasive and cost-effective. The technique and the concepts involved are reviewed in 4 cases treated with this novel surgical approach.

  9. Compton imaging tomography technique for NDE of large nonuniform structures

    NASA Astrophysics Data System (ADS)

    Grubsky, Victor; Romanov, Volodymyr; Patton, Ned; Jannson, Tomasz

    2011-09-01

    In this paper we describe a new nondestructive evaluation (NDE) technique called Compton Imaging Tomography (CIT) for reconstructing the complete three-dimensional internal structure of an object, based on the registration of multiple two-dimensional Compton-scattered x-ray images of the object. CIT provides high resolution and sensitivity with virtually any material, including lightweight structures and organics, which normally pose problems in conventional x-ray computed tomography because of low contrast. The CIT technique requires only one-sided access to the object, has no limitation on the object's size, and can be applied to high-resolution real-time in situ NDE of large aircraft/spacecraft structures and components. Theoretical and experimental results will be presented.

  10. Explicit solution techniques for impact with contact constraints

    NASA Technical Reports Server (NTRS)

    Mccarty, Robert E.

    1993-01-01

    Modern military aircraft transparency systems, windshields and canopies, are complex systems which must meet a large and rapidly growing number of requirements. Many of these transparency system requirements are conflicting, presenting difficult balances which must be achieved. One example of a challenging requirements balance or trade is shaping for stealth versus aircrew vision. The large number of requirements involved may be grouped in a variety of areas including man-machine interface; structural integration with the airframe; combat hazards; environmental exposures; and supportability. Some individual requirements by themselves pose very difficult, severely nonlinear analysis problems. One such complex problem is that associated with the dynamic structural response resulting from high energy bird impact. An improved analytical capability for soft-body impact simulation was developed.

  11. Explicit solution techniques for impact with contact constraints

    NASA Astrophysics Data System (ADS)

    McCarty, Robert E.

    1993-08-01

    Modern military aircraft transparency systems, windshields and canopies, are complex systems which must meet a large and rapidly growing number of requirements. Many of these transparency system requirements are conflicting, presenting difficult balances which must be achieved. One example of a challenging requirements balance or trade is shaping for stealth versus aircrew vision. The large number of requirements involved may be grouped in a variety of areas including man-machine interface; structural integration with the airframe; combat hazards; environmental exposures; and supportability. Some individual requirements by themselves pose very difficult, severely nonlinear analysis problems. One such complex problem is that associated with the dynamic structural response resulting from high energy bird impact. An improved analytical capability for soft-body impact simulation was developed.

  12. Weight optimization of ultra large space structures

    NASA Technical Reports Server (NTRS)

    Reinert, R. P.

    1979-01-01

The paper describes the optimization of a solar power satellite structure for minimum mass and system cost. The solar power satellite is an ultra large, low frequency, lightly damped space structure; derivation of its structural design requirements required accommodating gravity gradient torques, which impose the primary loads; a life of up to 100 years in the rigorous geosynchronous orbit radiation environment; and prevention of continuous wave motion in a solar array blanket suspended from a huge, lightly damped structure subject to periodic excitations. The satellite structural design required a parametric study of structural configurations and consideration of fabrication and assembly techniques, which resulted in a final structure that met all requirements at a structural mass fraction of 10%.

  13. Weight and power savings shaft encoder interfacing techniques for aerospace applications

    NASA Technical Reports Server (NTRS)

    Breslow, Donald H.

    1986-01-01

    Many aerospace applications for shaft angle digitizers such as optical shaft encoders require special features that are not usually required on commercial products. Among the most important user considerations are the lowest possible weight and power consumption. A variety of mechanical and electrical interface techniques that have large potential weight and power savings are described. The principles to be presented apply to a wide variety of encoders, ranging from 16 to 22 bit resolution and with diameters from 152 to 380 mm (6 to 15 in.).

  14. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing.

    PubMed

    Xu, Jason; Minin, Vladimir N

    2015-07-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.
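
    The compressed sensing step can be illustrated with a generic sparse-recovery decoder. The sketch below implements orthogonal matching pursuit to recover a sparse vector from a few random linear measurements; it stands in for the recovery stage applied to the (assumed sparse) transition probability mass and is not the authors' specific algorithm.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedy recovery of sparse x from y = A x."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))    # best-matching column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef           # project out support
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(4)
n, m, k = 60, 200, 5                      # measurements, dimension, sparsity
A = rng.standard_normal((n, m)) / np.sqrt(n)
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
x_hat = omp(A, A @ x_true, k)
print(np.max(np.abs(x_hat - x_true)))     # near zero: exact support recovery
```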

  15. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing

    PubMed Central

    Xu, Jason; Minin, Vladimir N.

    2016-01-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes. PMID:26949377

  16. An evaluation of HEMT potential for millimeter-wave signal sources using interpolation and harmonic balance techniques

    NASA Technical Reports Server (NTRS)

    Kwon, Youngwoo; Pavlidis, Dimitris; Tutt, Marcel N.

    1991-01-01

A large-signal analysis method based on a harmonic balance technique and a 2-D cubic spline interpolation function has been developed and applied to the prediction of InP-based HEMT oscillator performance for frequencies extending up to the submillimeter-wave range. The large-signal analysis method uses a limited number of DC and small-signal S-parameter data points and allows the accurate characterization of HEMT large-signal behavior. The method has been validated experimentally using load-pull measurements. Oscillation frequency, power performance, and load requirements are discussed, with an operation capability of 300 GHz predicted using state-of-the-art devices (fmax approximately equal to 450 GHz).
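
    The interpolation ingredient is easy to demonstrate. The sketch below fits a 2-D cubic spline to a synthetic drain-current table over a (Vgs, Vds) bias grid and evaluates it, and a derivative, at off-grid points, the kind of smooth device-characteristic evaluation a harmonic balance loop performs at every trial waveform point. The toy I-V surface is invented for illustration.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Coarse tabulated device data (a stand-in for measured DC/S-parameter tables)
vgs = np.linspace(-1.0, 0.0, 6)
vds = np.linspace(0.0, 2.0, 9)
VG, VD = np.meshgrid(vgs, vds, indexing="ij")
ids = 0.1 * (VG + 1.0) ** 2 * np.tanh(3.0 * VD)     # toy I-V surface (A)

spline = RectBivariateSpline(vgs, vds, ids, kx=3, ky=3)   # 2-D cubic spline

print(spline(-0.4, 1.3)[0, 0])          # interpolated drain current
print(spline(-0.4, 1.3, dx=1)[0, 0])    # d(Ids)/d(Vgs): transconductance
```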

  17. Patterning via optical saturable transitions

    NASA Astrophysics Data System (ADS)

    Cantu, Precious

For the past 40 years, optical lithography has been the patterning workhorse of the semiconductor industry. However, as integrated circuits have become more and more complex, and as device geometries shrink, more innovative methods are required to meet these needs. In the far field, the smallest feature that can be generated with light is limited to approximately half the wavelength. This so-called far-field diffraction limit, or Abbe limit (after Prof. Ernst Abbe, who first recognized it), effectively prevents the use of long-wavelength photons (>300 nm) from patterning nanostructures below 100 nm. Even with a 193 nm laser source and extremely complicated processing, patterns below ~20 nm are incredibly challenging to create. Sources with even shorter wavelengths can potentially be used; however, these tend to be much more expensive and of much lower brightness, which in turn limits their patterning speed. Multi-photon reactions have been proposed to overcome the diffraction limit, but these require very large intensities for a modest gain in resolution. Moreover, the large intensities make the approach difficult to parallelize, again limiting patterning speed. In this dissertation, a novel nanopatterning technique is developed and experimentally verified that uses wavelength-selective small molecules undergoing single-photon reactions, enabling rapid top-down nanopatterning over large areas at low light intensities and thereby circumventing the far-field diffraction barrier. This approach, which I refer to as Patterning via Optical Saturable Transitions (POST), has the potential for massive parallelism, enabling the creation of nanostructures and devices at a speed far surpassing what is currently possible with conventional optical lithographic techniques. The fundamental understanding of this technique goes beyond optical lithography in the semiconductor industry and is applicable to any area that requires the rapid patterning of large-area two- or three-dimensional complex geometries. At a basic level, this research intertwines the fields of electrochemistry, material science, electrical engineering, optics, physics, and mechanical engineering with the goal of developing a novel super-resolution lithographic technique.

  18. Evaluating privacy-preserving record linkage using cryptographic long-term keys and multibit trees on large medical datasets.

    PubMed

    Brown, Adrian P; Borgs, Christian; Randall, Sean M; Schnell, Rainer

    2017-06-08

Integrating medical data from different sources by record linkage is a powerful technique increasingly used in medical research. Under many jurisdictions, unique personal identifiers needed for linking the records are unavailable. Since sensitive attributes, such as names, have to be used instead, privacy regulations usually demand encrypting these identifiers. The corresponding set of techniques for privacy-preserving record linkage (PPRL) has received widespread attention. One recent method is based on Bloom filters. Due to their superior resilience against cryptographic attacks, composite Bloom filters (cryptographic long-term keys, CLKs) are considered best practice for privacy in PPRL. The real-world performance of these techniques on large-scale data has been unknown until now. Using a large subset of Australian hospital admission data, we tested the performance of an innovative PPRL technique (CLKs using multibit trees) against a gold standard derived from clear-text probabilistic record linkage. Linkage time and linkage quality (recall, precision and F-measure) were evaluated. Clear-text probabilistic linkage resulted in marginally higher precision and recall than CLKs. PPRL required more computing time, but 5 million records could still be de-duplicated within one day. However, the PPRL approach required fine tuning of parameters. We argue that the increased privacy of PPRL comes at the price of small losses in precision and recall and a large increase in computational burden and setup time. These costs seem to be acceptable in most applied settings, but they have to be considered in the decision to apply PPRL. Further research on the optimal automatic choice of parameters is needed.
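
    The CLK encoding itself is compact enough to sketch. The toy example below hashes character bigrams from several identifier fields into one Bloom filter and compares records with the Dice coefficient, the usual CLK similarity measure. Real deployments use keyed HMACs and tuned filter parameters; the sizes and hashing scheme here are illustrative only, not a secure implementation.

```python
import hashlib

def clk_encode(fields, m=1024, k=20):
    """Toy cryptographic long-term key: one Bloom filter over all fields."""
    bits = set()
    for field in fields:
        padded = f"_{field.lower()}_"
        for i in range(len(padded) - 1):          # character bigrams
            gram = padded[i:i + 2]
            for seed in range(k):                 # k hash functions
                h = hashlib.sha256(f"{seed}:{gram}".encode()).digest()
                bits.add(int.from_bytes(h[:4], "big") % m)
    return bits

def dice(a, b):
    """Dice coefficient between two bit sets: the usual CLK similarity."""
    return 2 * len(a & b) / (len(a) + len(b))

rec1 = clk_encode(["john", "smith", "1970-01-01"])
rec2 = clk_encode(["jon", "smith", "1970-01-01"])  # typo variant
print(round(dice(rec1, rec2), 3))                  # still highly similar
```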

  19. Characterization of Ultra-fine Grained and Nanocrystalline Materials Using Transmission Kikuchi Diffraction

    PubMed Central

    Proust, Gwénaëlle; Trimby, Patrick; Piazolo, Sandra; Retraint, Delphine

    2017-01-01

One of the challenges in microstructure analysis nowadays resides in the reliable and accurate characterization of ultra-fine grained (UFG) and nanocrystalline materials. The traditional techniques associated with scanning electron microscopy (SEM), such as electron backscatter diffraction (EBSD), do not possess the required spatial resolution due to the large interaction volume between the electrons from the beam and the atoms of the material. Transmission electron microscopy (TEM) has the required spatial resolution. However, due to a lack of automation in the analysis system, the rate of data acquisition is slow, which limits the area of the specimen that can be characterized. This paper presents a new characterization technique, Transmission Kikuchi Diffraction (TKD), which enables the analysis of the microstructure of UFG and nanocrystalline materials using an SEM equipped with a standard EBSD system. The spatial resolution of this technique can reach 2 nm. This technique can be applied to a large range of materials that would be difficult to analyze using traditional EBSD. After presenting the experimental setup and describing the different steps necessary to perform a TKD analysis, examples of its use on metal alloys and minerals are shown to illustrate the resolution of the technique and its flexibility in terms of the materials that can be characterized. PMID:28447998

  20. Characterization of Ultra-fine Grained and Nanocrystalline Materials Using Transmission Kikuchi Diffraction.

    PubMed

    Proust, Gwénaëlle; Trimby, Patrick; Piazolo, Sandra; Retraint, Delphine

    2017-04-01

One of the challenges in microstructure analysis nowadays resides in the reliable and accurate characterization of ultra-fine grained (UFG) and nanocrystalline materials. The traditional techniques associated with scanning electron microscopy (SEM), such as electron backscatter diffraction (EBSD), do not possess the required spatial resolution due to the large interaction volume between the electrons from the beam and the atoms of the material. Transmission electron microscopy (TEM) has the required spatial resolution. However, due to a lack of automation in the analysis system, the rate of data acquisition is slow, which limits the area of the specimen that can be characterized. This paper presents a new characterization technique, Transmission Kikuchi Diffraction (TKD), which enables the analysis of the microstructure of UFG and nanocrystalline materials using an SEM equipped with a standard EBSD system. The spatial resolution of this technique can reach 2 nm. This technique can be applied to a large range of materials that would be difficult to analyze using traditional EBSD. After presenting the experimental setup and describing the different steps necessary to perform a TKD analysis, examples of its use on metal alloys and minerals are shown to illustrate the resolution of the technique and its flexibility in terms of the materials that can be characterized.

  1. Surgical repair of large cyclodialysis clefts.

    PubMed

    Gross, Jacob B; Davis, Garvin H; Bell, Nicholas P; Feldman, Robert M; Blieden, Lauren S

    2017-05-11

    To describe a new surgical technique to effectively close large (>180 degrees) cyclodialysis clefts. Our method involves the use of procedures commonly associated with repair of retinal detachment and complex cataract extraction: phacoemulsification with placement of a capsular tension ring followed by pars plana vitrectomy and gas tamponade with light cryotherapy. We also used anterior segment optical coherence tomography (OCT) as a noninvasive mechanism to determine the extent of the clefts and compared those results with ultrasound biomicroscopy (UBM) and gonioscopy. This technique was used to repair large cyclodialysis clefts in 4 eyes. All 4 eyes had resolution of hypotony and improvement of visual acuity. One patient had an intraocular pressure spike requiring further surgical intervention. Anterior segment OCT imaging in all 4 patients showed a more extensive cleft than UBM or gonioscopy. This technique is effective in repairing large cyclodialysis clefts. Anterior segment OCT more accurately predicted the extent of each cleft, while UBM and gonioscopy both underestimated the size of the cleft.

  2. Dynamic Relaxation: A Technique for Detailed Thermo-Elastic Structural Analysis of Transportation Structures

    NASA Astrophysics Data System (ADS)

    Shoukry, Samir N.; William, Gergis W.; Riad, Mourad Y.; McBride, Kevyn C.

    2006-08-01

Dynamic relaxation is a technique developed to solve static problems through explicit time integration in finite element analysis. The main advantage of such a technique is the ability to solve a large problem in a relatively short time compared with traditional implicit techniques, especially when using nonlinear material models. This paper describes the use of this technique in analyzing large transportation structures such as dowel-jointed concrete pavements and a 306-m-long reinforced concrete bridge superstructure under the effect of temperature variations. The main feature of the pavement model is the detailed modeling of dowel bars and their interfaces with the surrounding concrete using an extremely fine mesh of solid elements, while in the bridge structure it is the detailed modeling of the girder-deck interface as well as the bracing members between the girders. The 3D FE results were found to be in good agreement with experimentally measured data obtained from instrumented pavement sections and a highway bridge constructed in West Virginia. Thus, this technique provides a good tool for analyzing the response of large structures to static loads in a fraction of the time required by traditional, implicit finite element methods.
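
    The core idea is compact enough to sketch: a static system K u = f is solved by explicitly integrating a fictitious damped dynamic system in pseudo-time until motion dies out. The three-spring example and the mass, damping, and step values below are illustrative choices, not the paper's production setup.

```python
import numpy as np

def dynamic_relaxation(K, f, mass=1.0, damping=0.2, dt=0.01, tol=1e-8):
    """Solve K u = f by explicit pseudo-time integration with damping."""
    u = np.zeros(len(f))
    v = np.zeros(len(f))
    for step in range(200_000):
        residual = f - K @ u                    # out-of-balance force
        if np.linalg.norm(residual) < tol:
            return u, step
        a = (residual - damping * v) / mass     # explicit acceleration
        v += dt * a                             # symplectic Euler update
        u += dt * v
    raise RuntimeError("did not converge")

# Toy three-spring chain, fixed at one end, unit load at the free end
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
f = np.array([0.0, 0.0, 1.0])
u, steps = dynamic_relaxation(K, f)
print(u, steps)                  # u matches np.linalg.solve(K, f) = [1, 2, 3]
```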

  3. Laparoendoscopic Single-Site Cholecystectomy: First Experiences with a New Standardized Technique Replicating the Four-Port Technique.

    PubMed

    Morales-Conde, Salvador; Cañete-Gómez, Jesús; Gómez, Virginia; Socas Macías, María; Moreno, Antonio Barranco; Del Agua, Isaias Alarcón; Ruíz, Francisco Javier Padillo

    2016-10-01

After reports on laparoendoscopic single-site (LESS) cholecystectomy, concerns have been raised over the level of difficulty and a potential increase in complications when moving away from conventional gold-standard multiport laparoscopy, due to incomplete exposure and larger umbilical incisions. With the continued development of technique and technology, it has now become possible to fully replicate this gold-standard procedure through an LESS approach. First experiences with the newly developed technique and instrument are reported. Fifteen patients presenting with cholelithiasis without signs of inflammation were operated on using all surgical steps considered appropriate for the conventional four-port laparoscopic approach, but applied through a single access device. Operation-centered outcomes are presented. There were no peri- or postoperative complications. Mean operating time was 32.3 minutes. No conversion to regular laparoscopy was required. The critical view of safety was achieved in all cases. Mean skin incision length was 2.2 cm. The application of a standardized technique combined with the use of a four-port LESS device allows us to perform LESS cholecystectomy with correct exposure of the structures and without increasing the mean operating time, combining previously reported advantages of LESS. A universal trait of any new technique should be safety and reproducibility. This will enhance its applicability by a large number of surgeons and to the large number of patients requiring cholecystectomy.

  4. Space Construction Automated Fabrication Experiment Definition Study (SCAFEDS), part 2

    NASA Technical Reports Server (NTRS)

    1978-01-01

The techniques, processes, and equipment required for the automatic fabrication and assembly of structural elements in space construction, using the Shuttle as a launch vehicle, were defined. Additional construction system operational techniques, processes, and equipment which can be developed and demonstrated in the same program to provide further risk reduction benefits to future large space systems were identified and examined.

  5. Macro-microscopic anatomy: obtaining a composite view of barrier zone formation in Acer saccharum

    Treesearch

    Kenneth Dudzik

    1988-01-01

    The technique for constructing a montage of large wood sections cut on a sliding microtome is discussed. Briefly, the technique involves photographing many serial micrographs in a pattern under a light microscope similar to the way flight lines are run in aerial photography. Assembly of the resulting overlapping photographs requires careful trimming. A composite of...

  6. Minimally Invasive Removal of an Intradural Cervical Tumor : Assessment of a Combined Split-Spinous Laminectomy and Quadrant Tube Retractor System Technique

    PubMed Central

    Kwak, Young-Seok; Cho, Dae-Chul; Kim, Young-Baeg

    2012-01-01

Conventional laminectomy is the most popular technique for the complete removal of intradural spinal tumors. In particular, a centrally located intramedullary tumor or a large intradural extramedullary tumor often requires a total laminectomy for the midline myelotomy, sufficient decompression, and adequate visualization. However, this technique has the disadvantages of a wide incision, extensive periosteal muscle dissection, and bony structural injury. Recently, split-spinous laminectomy and tubular retractor systems were found to decrease postoperative muscle injury, skin incision size and discomfort. The combined technique of split-spinous laminectomy using a quadrant tube retractor system allows for excellent exposure of the tumor with minimal trauma to the surrounding tissue. We propose that this technique offers possible advantages over traditional open tumor removal for intradural spinal cord tumors that cover one or two cervical levels and would otherwise require a total laminectomy. PMID:23133739

  7. Measurement of surface microtopography

    NASA Technical Reports Server (NTRS)

    Wall, S. D.; Farr, T. G.; Muller, J.-P.; Lewis, P.; Leberl, F. W.

    1991-01-01

    Acquisition of ground truth data for use in microwave interaction modeling requires measurement of surface roughness sampled at intervals comparable to a fraction of the microwave wavelength, and extensive enough to adequately represent the statistics of a surface unit. Sub-centimetric measurement accuracy is thus required over large areas, and existing techniques are usually inadequate. A technique is discussed for acquiring the necessary photogrammetric data using twin film cameras mounted on a helicopter. To eliminate tedious data reduction, an automated technique was applied to the helicopter photographs, and the results were compared to those produced by conventional stereogrammetry. The derived root-mean-square (RMS) roughness for the same stereo pair was 7.5 cm for the automated technique versus 6.5 cm for the manual method. The principal source of error is probably vegetation in the scene, which affects the automated technique but is ignored by a human operator.
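
    The RMS roughness figures quoted above are the standard deviation of surface height about a fitted trend. A minimal sketch of that statistic, assuming a detrended 1-D height profile sampled at regular intervals (the profile below is synthetic, not from the study):

```python
import numpy as np

def rms_roughness(heights):
    """RMS surface roughness: deviation of heights about a fitted linear trend."""
    h = np.asarray(heights, dtype=float)
    x = np.arange(h.size)
    # Remove the linear trend so large-scale slope does not inflate roughness.
    residual = h - np.polyval(np.polyfit(x, h, 1), x)
    return np.sqrt(np.mean(residual**2))

# Example: synthetic sloped profile with ~6.5 cm of height scatter (cm units)
rng = np.random.default_rng(0)
profile = 0.05 * np.arange(200) + rng.normal(scale=6.5, size=200)
print(f"RMS roughness: {rms_roughness(profile):.2f} cm")
```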

  8. Matrix Perturbation Techniques in Structural Dynamics

    NASA Technical Reports Server (NTRS)

    Caughey, T. K.

    1973-01-01

    Matrix perturbation techniques are developed which can be used in the dynamical analysis of structures where the range of numerical values in the matrices is extreme, or where the nature of the damping matrix requires that complex-valued eigenvalues and eigenvectors be used. The techniques can be advantageously used in a variety of fields, such as earthquake engineering, ocean engineering, and aerospace engineering, and in other fields concerned with the dynamical analysis of large complex structures or systems of second-order differential equations. A number of simple examples are included to illustrate the techniques.
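
    For the symmetric case, the first-order result underlying such techniques is that an eigenvalue of A + eps*B shifts by approximately eps * v_i^T B v_i, where v_i is the unperturbed eigenvector. A minimal numerical check of that expansion, with random symmetric matrices invented for illustration (the report itself also treats the complex-eigenvalue damped case, not sketched here):

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps = 6, 1e-3
A = rng.normal(size=(n, n)); A = (A + A.T) / 2   # symmetric base matrix
B = rng.normal(size=(n, n)); B = (B + B.T) / 2   # symmetric perturbation

evals, vecs = np.linalg.eigh(A)
# First-order correction: lambda_i(A + eps*B) ~ lambda_i + eps * v_i^T B v_i
first_order = evals + eps * np.sum(vecs * (B @ vecs), axis=0)
exact = np.linalg.eigvalsh(A + eps * B)
print(np.max(np.abs(first_order - exact)))       # O(eps^2): tiny
```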

  9. Model correlation and damage location for large space truss structures: Secant method development and evaluation

    NASA Technical Reports Server (NTRS)

    Smith, Suzanne Weaver; Beattie, Christopher A.

    1991-01-01

    On-orbit testing of a large space structure will be required to complete the certification of any mathematical model for the structure's dynamic response. The process of establishing a mathematical model that matches measured structure response is referred to as model correlation. Most model correlation approaches include an identification technique to determine structural characteristics from measurements of the structure's response. This problem is approached here with one particular class of identification techniques, matrix adjustment methods, which use measured data to produce an optimal update of a structure property matrix, often the stiffness matrix. New identification methods were developed to handle problems of the size and complexity expected for large space structures. Further development and refinement of these secant-method identification algorithms were undertaken. Also, evaluation of these techniques as an approach for model correlation and damage location was initiated.
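
    The report's secant matrix-adjustment algorithms are beyond a short sketch, but the flavor of stiffness-based damage location can be shown with a simpler residual-force indicator: if a measured mode (lambda, phi) does not satisfy the analytical model, the residual r = (K - lambda*M) phi is largest at the degrees of freedom touched by the modeling error. A toy spring-mass chain, with all numbers invented:

```python
import numpy as np

def chain_K(springs):
    """Stiffness matrix of a fixed-free spring-mass chain."""
    n = len(springs)
    K = np.zeros((n, n))
    for i, k in enumerate(springs):
        K[i, i] += k
        if i > 0:                    # spring i joins node i-1 and node i
            K[i - 1, i - 1] += k
            K[i - 1, i] -= k
            K[i, i - 1] -= k
    return K

n = 5
M = np.eye(n)                                # unit masses
K_model = chain_K([1000.0] * n)              # analyst's (undamaged) model
damaged = [1000.0] * n
damaged[2] *= 0.7                            # 30% stiffness loss in spring 3
lam, phi = np.linalg.eigh(chain_K(damaged))  # "measured" modes (valid since M = I)

r = np.abs((K_model - lam[0] * M) @ phi[:, 0])
print(np.round(r, 2))   # large entries at the two DOFs joined by the damaged spring
```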

  10. Momentum flux measurements: Techniques and needs, part 4.5A

    NASA Technical Reports Server (NTRS)

    Fritts, D. C.

    1984-01-01

    The vertical flux of horizontal momentum by internal gravity waves is now recognized to play a significant role in the large-scale circulation and thermal structure of the middle atmosphere. This is because a divergence of momentum flux due to wave dissipation results in an acceleration of the local mean flow towards the phase speed of the gravity wave. Such mean flow accelerations are required to offset the large zonal accelerations driven by Coriolis torques acting on the diabatic meridional circulation. Techniques and observations regarding the momentum flux distribution in the middle atmosphere are discussed.

  11. Application of AIS Technology to Forest Mapping

    NASA Technical Reports Server (NTRS)

    Yool, S. R.; Star, J. L.

    1985-01-01

    Concerns about the environmental effects of large-scale deforestation have prompted efforts to map forests over large areas using various remote sensing data and image processing techniques. Basic research on the spectral characteristics of forest vegetation is required to form a basis for the development of new techniques and for image interpretation. Examination of LANDSAT data and image processing algorithms over a portion of boreal forest has demonstrated the complexity of the relations between the various expressions of forest canopies, environmental variability, and the relative capacities of different image processing algorithms to achieve high classification accuracies under these conditions. Airborne Imaging Spectrometer (AIS) data may in part provide the means to interpret the responses of standard data and techniques to the vegetation, owing to their relatively high spectral resolution.

  12. Correction for intravascular activity in the oxygen-15 steady-state technique is independent of the regional hematocrit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lammertsma, A.A.; Baron, J.C.; Jones, T.

    1987-06-01

    The oxygen-15 steady-state technique to measure the regional cerebral metabolic rate for oxygen requires a correction for the nonextracted intravascular molecular oxygen-15. To perform this correction, an additional procedure is carried out using RBCs labeled with ¹¹CO or C¹⁵O. The previously reported correction method, however, required knowledge of the regional cerebral to large-vessel hematocrit ratio. A closer examination of the underlying model eliminated this ratio. Both molecular oxygen and carbon monoxide are carried by RBCs and are therefore similarly affected by a change in hematocrit.

  13. Improved microseismic event locations through large-N arrays and wave-equation imaging and inversion

    NASA Astrophysics Data System (ADS)

    Witten, B.; Shragge, J. C.

    2016-12-01

    The recent increased focus on small-magnitude seismicity (Mw < 4) has come about primarily for two reasons. First, there is an increase in induced seismicity related to injection operations, primarily wastewater disposal, hydraulic fracturing for oil and gas recovery, and geothermal energy production. While the seismicity associated with injection is sometimes felt, it is more often weak. Some weak events are detected on current sparse arrays; however, accurate location of the events often requires a larger number of (multi-component) sensors. This leads to the second reason for an increased focus on small-magnitude seismicity: a greater number of seismometers are being deployed in large-N arrays. The greater number of sensors decreases the detection threshold and therefore significantly increases the number of weak events found. Overall, these two factors bring new challenges and opportunities. Many standard seismological location and inversion techniques are geared toward large, easily identifiable events recorded on a sparse number of stations. With large-N arrays, however, we can detect small events by utilizing multi-trace processing techniques, and increased processing power equips us with tools that employ more complete physics for simultaneously locating events and inverting for P- and S-wave velocity structure. We present a method that uses large-N arrays and wave-equation-based imaging and inversion to jointly locate earthquakes and estimate the elastic velocities of the earth. The technique requires no picking and is thus suitable for weak events. We validate the methodology through synthetic and field data examples.
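
    The authors' wave-equation imaging is more sophisticated than can be sketched here, but the picking-free idea can be illustrated with a cruder delay-and-stack backprojection in a constant-velocity medium: every image point stacks the traces along its predicted travel-time curve, and only the true source location stacks coherently. All geometry, velocity, and noise levels below are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
v, dt, nt = 3000.0, 0.001, 2000          # velocity (m/s), sampling (s), samples
receivers = np.column_stack([np.linspace(0, 4000, 50), np.zeros(50)])
src = np.array([2200.0, 1500.0])         # true (x, z) of the buried event

t = np.arange(nt) * dt
def ricker(t0, width=0.01):
    a = (t - t0) / width
    return (1 - 2 * a**2) * np.exp(-a**2)

data = np.array([ricker(np.linalg.norm(r - src) / v) for r in receivers])
data += 0.2 * rng.normal(size=data.shape)          # weak event buried in noise

# Backprojection: shift each trace by the trial point's travel time and stack.
xs, zs = np.linspace(0, 4000, 81), np.linspace(500, 2500, 41)
image = np.zeros((len(zs), len(xs)))
for iz, z in enumerate(zs):
    for ix, x in enumerate(xs):
        shifts = (np.linalg.norm(receivers - [x, z], axis=1) / v / dt).astype(int)
        n = nt - shifts.max()
        stack = sum(data[i, s:s + n] for i, s in enumerate(shifts))
        image[iz, ix] = np.abs(stack).max()        # unknown origin time: take best
iz, ix = np.unravel_index(image.argmax(), image.shape)
print(f"estimated source: x={xs[ix]:.0f} m, z={zs[iz]:.0f} m (true {src})")
```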

  14. Consistent detection and identification of individuals in a large camera network

    NASA Astrophysics Data System (ADS)

    Colombo, Alberto; Leung, Valerie; Orwell, James; Velastin, Sergio A.

    2007-10-01

    In the wake of an increasing number of terrorist attacks, counter-terrorism measures are now a main focus of many research programmes. An important issue for the police is the ability to track individuals and groups reliably through underground stations and, in the case of post-event analysis, to be able to ascertain whether specific individuals have been at the station previously. While many motion detection and tracking algorithms exist, their reliable deployment in a large network is still ongoing research. Specifically, to track individuals through multiple views, on multiple levels, and between levels, consistent detection and labelling of individuals is crucial. In view of these issues, we have developed a change detection algorithm that works reliably in the presence of periodic movements, e.g. escalators and scrolling advertisements, as well as a content-based retrieval technique for identification. The change detection technique automatically extracts periodically varying elements in the scene using Fourier analysis and constructs a Markov model for the process. Training is performed online, and no manual intervention is required, making this system suitable for deployment in large networks. Experiments on real data show significant improvement over existing techniques. The content-based retrieval technique uses MPEG-7 descriptors to identify individuals. Given the environment in which the system operates, i.e. at relatively low resolution, this approach is suitable for short timescales. For longer timescales, other forms of identification, such as gait or, if the resolution allows, face recognition, will be required.
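
    A hedged sketch of the Fourier-analysis step only: flag pixels whose intensity history is dominated by a single non-DC frequency (the paper goes further and builds a Markov model on top of this). The frame data and threshold below are synthetic assumptions:

```python
import numpy as np

def periodic_mask(frames, power_ratio=10.0):
    """Flag pixels whose intensity over time is dominated by one non-DC
    frequency, e.g. escalator steps or a scrolling advertisement.
    frames: array of shape (T, H, W)."""
    spec = np.abs(np.fft.rfft(frames - frames.mean(axis=0), axis=0)) ** 2
    spec = spec[1:]                              # drop the (zeroed) DC bin
    return spec.max(axis=0) / (spec.mean(axis=0) + 1e-12) > power_ratio

# Synthetic check: noise frames with a flickering patch at an 8-frame period
rng = np.random.default_rng(0)
frames = rng.normal(size=(64, 32, 32))
flicker = 3 * np.sin(2 * np.pi * np.arange(64) / 8)
frames[:, 10:14, 10:14] += flicker[:, None, None]
print(periodic_mask(frames)[8:16, 8:16].astype(int))   # 1s mark the patch
```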

  15. Task Effects on Linguistic Complexity and Accuracy: A Large-Scale Learner Corpus Analysis Employing Natural Language Processing Techniques

    ERIC Educational Resources Information Center

    Alexopoulou, Theodora; Michel, Marije; Murakami, Akira; Meurers, Detmar

    2017-01-01

    Large-scale learner corpora collected from online language learning platforms, such as the EF-Cambridge Open Language Database (EFCAMDAT), provide opportunities to analyze learner data at an unprecedented scale. However, interpreting the learner language in such corpora requires a precise understanding of tasks: How does the prompt and input of a…

  16. Multi-object investigation using two-wavelength phase-shift interferometry guided by an optical frequency comb

    NASA Astrophysics Data System (ADS)

    Ibrahim, Dahi Ghareab Abdelsalam; Yasui, Takeshi

    2018-04-01

    Two-wavelength phase-shift interferometry guided by optical frequency combs is presented. We demonstrate the operation of the setup on a large-step sample measured simultaneously with a resolution test target bearing a negative pattern. The technique can investigate multiple objects simultaneously with high precision. Using this technique, several important applications in metrology that require high speed and precision are demonstrated.
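
    Two-wavelength operation works because the phase difference between the two wavelengths behaves like a measurement at a much longer synthetic wavelength, extending the unambiguous height range. A minimal sketch with assumed wavelengths (not necessarily those used in the paper):

```python
import numpy as np

lam1, lam2 = 532e-9, 632.8e-9                 # assumed source wavelengths (m)
lam_syn = lam1 * lam2 / abs(lam1 - lam2)      # synthetic wavelength, ~3.34 um
print(f"synthetic wavelength: {lam_syn * 1e6:.2f} um")

def step_height(phi1, phi2):
    """Height from the difference of the two wrapped phases (reflection
    geometry); unambiguous over lam_syn/2 instead of lam/2."""
    dphi = np.mod(phi1 - phi2, 2 * np.pi)
    return lam_syn * dphi / (4 * np.pi)

# A 1.0-um step: each single-wavelength phase wraps, but the beat phase does not
h = 1.0e-6
phi1 = np.mod(4 * np.pi * h / lam1, 2 * np.pi)
phi2 = np.mod(4 * np.pi * h / lam2, 2 * np.pi)
print(f"recovered step: {step_height(phi1, phi2) * 1e6:.3f} um")
```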

  17. PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid

    1998-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.

  18. Freeform diamond machining of complex monolithic metal optics for integral field systems

    NASA Astrophysics Data System (ADS)

    Dubbeldam, Cornelis M.; Robertson, David J.; Preuss, Werner

    2004-09-01

    Implementation of the optical designs of image slicing Integral Field Systems requires accurate alignment of a large number of small (and therefore difficult to manipulate) optical components. In order to facilitate the integration of these complex systems, the Astronomical Instrumentation Group (AIG) of the University of Durham, in collaboration with the Labor für Mikrozerspanung (Laboratory for Precision Machining - LFM) of the University of Bremen, have developed a technique for fabricating monolithic multi-faceted mirror arrays using freeform diamond machining. Using this technique, the inherent accuracy of the diamond machining equipment is exploited to achieve the required relative alignment accuracy of the facets, as well as an excellent optical surface quality for each individual facet. Monolithic arrays manufactured using this freeform diamond machining technique were successfully applied in the Integral Field Unit for the GEMINI Near-InfraRed Spectrograph (GNIRS IFU), which was recently installed at GEMINI South. Details of their fabrication process and optical performance are presented in this paper. In addition, the direction of current development work, conducted under the auspices of the Durham Instrumentation R&D Program supported by the UK Particle Physics and Astronomy Research Council (PPARC), will be discussed. The main emphasis of this research is to improve further the optical performance of diamond machined components, as well as to streamline the production and quality control processes with a view to making this technique suitable for multi-IFU instruments such as KMOS, which require series production of large quantities of optical components.

  19. Autonomous Object Characterization with Large Datasets

    DTIC Science & Technology

    2015-10-18

    desk, where a substantial amount of effort is required to transform raw photometry into a data product, minimizing the amount of time the analyst has...were used to explore concepts in satellite characterization and satellite state change. The first algorithm provides real-time stability estimation... Timely and effective space object (SO) characterization is a challenge, and requires advanced data processing techniques. Detection and identification

  20. Panorama of Reconstruction of Skull Base Defects: From Traditional Open to Endonasal Endoscopic Approaches, from Free Grafts to Microvascular Flaps

    PubMed Central

    Reyes, Camilo; Mason, Eric; Solares, C. Arturo

    2014-01-01

    Introduction A substantial body of literature has been devoted to the distinct characteristics and surgical options for repair of the skull base. However, the skull base is an anatomically challenging location that requires a three-dimensional reconstruction approach. Furthermore, advances in endoscopic skull base surgery encompass a wide range of surgical pathology, from benign tumors to sinonasal cancer. This has resulted in the creation of wide defects that yield a new challenge in skull base reconstruction. Progress in technology and imaging has made this approach an internationally accepted method to repair these defects. Objectives Discuss historical developments and flaps available for skull base reconstruction. Data Synthesis Free grafts in skull base reconstruction are a viable option in small defects and low-flow leaks. Vascularized flaps offer a distinct advantage in large defects and high-flow leaks. When open techniques are used, free flap reconstruction techniques are often necessary to repair large entry wound defects. Conclusions Reconstruction of skull base defects requires a thorough knowledge of surgical anatomy, disease, and the patient risk factors associated with high-flow cerebrospinal fluid leaks. Various reconstruction techniques are available, from free tissue grafting to vascularized flaps. Possible complications that can occur after these procedures need to be considered. Although endonasal techniques are being used with increasing frequency, open techniques are still necessary in selected cases. PMID:25992142

  1. Empirical gradient threshold technique for automated segmentation across image modalities and cell lines.

    PubMed

    Chalfoun, J; Majurski, M; Peskin, A; Breen, C; Bajcsy, P; Brady, M

    2015-10-01

    New microscopy technologies are enabling image acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21,000×21,000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and to several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint, and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meet all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2), and all failed the requirement for robust parameters that do not require re-adjustment with time (requirement 5). We present a novel, empirically derived image gradient threshold selection method for separating foreground and background pixels in an image that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage, and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 megapixels to 850 megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescent. Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference data set with a 10-fold cross-validation method. EGT segments cells or colonies with resulting Dice accuracy index measurements above 0.92 for all cross-validation data sets. EGT results have also been visually verified on a much larger data set that includes bright field and Differential Interference Contrast (DIC) images, 16 cell lines, and 61 time-sequence data sets, for a total of 17,479 images. The method is implemented as an open-source plugin to ImageJ as well as a standalone executable that can be downloaded from https://isg.nist.gov/.
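
    The published EGT threshold comes from an empirically fitted rule over the reference data set; the sketch below substitutes a simple fixed percentile of the gradient-magnitude histogram, so it shows only the general shape of gradient-threshold segmentation, not the EGT rule itself:

```python
import numpy as np
from scipy import ndimage

def gradient_threshold_segment(image, percentile=90.0, min_size=64):
    """Foreground mask via gradient-magnitude thresholding. NOTE: stand-in
    rule -- the published EGT derives its threshold from an empirically
    fitted function of the gradient histogram, not a fixed percentile."""
    img = np.asarray(image, dtype=float)
    grad = np.hypot(ndimage.sobel(img, axis=1), ndimage.sobel(img, axis=0))
    mask = grad > np.percentile(grad, percentile)
    mask = ndimage.binary_fill_holes(mask)
    labels, n = ndimage.label(mask)                    # drop specks < min_size
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return np.isin(labels, np.flatnonzero(sizes >= min_size) + 1)

# usage (hypothetical input): mask = gradient_threshold_segment(phase_contrast_img)
```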

  2. An efficient method for removing point sources from full-sky radio interferometric maps

    NASA Astrophysics Data System (ADS)

    Berger, Philippe; Oppermann, Niels; Pen, Ue-Li; Shaw, J. Richard

    2017-12-01

    A new generation of wide-field radio interferometers designed for 21-cm surveys is being built as drift-scan instruments, allowing them to observe large fractions of the sky. With large numbers of antennas and frequency channels, the enormous instantaneous data rates of these telescopes require novel, efficient data management and analysis techniques. The m-mode formalism exploits the periodicity of such data with the sidereal day, combined with the assumption of statistical isotropy of the sky, to achieve large computational savings and render optimal analysis methods computationally tractable. We present an extension to that work that allows us to adopt a more realistic sky model and treat objects such as bright point sources. We develop a linear procedure for deconvolving maps, using a Wiener filter reconstruction technique, which simultaneously allows filtering of these unwanted components. We construct an algorithm, based on the Sherman-Morrison-Woodbury formula, to efficiently invert the data covariance matrix, as required for any optimal signal-to-noise ratio weighting. The performance of our algorithm is demonstrated using simulations of a cylindrical transit telescope.
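
    The covariance inversion rests on the Sherman-Morrison-Woodbury identity, (A + UCV)^-1 = A^-1 - A^-1 U (C^-1 + V A^-1 U)^-1 V A^-1, which turns one large inversion into a small one when the correction term is low-rank. A self-contained numerical check with an easily invertible diagonal A (all sizes invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 2000, 5                      # big diagonal part + rank-k correction
d = rng.uniform(1, 2, n)            # diag(A): trivially invertible
U = rng.normal(size=(n, k))
C = np.diag(rng.uniform(1, 2, k))

def woodbury_solve(b):
    """Solve (diag(d) + U C U^T) x = b touching only k-by-k dense algebra."""
    Ainv_b = b / d
    Ainv_U = U / d[:, None]
    small = np.linalg.inv(C) + U.T @ Ainv_U          # k x k system
    return Ainv_b - Ainv_U @ np.linalg.solve(small, U.T @ Ainv_b)

b = rng.normal(size=n)
x = woodbury_solve(b)
x_dense = np.linalg.solve(np.diag(d) + U @ C @ U.T, b)   # brute-force check
print(np.max(np.abs(x - x_dense)))                       # ~1e-12
```

    The payoff is that the n-by-n inverse is never formed; only a k-by-k system is solved, which is what makes optimal weighting tractable at interferometer data volumes.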

  3. Feasibility study of a synthesis procedure for array feeds to improve radiation performance of large distorted reflector antennas

    NASA Technical Reports Server (NTRS)

    Stutzman, W. L.; Smith, W. T.

    1990-01-01

    Surface errors on parabolic reflector antennas degrade the overall performance of the antenna. Space antenna structures are difficult to build, deploy, and control; they must maintain a nearly perfect parabolic shape in a harsh environment while remaining lightweight. Electromagnetic compensation for surface errors in large space reflector antennas can be used to supplement mechanical compensation and has been the topic of several research studies. Most of these studies try to correct the focal-plane fields of the reflector near the focal point and, hence, compensate for the distortions over the whole radiation pattern. An alternative approach to electromagnetic compensation is presented. The proposed technique uses pattern synthesis to compensate for the surface errors, with a localized algorithm in which pattern corrections are directed specifically toward portions of the pattern requiring improvement. The pattern synthesis technique does not require knowledge of the reflector surface; it uses radiation pattern data to perform the compensation.

  4. Metal stack optimization for low-power and high-density for N7-N5

    NASA Astrophysics Data System (ADS)

    Raghavan, P.; Firouzi, F.; Matti, L.; Debacker, P.; Baert, R.; Sherazi, S. M. Y.; Trivkovic, D.; Gerousis, V.; Dusa, M.; Ryckaert, J.; Tokei, Z.; Verkest, D.; McIntyre, G.; Ronse, K.

    2016-03-01

    One of the key challenges in scaling logic down to N7 and N5 is the requirement of self-aligned multiple patterning for the metal stack. This comes with a large backend cost, and therefore a careful stack optimization is required. The various layers in the stack serve different purposes, and therefore the choice of pitch and number of layers is critical. Furthermore, at the ultra-scaled dimensions of N7 or N5, the number of patterning options is much larger, ranging from multiple LE and EUV to SADP/SAQP, and the right choice among these is also needed. Patterning techniques that use a full grating of wires, like SADP/SAQP, introduce a high level of metal dummies into the design. This implies a large capacitance penalty, and therefore large performance and power penalties, which is often mitigated with extra masking strategies. This paper discusses a holistic view of metal stack optimization, from the standard cell level all the way to routing, and the corresponding trade-offs that exist in this space.

  5. Fracture toughness testing on ferritic alloys using the electropotential technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, F.H.; Wire, G.L.

    1981-06-11

    Fracture toughness measurements as done conventionally require large specimens (5 x 5 x 2.5 cm) which would be prohibitively expensive to irradiate over the fluence and temperature ranges required for first-wall design. To overcome this difficulty, a single-specimen technique for J-integral fracture toughness measurements on miniature specimens (1.6 cm OD x 0.25 cm thick) was developed. Comparisons with specimens three times as thick show that the derived J_Ic is constant, validating the specimen for first-wall applications. The electropotential technique was used to obtain continuous crack extension measurements, allowing a ductile fracture resistance curve to be constructed from a single specimen. The irradiation test volume required for fracture toughness measurements using both miniature specimens and single-specimen J measurements was reduced by a factor of 320, making it possible to perform a systematic exploration of irradiation temperature and dose variables as required for qualification of HT-9 and 9Cr-1Mo base metal and welds for first-wall application. Fracture toughness test results for HT-9 and 9Cr-1Mo from 25 to 539°C are presented to illustrate the single-specimen technique.

  6. Proactive Security Testing and Fuzzing

    NASA Astrophysics Data System (ADS)

    Takanen, Ari

    Software is bound to have security-critical flaws, and no testing or code auditing can ensure that software is flawless. But software security testing requirements have improved radically during the past years, largely due to criticism from security-conscious consumers and enterprise customers. Whereas in the past security flaws were taken for granted (and patches were quietly and humbly installed), they are now probably one of the most common reasons why people switch vendors or software providers. The maintenance costs of security updates often add up to become one of the biggest cost items for large enterprise users. Fortunately, test automation techniques have also improved. Techniques like model-based testing (MBT) enable efficient generation of security tests that reach good confidence levels in discovering zero-day mistakes in software. This technique is called fuzzing.
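
    As a flavor of the idea, a minimal random-mutation fuzzer is sketched below; real MBT-driven fuzzers generate structured inputs from a protocol model rather than flipping bytes, and `parse_header` is a hypothetical function under test:

```python
import random

def mutate(data: bytes, n_flips: int = 8) -> bytes:
    """Randomly overwrite a few bytes of a valid seed input."""
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(target, seed: bytes, iterations: int = 10_000):
    """Feed mutated inputs to `target`; record inputs that crash it."""
    crashes = []
    for _ in range(iterations):
        sample = mutate(seed)
        try:
            target(sample)
        except Exception as exc:         # uncaught exception = a finding
            crashes.append((sample, exc))
    return crashes

# usage with a hypothetical parser and seed file:
# crashes = fuzz(parse_header, seed=open("valid.bin", "rb").read())
```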

  7. SWAT: Model use, calibration, and validation

    USDA-ARS?s Scientific Manuscript database

    SWAT (Soil and Water Assessment Tool) is a comprehensive, semi-distributed river basin model that requires a large number of input parameters which complicates model parameterization and calibration. Several calibration techniques have been developed for SWAT including manual calibration procedures...

  8. The Development and Hover Test Application of a Projection Moire Interferometry Blade Displacement Measurement System

    NASA Technical Reports Server (NTRS)

    Sekula, Martin K.

    2012-01-01

    Projection moiré interferometry (PMI) was employed to measure blade deflections during a hover test of a generic model-scale rotor in the NASA Langley 14x22 subsonic wind tunnel's hover facility. PMI was one of several optical measurement techniques tasked to acquire deflection and flow visualization data for a rotor at several distinct heights above a ground plane. Two of the main objectives of this test were to demonstrate that multiple optical measurement techniques can be used simultaneously to acquire data and to identify and address deficiencies in the techniques. Several PMI-specific technical challenges needed to be addressed during the test and in post-processing of the data. These challenges included developing an efficient and accurate calibration method for an extremely large (65 inch) height range; automating the analysis of the large amount of data acquired during the test; and developing a method to determine the absolute displacement of rotor blades without a required anchor-point measurement. The results indicate that the use of a single-camera/single-projector approach for the large height range reduced the accuracy of the PMI system compared to PMI systems designed for smaller height ranges. The lack of the anchor-point measurement (due to a technical issue with one of the other measurement techniques) limited the ability of the PMI system to correctly measure blade displacements to only one of the three rotor heights tested. The new calibration technique reduced the data required by 80 percent, while new post-processing algorithms successfully automated the process of locating rotor blades in images, determining the blade quarter-chord location, and calculating the blade root and blade tip heights above the ground plane.

  9. Wind turbine siting: A summary of the state of the art

    NASA Technical Reports Server (NTRS)

    Hiester, T. R.

    1982-01-01

    The process of siting large wind turbines may be divided into two broad steps: site selection and site evaluation. Site selection is the process of locating windy sites where wind energy development shows promise of economic viability. Site evaluation is the process of determining in detail, for a given site, the economic potential of the site. The state of the art in the first aspect of siting, site selection, is emphasized here. Several techniques for assessing the wind resource were explored or developed in the Federal Wind Energy Program. Local topography and meteorology will determine which of the techniques should be used in locating potential sites. None of the techniques can do the job alone, none are foolproof, and all require considerable knowledge and experience to apply correctly. Therefore, efficient siting requires a strategy founded on broad-based application of several techniques, without relying solely on one narrow field of expertise.

  10. Interactive Display of Surfaces Using Subdivision Surfaces and Wavelets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duchaineau, M A; Bertram, M; Porumbescu, S

    2001-10-03

    Complex surfaces and solids are produced by large-scale modeling and simulation activities in a variety of disciplines. Productive interaction with these simulations requires that these surfaces or solids be viewable at interactive rates; yet many of these surfaces and solids can contain hundreds of millions of polygons or polyhedra. Interactive display of these objects requires compression techniques to minimize storage, and fast view-dependent triangulation techniques to drive the graphics hardware. In this paper, we review recent advances in subdivision-surface wavelet compression and optimization that can be used to provide a framework for both compression and triangulation. These techniques can be used to produce suitable approximations of complex surfaces of arbitrary topology, and to determine suitable triangulations for display. The techniques can be used in a variety of applications in computer graphics, computer animation, and visualization.
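
    The mesh-based subdivision-surface wavelets of the paper do not fit in a few lines, but the compress-by-thresholding idea is the same as in this 1-D Haar sketch (signal and threshold invented): transform, drop small detail coefficients, reconstruct.

```python
import numpy as np

def haar_forward(x):
    """One-level Haar transform: coarse averages and detail coefficients.
    Input length must be even."""
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2)
    det = (x[0::2] - x[1::2]) / np.sqrt(2)
    return avg, det

def haar_inverse(avg, det):
    x = np.empty(2 * len(avg))
    x[0::2] = (avg + det) / np.sqrt(2)
    x[1::2] = (avg - det) / np.sqrt(2)
    return x

# Compress a smooth "surface profile": zero out negligible details
t = np.linspace(0, 1, 1024)
profile = np.sin(2 * np.pi * t) + 0.1 * np.sin(12 * np.pi * t)
avg, det = haar_forward(profile)
det[np.abs(det) < 0.01] = 0.0
recon = haar_inverse(avg, det)
print(f"kept {np.count_nonzero(det)} of {det.size} details, "
      f"max error {np.abs(recon - profile).max():.2e}")
```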

  11. Laboratory Spectrometer for Wear Metal Analysis of Engine Lubricants.

    DTIC Science & Technology

    1986-04-01

    analysis, the acid digestion technique for sample pretreatment is the best approach available to date because of its relatively large sample size (1000...microliters or more). However, this technique has two major shortcomings limiting its application: (1) it requires the use of hydrofluoric acid (a...accuracy. Sample preparation including filtration or acid digestion may increase analysis times by 20 minutes or more. b. Repeatability In the analysis

  12. Developing NDE Techniques for Large Cryogenic Tanks

    NASA Technical Reports Server (NTRS)

    Parker, Don; Starr, Stan; Arens, Ellen

    2011-01-01

    The Shuttle Program requires very large cryogenic ground storage tanks in which to store liquid oxygen and hydrogen. The existing Pad A and Pad B Launch Complex 39 tanks, which will be passed on to future launch programs, are 45 years old and have received minimal refurbishment and only external inspections over the years. The majority of the structure is inaccessible without a full drain of the cryogenic liquid and of the granular insulation in the annular region. It was previously thought that there was a limit to the number of temperature cycles the tanks could handle, due to possible insulation compaction, before undergoing a costly and time-consuming complete overhaul; therefore the tanks were not drained, and performance issues with these tanks, specifically the Pad B liquid hydrogen tank, were accepted. There is a need, and an opportunity as the Shuttle program ends and work to upgrade the launch pads progresses, to develop innovative non-destructive evaluation (NDE) techniques to analyze the current tanks. Techniques are desired that can aid in determining the extent of refurbishment required to keep the tanks in service for another 20+ years. A nondestructive technique would also be a significant aid in acceptance testing of new and refurbished tanks, saving significant time and money if corrective actions can be taken before cryogen is introduced to the systems.

  13. Two pass method and radiation interchange processing when applied to thermal-structural analysis of large space truss structures

    NASA Technical Reports Server (NTRS)

    Warren, Andrew H.; Arelt, Joseph E.; Lalicata, Anthony L.; Rogers, Karen M.

    1993-01-01

    A method of efficient and automated thermal-structural processing of very large space structures is presented. The method interfaces the finite element and finite difference techniques. It also results in a pronounced reduction of the quantity of computations, computer resources and manpower required for the task, while assuring the desired accuracy of the results.

  14. Streak Imaging Flow Cytometer for Rare Cell Analysis.

    PubMed

    Balsam, Joshua; Bruck, Hugh Alan; Ossandon, Miguel; Prickril, Ben; Rasooly, Avraham

    2017-01-01

    There is a need for simple and affordable techniques for cytology in clinical applications, especially for point-of-care (POC) medical diagnostics in resource-poor settings. However, this often requires adapting expensive and complex laboratory-based techniques that often require significant power and are too massive to transport easily. One such technique is flow cytometry, which has great potential for modification owing to the simplicity of its principle of optical tracking of cells. However, it is limited in that regard by the flow focusing technique used to isolate cells for optical detection. This technique inherently reduces the flow rate and is therefore unsuitable for rapid detection of rare cells, which requires large volumes for analysis. To address these limitations, we developed a low-cost, mobile flow cytometer based on streak imaging. In our new configuration we utilize a simple webcam for optical detection over a large area associated with a wide-field flow cell. The new flow cell is capable of larger volume and higher throughput fluorescence detection of rare cells than the flow cells with hydrodynamic focusing used in conventional flow cytometry. The webcam is an inexpensive, commercially available system, and for fluorescence analysis we use a 1 W 450 nm blue laser to excite Syto-9-stained cells with emission at 535 nm. We were able to detect low concentrations of stained cells at high flow rates of 10 mL/min, which is suitable for rapidly analyzing larger specimen volumes to detect rare cells at appropriate concentration levels. The new rapid detection capabilities, combined with the simplicity and low cost of this device, suggest a potential for clinical POC flow cytometry in resource-poor settings associated with global health.

  15. Thermographic Imaging of Defects in Anisotropic Composites

    NASA Technical Reports Server (NTRS)

    Plotnikov, Y. A.; Winfree, W. P.

    2000-01-01

    Composite materials are of increasing interest to the aerospace industry as a result of their weight versus performance characteristics. One of the disadvantages of composites is the high cost of fabrication and post-fabrication inspection with conventional ultrasonic scanning systems. The high cost of inspection is driven by the need for scanning systems which can follow large curved surfaces. Additionally, either large water tanks or water squirters are required to couple the ultrasound into the part. Thermographic techniques offer significant advantages over conventional ultrasonics by not requiring physical coupling between the part and the sensor, and a thermographic system can easily inspect a large curved surface without requiring a surface-following scanner. However, implementing thermal nondestructive evaluation (TNDE) for flaw detection in composite materials and structures requires determining its limits. Advanced algorithms have been developed to enable locating and sizing defects in carbon fiber reinforced plastic (CFRP). Thermal tomography is a very promising method for visualizing the size and location of defects in materials such as CFRP, but further investigations are required to determine its capabilities for inspection of thick composites. In the present work we studied the influence of anisotropy on the reconstructed image of a defect generated by an inversion technique. The composite material is considered homogeneous with macroscopic properties: thermal conductivity K, specific heat c, and density rho. The simulation process involves two sequential steps: solving the three-dimensional transient heat diffusion equation for a sample with a defect, then estimating the defect location and size from the surface spatial and temporal thermal distributions calculated in the simulations (the inverse problem).
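
    A sketch of the forward half of that process: an explicit finite-difference step for anisotropic 2-D heat diffusion, as a stand-in for the 3-D model. All material values below are typical-order guesses, and a defect would enter as a locally reduced diffusivity:

```python
import numpy as np

# Explicit FTCS step for anisotropic 2-D diffusion,
# rho*c*dT/dt = Kx*d2T/dx2 + Ky*d2T/dy2, with x along the fibers.
nx = ny = 64
dx = 1e-3                                   # 1 mm grid spacing (m)
Kx, Ky = 5.0, 1.0                           # W/m/K: in-plane vs transverse
rho, c = 1600.0, 1200.0                     # kg/m^3, J/kg/K (assumed CFRP-like)
ax, ay = Kx / (rho * c), Ky / (rho * c)     # diffusivities; a defect would be
                                            # modeled as a local drop in these
dt = 0.2 * dx**2 / (2 * (ax + ay))          # comfortably inside FTCS stability

T = np.zeros((ny, nx))                      # domain edges stay cold (Dirichlet)
T[0, 24:40] = 1.0                           # heated strip on the front face
for _ in range(2000):
    Tn = T.copy()
    T[1:-1, 1:-1] += dt / dx**2 * (
        ax * (Tn[1:-1, 2:] - 2 * Tn[1:-1, 1:-1] + Tn[1:-1, :-2])
        + ay * (Tn[2:, 1:-1] - 2 * Tn[1:-1, 1:-1] + Tn[:-2, 1:-1])
    )
    T[0, 24:40] = 1.0                       # re-impose the heated strip

print("depth profile  :", np.round(T[:6, 32], 3))    # decay into the part
print("lateral spread :", np.round(T[1, 36:48], 3))  # wider, since Kx > Ky
```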

  16. Development and analysis of SCR requirements tables for system scenarios

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Morrison, Jeffery L.

    1995-01-01

    We describe the use of scenarios to develop and refine requirements tables for parts of the Earth Observing System Data and Information System (EOSDIS). The National Aeronautics and Space Administration (NASA) is developing EOSDIS as part of its Mission-To-Planet-Earth (MTPE) project to accept instrument/platform observation requests from end-user scientists, schedule and perform requested observations of the Earth from space, collect and process the observed data, and distribute data to scientists and archives. Current requirements for the system are managed with tools that allow developers to trace the relationships between requirements and other development artifacts, including other requirements. In addition, the user community (e.g., earth and atmospheric scientists), in conjunction with NASA, has generated scenarios describing the actions of EOSDIS subsystems in response to user requests and other system activities. As part of a research effort in verification and validation techniques, this paper describes our efforts to develop requirements tables from these scenarios for the EOSDIS Core System (ECS). The tables specify event-driven mode transitions based on techniques developed by the Naval Research Lab's (NRL) Software Cost Reduction (SCR) project. The SCR approach has proven effective in specifying requirements for large systems in an unambiguous, terse format that enhances identification of incomplete and inconsistent requirements. We describe the development of SCR tables from user scenarios and identify the strengths and weaknesses of our approach in contrast to the requirements-tracing approach. We also evaluate the capabilities of both approaches to respond to the volatility of requirements in large, complex systems.

  17. Evaluation of the cognitive effects of travel technique in complex real and virtual environments.

    PubMed

    Suma, Evan A; Finkelstein, Samantha L; Reid, Myra; V Babu, Sabarish; Ulinski, Amy C; Hodges, Larry F

    2010-01-01

    We report a series of experiments conducted to investigate the effects of travel technique on information gathering and cognition in complex virtual environments. In the first experiment, participants completed a non-branching multilevel 3D maze at their own pace using either real walking or one of two virtual travel techniques. In the second experiment, we constructed a real-world maze with branching pathways and modeled an identical virtual environment. Participants explored either the real or virtual maze for a predetermined amount of time using real walking or a virtual travel technique. Our results across experiments suggest that for complex environments requiring a large number of turns, virtual travel is an acceptable substitute for real walking if the goal of the application involves learning or reasoning based on information presented in the virtual world. However, for applications that require fast, efficient navigation or travel that closely resembles real-world behavior, real walking has advantages over common joystick-based virtual travel techniques.

  18. Infrastructure stability surveillance with high resolution InSAR

    NASA Astrophysics Data System (ADS)

    Balz, Timo; Düring, Ralf

    2017-02-01

    The construction of new infrastructure in largely unknown and difficult environments, as is necessary for the construction of the New Silk Road, can decrease stability along the construction site, increasing the risk of landslides and of deformation caused by surface motion. This generally requires a thorough pre-analysis and consecutive surveillance of the deformation patterns to ensure the stability and safety of the infrastructure projects. Interferometric SAR (InSAR) and the derived multi-baseline InSAR techniques are very powerful tools for large-area observation of surface deformation patterns. With InSAR and derived techniques, the topographic height and the surface motion can be estimated over large areas, making them ideal tools for supporting the planning, construction, and safety surveillance of new infrastructure elements in remote areas.
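
    The quantity InSAR ultimately delivers is line-of-sight displacement from unwrapped differential phase, d = -lambda * dphi / (4*pi), up to a processor-dependent sign convention. A small sketch with an assumed C-band wavelength:

```python
import numpy as np

wavelength = 0.056        # m, assumed C-band (Sentinel-1 class sensors)

def los_displacement(dphi_unwrapped):
    """Unwrapped differential phase (rad) -> line-of-sight motion (m).
    Sign conventions vary; here positive means motion toward the sensor."""
    return -wavelength * dphi_unwrapped / (4 * np.pi)

# One full fringe (2*pi) corresponds to wavelength/2 = 2.8 cm of LOS motion
print(f"{los_displacement(-2 * np.pi) * 100:.1f} cm")
```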

  19. Selection of actuator locations for static shape control of large space structures by heuristic integer programing

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.; Adelman, H. M.

    1984-01-01

    Orbiting spacecraft such as large space antennas have to maintain a highly accurate shape to operate satisfactorily. Such structures require active and passive controls to maintain an accurate shape under a variety of disturbances. Methods for the optimum placement of control actuators for correcting static deformations are described. In particular, attention is focused on the case where control locations have to be selected from a large set of available sites, so that integer programing methods are called for. The effectiveness of three heuristic techniques for obtaining a near-optimal site selection is compared. In addition, efficient reanalysis techniques for the rapid assessment of control effectiveness are presented. Two examples are used to demonstrate the methods: a simple beam structure and a 55-m space-truss parabolic antenna.

  20. Fabrication of near-net shape graphite/magnesium composites for large mirrors

    NASA Astrophysics Data System (ADS)

    Wendt, Robert; Misra, Mohan

    1990-10-01

    Successful development of space-based surveillance and laser systems will require large precision mirrors which are dimensionally stable under thermal, static, and dynamic (i.e., structural vibrations and retargeting) loading conditions. Among the advanced composites under consideration for large space mirrors, graphite fiber reinforced magnesium (Gr/Mg) is an ideal candidate material that can be tailored to obtain an optimum combination of properties, including a high modulus of elasticity, zero coefficient of thermal expansion, low density, and high thermal conductivity. In addition, an innovative technique, combining conventional filament winding and vacuum casting has been developed to produce near-net shape Gr/Mg composites. This approach can significantly reduce the cost of fabricating large mirrors by decreasing required machining. However, since Gr/Mg cannot be polished to a reflective surface, plating is required. This paper will review research at Martin Marietta Astronautics Group on Gr/Mg mirror blank fabrication and measured mechanical and thermal properties. Also, copper plating and polishing methods, and optical surface characteristics will be presented.

  1. Communication architecture for large geostationary platforms

    NASA Technical Reports Server (NTRS)

    Bond, F. E.

    1979-01-01

    Large platforms have been proposed for supporting multipurpose communication payloads to exploit economies of scale, reduce congestion in the geostationary orbit, provide interconnectivity between diverse earth stations, and obtain significant frequency reuse with large multibeam antennas. This paper addresses a specific system design, starting with traffic projections for the next two decades and discussing tradeoffs and design approaches for major components, including antennas, transponders, and switches. Other issues explored are the selection of frequency bands, modulation, multiple access, switching methods, and techniques for servicing areas with nonuniform traffic demands. Three major services are considered: a high-volume trunking system, a direct-to-user system, and a broadcast system for video distribution and similar functions. Estimates of payload weight and d.c. power requirements are presented. Other subjects treated are considerations of equipment layout for servicing by an orbit transfer vehicle, mechanical stability requirements for the large antennas, and reliability aspects of the large number of transponders employed.

  2. An intermediate level of abstraction for computational systems chemistry.

    PubMed

    Andersen, Jakob L; Flamm, Christoph; Merkle, Daniel; Stadler, Peter F

    2017-12-28

    Computational techniques are required for narrowing down the vast space of possibilities to plausible prebiotic scenarios, because precise information on the molecular composition, the dominant reaction chemistry, and the conditions of that era is scarce. The exploration of large chemical reaction networks is a central aspect of this endeavour. While quantum chemical methods can accurately predict the structures and reactivities of small molecules, they are not efficient enough to cope with large-scale reaction systems. The formalization of chemical reactions as graph grammars provides a generative system, well grounded in category theory, at the right level of abstraction for the analysis of large and complex reaction networks. An extension of the basic formalism into the realm of integer hyperflows allows for the identification of complex reaction patterns, such as autocatalysis, in large reaction networks using optimization techniques. This article is part of the themed issue 'Reconceptualizing the origins of life'.

  3. Adding control to arbitrary unknown quantum operations

    PubMed Central

    Zhou, Xiao-Qi; Ralph, Timothy C.; Kalasuwan, Pruet; Zhang, Mian; Peruzzo, Alberto; Lanyon, Benjamin P.; O'Brien, Jeremy L.

    2011-01-01

    Although quantum computers promise significant advantages, the complexity of quantum algorithms remains a major technological obstacle. We have developed and demonstrated an architecture-independent technique that simplifies adding control qubits to arbitrary quantum operations—a requirement in many quantum algorithms, simulations and metrology. The technique, which is independent of how the operation is done, does not require knowledge of what the operation is, and largely separates the problems of how to implement a quantum operation in the laboratory and how to add a control. Here, we demonstrate an entanglement-based version in a photonic system, realizing a range of different two-qubit gates with high fidelity. PMID:21811242

  4. Analysis of scattering behavior and radar penetration in AIRSAR data

    NASA Technical Reports Server (NTRS)

    Rignot, Eric; Van Zyl, Jakob

    1992-01-01

    A technique is presented to physically characterize changes in radar backscatter with frequency in multifrequency single polarization radar images that can be used as a first step in the analysis of the data and the retrieval of geophysical parameters. The technique is automatic, relatively independent of the incidence angle, and only requires a good calibration accuracy between the different frequencies. The technique reveals large areas where scattering changes significantly with frequency and whether the surface has the characteristics of a smooth, slightly rough, rough, or very rough surface.

  5. Evaluation of a rapid diagnostic field test kit for identification of Phytophthora ramorum, P. kernoviae and other Phytophthora species at the point of inspection

    Treesearch

    C.R. Lane; E. Hobden; L. Laurenson; V.C. Barton; K.J.D. Hughes; H. Swan; N. Boonham; A.J. Inman

    2008-01-01

    Plant health regulations to prevent the introduction and spread of Phytophthora ramorum and P. kernoviae require rapid, cost effective diagnostic methods for screening large numbers of plant samples at the time of inspection. Current on-site techniques require expensive equipment, considerable expertise and are not suited for plant...

  6. Antimicrobial residues in animal waste and water resources proximal to large-scale swine and poultry feeding operations

    USGS Publications Warehouse

    Campagnolo, E.R.; Johnson, K.R.; Karpati, A.; Rubin, C.S.; Kolpin, D.W.; Meyer, M.T.; Esteban, J. Emilio; Currier, R.W.; Smith, K.; Thu, K.M.; McGeehin, M.

    2002-01-01

    Expansion and intensification of large-scale animal feeding operations (AFOs) in the United States has resulted in concern about environmental contamination and its potential public health impacts. The objective of this investigation was to obtain background data on a broad profile of antimicrobial residues in animal wastes and surface water and groundwater proximal to large-scale swine and poultry operations. The samples were measured for antimicrobial compounds using both radioimmunoassay and liquid chromatography/electrospray ionization-mass spectrometry (LC/ESI-MS) techniques. Multiple classes of antimicrobial compounds (commonly at concentrations of >100 μg/l) were detected in swine waste storage lagoons. In addition, multiple classes of antimicrobial compounds were detected in surface and groundwater samples collected proximal to the swine and poultry farms. This information indicates that animal waste used as fertilizer for crops may serve as a source of antimicrobial residues for the environment. Further research is required to determine if the levels of antimicrobials detected in this study are of consequence to human and/or environmental ecosystems. A comparison of the radioimmunoassay and LC/ESI-MS analytical methods documented that radioimmunoassay techniques were only appropriate for measuring residues in animal waste samples likely to contain high levels of antimicrobials. More sensitive LC/ESI-MS techniques are required in environmental samples, where low levels of antimicrobial residues are more likely.

  7. Liposuction: Anaesthesia challenges

    PubMed Central

    Sood, Jayashree; Jayaraman, Lakshmi; Sethi, Nitin

    2011-01-01

    Liposuction is one of the most popular treatment modalities in aesthetic surgery, with certain unique anaesthetic considerations. Liposuction is often performed as an office procedure. There are four main types of liposuction techniques based on the volume of infiltration or wetting solution injected: dry, wet, superwet, and tumescent. The tumescent technique is one of the most common liposuction techniques, in which large volumes of dilute local anaesthetic (wetting solution) are injected into the fat to facilitate anaesthesia and decrease blood loss. The amount of lignocaine injected may be very large, approximately 35-55 mg/kg, raising concerns regarding local anaesthetic toxicity. Liposuction can be of two types according to the volume of solution aspirated: high volume (>4,000 ml aspirated) or low volume (<4,000 ml aspirated). While small-volume liposuction may be done under local/monitored anaesthesia care, large-volume liposuction requires general anaesthesia. As a large volume of wetting solution is injected into the subcutaneous tissue, intraoperative fluid management has to be carefully titrated along with haemodynamic monitoring and temperature control. Assessment of blood loss is difficult, as it is mixed with the aspirated fat. Since most obese patients opt for liposuction as a quick method to lose weight, all concerns related to obesity need to be addressed in the preoperative evaluation. PMID:21808392

  8. Visible near-diffraction-limited lucky imaging with full-sky laser-assisted adaptive optics

    NASA Astrophysics Data System (ADS)

    Basden, A. G.

    2014-08-01

    Both lucky imaging techniques and adaptive optics require natural guide stars, limiting sky-coverage, even when laser guide stars are used. Lucky imaging techniques become less successful on larger telescopes unless adaptive optics is used, as the fraction of images obtained with well-behaved turbulence across the whole telescope pupil becomes vanishingly small. Here, we introduce a technique combining lucky imaging techniques with tomographic laser guide star adaptive optics systems on large telescopes. This technique does not require any natural guide star for the adaptive optics, and hence offers full sky-coverage adaptive optics correction. In addition, we introduce a new method for lucky image selection based on residual wavefront phase measurements from the adaptive optics wavefront sensors. We perform Monte Carlo modelling of this technique, and demonstrate I-band Strehl ratios of up to 35 per cent in 0.7 arcsec mean seeing conditions with 0.5 m deformable mirror pitch and full adaptive optics sky-coverage. We show that this technique is suitable for use with lucky imaging reference stars as faint as magnitude 18, and fainter if more advanced image selection and centring techniques are used.
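
    Classic lucky imaging selection, ranking frames by an image-sharpness proxy and co-adding the best few, can be sketched in a few lines; note that the paper's contribution is to rank on residual wavefront phase from the AO wavefront sensors instead, which is not reproduced here:

```python
import numpy as np

def lucky_stack(frames, keep_fraction=0.1):
    """Rank short exposures by a sharpness proxy (peak intensity), keep the
    best fraction, recentre on the brightest pixel, and co-add. np.roll
    wraps at the edges, which is acceptable for a small-shift sketch."""
    frames = np.asarray(frames, dtype=float)
    scores = frames.max(axis=(1, 2))                  # simple Strehl-like proxy
    best = np.argsort(scores)[-max(1, int(len(frames) * keep_fraction)):]
    h, w = frames.shape[1:]
    out = np.zeros((h, w))
    for i in best:
        py, px = np.unravel_index(frames[i].argmax(), (h, w))
        out += np.roll(np.roll(frames[i], h // 2 - py, axis=0),
                       w // 2 - px, axis=1)
    return out / len(best)
```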

  9. Analysis of a crossed Bragg cell acousto-optical spectrometer for SETI

    NASA Technical Reports Server (NTRS)

    Gulkis, S.

    1989-01-01

    The search for radio signals from extraterrestrial intelligent beings (SETI) requires the use of large instantaneous bandwidth (500 MHz) and high resolution (20 Hz) spectrometers. Digital systems with a high degree of modularity can be used to provide this capability, and this method has been widely discussed. Another technique for meeting the SETI requirement is to use a crossed Bragg cell spectrometer as described by Psaltis and Casasent. This technique makes use of the Folded Spectrum concept, introduced by Thomas. The Folded Spectrum is a 2-D Fourier Transform of a raster scanned 1-D signal. It is directly related to the long 1-D spectrum of the original signal and is ideally suited for optical signal processing. The folded spectrum technique has received little attention to date, primarily because early systems made use of photographic film which are unsuitable for the real time data analysis and voluminous data requirements of SETI. An analysis of the crossed Bragg cell spectrometer is presented as a method to achieve the spectral processing requirements for SETI. Systematic noise contributions unique to the Bragg cell system will be discussed.

  12. Photogrammetric techniques for aerospace applications

    NASA Astrophysics Data System (ADS)

    Liu, Tianshu; Burner, Alpheus W.; Jones, Thomas W.; Barrows, Danny A.

    2012-10-01

    Photogrammetric techniques have been used for measuring the important physical quantities in both ground and flight testing including aeroelastic deformation, attitude, position, shape and dynamics of objects such as wind tunnel models, flight vehicles, rotating blades and large space structures. The distinct advantage of photogrammetric measurement is that it is a non-contact, global measurement technique. Although the general principles of photogrammetry are well known particularly in topographic and aerial survey, photogrammetric techniques require special adaptation for aerospace applications. This review provides a comprehensive and systematic summary of photogrammetric techniques for aerospace applications based on diverse sources. It is useful mainly for aerospace engineers who want to use photogrammetric techniques, but it also gives a general introduction for photogrammetrists and computer vision scientists to new applications.

  13. Coarse-Grain Bandwidth Estimation Techniques for Large-Scale Space Network

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Jennings, Esther

    2013-01-01

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.
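
    The sizing logic lends itself to a back-of-the-envelope sketch: each data type contributes roughly its volume divided by its allowed latency to the rate a store-and-forward link must sustain. The traffic figures and the simple sum-with-margin rule below are invented placeholders, not the paper's simulation.

```python
# Hedged sketch of top-down link sizing: every data type contributes
# volume / allowed-latency to the required sustained rate. All figures
# are illustrative.
data_types = [
    # (name, volume per pass in megabits, latency requirement in seconds)
    ("telemetry",      200.0,    60.0),
    ("science_bulk", 50000.0, 86400.0),
    ("voice_video",   1200.0,     1.0),  # near-real-time traffic
]

margin = 1.5  # engineering margin on top of the raw requirement

required_mbps = 0.0
for name, volume_mb, latency_s in data_types:
    rate = volume_mb / latency_s          # Mb/s needed to meet latency
    required_mbps += rate
    print(f"{name:12s}: {rate:10.3f} Mb/s")

print(f"WAN link size with {margin}x margin: {margin * required_mbps:.1f} Mb/s")
```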

  14. Transparent electrode for optical switch

    DOEpatents

    Goldhar, J.; Henesian, M.A.

    1984-10-19

    The invention relates generally to optical switches and techniques for applying a voltage to an electro-optical crystal, and more particularly, to transparent electrodes for an optical switch. System architectures for very large inertial confinement fusion (ICF) lasers require active optical elements with apertures on the order of one meter. Large aperture optical switches are needed for isolation of stages, switch-out from regenerative amplifier cavities and protection from target retroreflections.

  15. Wall Climbing Micro Ground Vehicle (MGV)

    DTIC Science & Technology

    2013-09-01

    magnetic attraction, (2) vacuum suction, (3) bio-mimetic techniques such as gecko pads, and (4) adhesion forces generated by aerodynamic principles, also...large attractive forces, but are limited to ferrous surfaces. Vacuum suction, such as in suction cups, also has the ability to create large adhesion...clean. Vortex adhesion does not require a perfect seal like vacuum suction and has the ability to travel over porous surfaces such as brick and

  16. Using artificial neural networks (ANN) for open-loop tomography

    NASA Astrophysics Data System (ADS)

    Osborn, James; De Cos Juez, Francisco Javier; Guzman, Dani; Butterley, Timothy; Myers, Richard; Guesalaga, Andres; Laine, Jesus

    2011-09-01

    The next generation of adaptive optics (AO) systems requires tomographic techniques in order to correct for atmospheric turbulence along lines of sight separated from the guide stars. Multi-object adaptive optics (MOAO) is one such technique. Here, we present a method which uses an artificial neural network (ANN) to reconstruct the target phase given off-axis reference sources. This method does not require any input of the turbulence profile and is therefore less susceptible to changing conditions than some existing methods. We compare our ANN method with a standard least-squares type matrix multiplication method (MVM) in simulation and find that the tomographic error is similar to the MVM method. In changing conditions the tomographic error increases for MVM but remains constant with the ANN model, and no large matrix inversions are required.
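
    A hedged sketch of the approach, using scikit-learn's MLPRegressor in place of the authors' network: train on synthetic (off-axis slopes, target phase) pairs and then predict the on-axis phase directly, with no turbulence profile supplied. The dimensions and the linear toy "atmosphere" are invented stand-ins for a proper turbulence simulator.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_slopes, n_phase, n_train = 144, 36, 5000  # e.g. 3 guide stars x 48 slopes

# Stand-in "physics" mapping off-axis slopes to on-axis phase coefficients;
# a real system would generate these pairs from simulated turbulence.
A = rng.normal(size=(n_phase, n_slopes))
slopes = rng.normal(size=(n_train, n_slopes))
phase = slopes @ A.T + 0.05 * rng.normal(size=(n_train, n_phase))

net = MLPRegressor(hidden_layer_sizes=(200,), max_iter=500, random_state=0)
net.fit(slopes, phase)

# In operation: feed measured off-axis slopes, get the target phase.
# Unlike an MVM reconstructor, no explicit turbulence profile or large
# matrix inversion is required at run time.
new_slopes = rng.normal(size=(1, n_slopes))
est_phase = net.predict(new_slopes)
print(est_phase.shape)  # (1, 36)
```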

  17. An Eulerian time filtering technique to study large-scale transient flow phenomena

    NASA Astrophysics Data System (ADS)

    Vanierschot, Maarten; Persoons, Tim; van den Bulck, Eric

    2009-10-01

    Unsteady fluctuating velocity fields can contain large-scale periodic motions with frequencies well separated from those of turbulence. Examples are the wake behind a cylinder or the precessing vortex core in a swirling jet. These turbulent flow fields contain large-scale, low-frequency oscillations which are obscured by turbulence, making them difficult to identify. In this paper, we present an Eulerian time filtering (ETF) technique to extract the large-scale motions from unsteady, statistically non-stationary velocity fields or flow fields with multiple phenomena that have sufficiently separated spectral content. The ETF method is based on non-causal time filtering of the velocity records in each point of the flow field. It is shown that the ETF technique gives good results, similar to the ones obtained by the phase-averaging method. In this paper, not only the influence of the temporal filter is checked, but also parameters such as the cut-off frequency and sampling frequency of the data are investigated. The technique is validated on a selected set of time-resolved stereoscopic particle image velocimetry measurements, such as the initial region of an annular jet and the transition between flow patterns in an annular jet. The major advantage of the ETF method in the extraction of large scales is that it is computationally less expensive and requires less measurement time compared to other extraction methods. The technique is therefore suitable in the startup phase of an experiment or in a measurement campaign where several experiments are needed, such as parametric studies.
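
    The core operation, non-causal time filtering of each point's velocity record, can be sketched with a zero-phase Butterworth filter: scipy's filtfilt runs the filter forward and backward, so it is non-causal and introduces no phase distortion. The sampling rate, cut-off frequency and placeholder data below are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0     # PIV sampling frequency, Hz (assumed)
f_cut = 20.0    # cut-off, Hz; must sit between the large-scale
                # frequency and the turbulent range (assumed)

b, a = butter(4, f_cut / (fs / 2))   # 4th-order Butterworth low-pass

# u has shape (n_samples, ny, nx): a time-resolved planar velocity field.
n_samples, ny, nx = 2048, 64, 64
u = np.random.randn(n_samples, ny, nx)  # placeholder for measured data

# filtfilt is applied along the time axis, filtering the history of every
# point at once; the output retains the large-scale, low-frequency motion
# while the turbulent fluctuations are removed.
u_large_scale = filtfilt(b, a, u, axis=0)
```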

  18. [The requirements of standard and conditions of interchangeability of medical articles].

    PubMed

    Men'shikov, V V; Lukicheva, T I

    2013-11-01

    The article considers approaches to evaluating the interchangeability of medical articles for laboratory analysis. In developing standardized analytical technologies for laboratory medicine and formulating standard requirements addressed to manufacturers of medical articles, clinically validated requirements must be followed. These requirements include the sensitivity and specificity of techniques, the accuracy and precision of research results, and the stability of reagent quality under the particular conditions of transportation and storage. The validity of the requirements formulated in standards and addressed to manufacturers of medical articles can be proved using a reference system, which includes master forms and standard samples, reference techniques and reference laboratories. This approach is supported by data from the evaluation of testing systems for measuring levels of thyrotropic hormone, thyroid hormones and glycated hemoglobin HbA1c. Versions of testing systems can be considered interchangeable only if their results correspond to, and are comparable with, the results of the reference technique. In the absence of a functioning reference system, the resources of the Joint Committee for Traceability in Laboratory Medicine make it possible for manufacturers of reagent kits to apply certified reference materials in developing kits for a large list of analytes.

  19. Large Advanced Space Systems (LASS) computer-aided design program additions

    NASA Technical Reports Server (NTRS)

    Farrell, C. E.

    1982-01-01

    The LSS preliminary and conceptual design requires extensive iterative analysis because of the effects of structural, thermal, and control intercoupling. A computer-aided design program that will permit integrating and interfacing of required large space system (LSS) analyses is discussed. The primary objective of this program is the implementation of modeling techniques and analysis algorithms that permit interactive design and tradeoff studies of LSS concepts. Eight software modules were added to the program. The existing rigid body controls module was modified to include solar pressure effects. The new model generator modules and appendage synthesizer module are integrated (interfaced) to permit interactive definition and generation of LSS concepts. The mass properties module permits interactive specification of discrete masses and their locations. The other modules permit interactive analysis of orbital transfer requirements, antenna primary beam characteristics, and attitude control requirements.

  20. Beamforming array techniques for acoustic emission monitoring of large concrete structures

    NASA Astrophysics Data System (ADS)

    McLaskey, Gregory C.; Glaser, Steven D.; Grosse, Christian U.

    2010-06-01

    This paper introduces a novel method of acoustic emission (AE) analysis which is particularly suited for field applications on large plate-like reinforced concrete structures, such as walls and bridge decks. Similar to phased-array signal processing techniques developed for other non-destructive evaluation methods, this technique adapts beamforming tools developed for passive sonar and seismological applications for use in AE source localization and signal discrimination analyses. Instead of relying on the relatively weak P-wave, this method uses the energy-rich Rayleigh wave and requires only a small array of 4-8 sensors. Tests on an in-service reinforced concrete structure demonstrate that the azimuth of an artificial AE source can be determined via this method for sources located up to 3.8 m from the sensor array, even when the P-wave is undetectable. The beamforming array geometry also allows additional signal processing tools to be implemented, such as the VESPA process (VElocity SPectral Analysis), whereby the arrivals of different wave phases are identified by their apparent velocity of propagation. Beamforming AE can reduce sampling rate and time synchronization requirements between spatially distant sensors which in turn facilitates the use of wireless sensor networks for this application.
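
    A hedged sketch of delay-and-sum beamforming for azimuth estimation with a small array, steering on the Rayleigh wave as the abstract describes. The array geometry, wave speed and data below are invented placeholders; a real implementation would work on filtered AE records.

```python
import numpy as np

fs = 100_000.0          # sampling rate, Hz (assumed)
c_r = 2200.0            # assumed Rayleigh wave speed in concrete, m/s

# Sensor coordinates of a compact 6-element surface array (metres).
sensors = np.array([[0.0, 0.0], [0.2, 0.0], [0.4, 0.0],
                    [0.0, 0.2], [0.2, 0.2], [0.4, 0.2]])
signals = np.random.randn(len(sensors), 4096)  # placeholder AE records

best_az, best_power = None, -np.inf
for az in np.deg2rad(np.arange(0, 360, 1)):
    # Slowness vector for a plane Rayleigh wave arriving from azimuth az.
    s = np.array([np.cos(az), np.sin(az)]) / c_r
    delays = sensors @ s                      # seconds, relative to origin
    shifts = np.round(delays * fs).astype(int)
    # Align and sum the traces; coherent energy peaks at the true azimuth.
    stack = sum(np.roll(sig, -k) for sig, k in zip(signals, shifts))
    power = np.sum(stack ** 2)
    if power > best_power:
        best_az, best_power = az, power

print(f"estimated azimuth: {np.rad2deg(best_az):.0f} deg")
```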

  1. Development of polymer nano composite patterns using fused deposition modeling for rapid investment casting process

    NASA Astrophysics Data System (ADS)

    Vivek, Tiwary; Arunkumar, P.; Deshpande, A. S.; Vinayak, Malik; Kulkarni, R. M.; Asif, Angadi

    2018-04-01

    Conventional investment casting is one of the oldest and most economical manufacturing techniques for producing intricate and complex part geometries. However, investment casting is considered economical only if the volume of production is large. Design iterations and design optimisations in this technique prove to be very costly owing to the time and tooling cost of making dies for producing wax patterns. With the advent of additive manufacturing technology, plastic patterns promise very good potential to replace wax patterns. This approach can be very useful for low-volume production and laboratory requirements, since the cost and time required to incorporate changes in the design are very low. This paper discusses the steps involved in developing polymer nanocomposite filaments and checking their suitability for investment casting. The process parameters of the 3D printer are also optimized using the design-of-experiments (DOE) technique to obtain mechanically stronger plastic patterns. The study develops a framework for rapid investment casting for laboratory as well as industrial requirements.

  2. A method of hidden Markov model optimization for use with geophysical data sets

    NASA Technical Reports Server (NTRS)

    Granat, R. A.

    2003-01-01

    Geophysics research has been faced with a growing need for automated techniques with which to process large quantities of data. A successful tool must meet a number of requirements: it should be consistent, require minimal parameter tuning, and produce scientifically meaningful results in reasonable time. We introduce a hidden Markov model (HMM)-based method for analysis of geophysical data sets that attempts to address these issues.
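
    As a hedged illustration (not the authors' optimization scheme), the basic HMM workflow on a 1-D geophysical record can be sketched with the hmmlearn package: fit a two-state Gaussian HMM, then use the decoded state sequence to segment the series into regimes. The synthetic data below stand in for a real time series.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Synthetic record: a "quiet" regime interrupted by an "active" interval.
rng = np.random.default_rng(1)
quiet = rng.normal(0.0, 0.5, size=(500, 1))
active = rng.normal(3.0, 1.5, size=(200, 1))
series = np.vstack([quiet, active, quiet])   # one observation per row

# Fit a 2-state Gaussian HMM; EM estimates means, variances and the
# transition matrix with minimal parameter tuning by the user.
model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=200)
model.fit(series)

states = model.predict(series)   # most likely hidden state per sample
print(np.bincount(states))       # how many samples each regime received
```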

  3. Evaluating waste printed circuit boards recycling: Opportunities and challenges, a mini review.

    PubMed

    Awasthi, Abhishek Kumar; Zlamparet, Gabriel Ionut; Zeng, Xianlai; Li, Jinhui

    2017-04-01

    Rapid generation of waste printed circuit boards has become a very serious issue worldwide. Numerous techniques have been developed in the last decade to resolve the pollution from waste printed circuit boards and to recover valuable metals from the waste stream on a large scale. However, these techniques have their own specific drawbacks that need to be rectified. In this review article, these recycling technologies are evaluated based on a strengths, weaknesses, opportunities and threats (SWOT) analysis. Furthermore, substantial research is required to improve the current technologies for waste printed circuit board recycling in the outlook of large-scale applications.

  4. Applications of multiple-constraint matrix updates to the optimal control of large structures

    NASA Technical Reports Server (NTRS)

    Smith, S. W.; Walcott, B. L.

    1992-01-01

    Low-authority control or vibration suppression in large, flexible space structures can be formulated as a linear feedback control problem requiring computation of displacement and velocity feedback gain matrices. To ensure stability in the uncontrolled modes, these gain matrices must be symmetric and positive definite. In this paper, efficient computation of symmetric, positive-definite feedback gain matrices is accomplished through the use of multiple-constraint matrix update techniques originally developed for structural identification applications. Two systems were used to illustrate the application: a simple spring-mass system and a planar truss. From these demonstrations, use of this multiple-constraint technique is seen to provide a straightforward approach for computing the low-authority gains.
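
    The constraint at the heart of this formulation, symmetric positive-definite gain matrices, can be illustrated with a simple eigenvalue projection. This is a generic SPD repair for a candidate gain, not the authors' multiple-constraint matrix update; sizes and data are invented.

```python
import numpy as np

def make_spd(K, eps=1e-6):
    """Return the nearest convenient SPD version of a candidate gain."""
    K = 0.5 * (K + K.T)                 # enforce symmetry
    w, V = np.linalg.eigh(K)
    w = np.maximum(w, eps)              # enforce positive definiteness
    return V @ np.diag(w) @ V.T

rng = np.random.default_rng(0)
K_raw = rng.normal(size=(6, 6))         # candidate displacement gain
K_d = make_spd(K_raw)

np.linalg.cholesky(K_d)                 # raises LinAlgError if not SPD

# Low-authority feedback then takes the familiar form f = -K_d x - K_v v,
# with both gains symmetric positive definite so uncontrolled modes
# remain stable.
```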

  5. A Fast Method for the Segmentation of Synaptic Junctions and Mitochondria in Serial Electron Microscopic Images of the Brain.

    PubMed

    Márquez Neila, Pablo; Baumela, Luis; González-Soriano, Juncal; Rodríguez, Jose-Rodrigo; DeFelipe, Javier; Merchán-Pérez, Ángel

    2016-04-01

    Recent electron microscopy (EM) imaging techniques permit the automatic acquisition of a large number of serial sections from brain samples. Manual segmentation of these images is tedious, time-consuming and requires a high degree of user expertise. Therefore, there is considerable interest in developing automatic segmentation methods. However, currently available methods are computationally demanding in terms of computer time and memory usage, and to work properly many of them require image stacks to be isotropic, that is, voxels must have the same size in the X, Y and Z axes. We present a method that works with anisotropic voxels and that is computationally efficient allowing the segmentation of large image stacks. Our approach involves anisotropy-aware regularization via conditional random field inference and surface smoothing techniques to improve the segmentation and visualization. We have focused on the segmentation of mitochondria and synaptic junctions in EM stacks from the cerebral cortex, and have compared the results to those obtained by other methods. Our method is faster than other methods with similar segmentation results. Our image regularization procedure introduces high-level knowledge about the structure of labels. We have also reduced memory requirements with the introduction of energy optimization in overlapping partitions, which permits the regularization of very large image stacks. Finally, the surface smoothing step improves the appearance of three-dimensional renderings of the segmented volumes.

  6. A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network

    NASA Astrophysics Data System (ADS)

    Li, Yiming; Bhanu, Bir

    Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.

  7. Image Alignment for Multiple Camera High Dynamic Range Microscopy.

    PubMed

    Eastwood, Brian S; Childs, Elisabeth C

    2012-01-09

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.

  8. Image Alignment for Multiple Camera High Dynamic Range Microscopy

    PubMed Central

    Eastwood, Brian S.; Childs, Elisabeth C.

    2012-01-01

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera. PMID:22545028
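
    A hedged sketch of the strategy the two records above favor: extract and match feature descriptors on radiance ("radiant power") images rather than raw intensities, then fit a homography with RANSAC. The OpenCV calls are standard; the file names and the inverse_response() calibration are hypothetical placeholders for a calibrated camera's measured response curve.

```python
import cv2
import numpy as np

def inverse_response(img, exposure):
    # Placeholder: map pixel values to relative radiant power. A calibrated
    # camera would apply its measured inverse response curve here.
    return (img.astype(np.float32) / 255.0) / exposure

img_a = cv2.imread("camera_a.png", cv2.IMREAD_GRAYSCALE)  # hypothetical files
img_b = cv2.imread("camera_b.png", cv2.IMREAD_GRAYSCALE)
rad_a = inverse_response(img_a, exposure=1 / 60)
rad_b = inverse_response(img_b, exposure=1 / 500)

# ORB features on 8-bit versions of the radiance images; working in
# radiance makes the descriptors robust to the large exposure difference.
to8 = lambda r: cv2.normalize(r, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
orb = cv2.ORB_create(2000)
kp_a, des_a = orb.detectAndCompute(to8(rad_a), None)
kp_b, des_b = orb.detectAndCompute(to8(rad_b), None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_a, des_b)

src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

# Register camera A's radiance image onto camera B's frame before merging.
aligned = cv2.warpPerspective(rad_a, H, rad_b.shape[::-1])
```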

  9. Deriving Earth Science Data Analytics Requirements

    NASA Technical Reports Server (NTRS)

    Kempler, Steven J.

    2015-01-01

    Data Analytics applications have made successful strides in the business world, where co-analyzing extremely large sets of independent variables has proven profitable. Today, most data analytics tools and techniques, sometimes applicable to Earth science, have targeted the business industry. In fact, the literature is nearly absent of discussion about Earth science data analytics. Earth science data analytics (ESDA) is the process of examining large amounts of data from a variety of sources to uncover hidden patterns, unknown correlations, and other useful information. ESDA is most often applied to data preparation, data reduction, and data analysis. Co-analysis of an increasing number and volume of Earth science data has become more prevalent, ushered in by the plethora of Earth science data sources generated by US programs, international programs, field experiments, ground stations, and citizen scientists. Through work associated with the Earth Science Information Partners (ESIP) Federation, ESDA types have been defined in terms of data analytics end goals, goals that are very different from those in business and that require different tools and techniques. A sampling of use cases has been collected and analyzed in terms of data analytics end goal types, volume, specialized processing, and other attributes. The goal of collecting these use cases is to better understand and specify requirements for data analytics tools and techniques yet to be implemented. This presentation will describe the attributes and preliminary findings of ESDA use cases, as well as provide early analysis of data analytics tools/techniques requirements that would support specific ESDA type goals. Representative existing data analytics tools/techniques relevant to ESDA will also be addressed.

  10. Quantum rendering

    NASA Astrophysics Data System (ADS)

    Lanzagorta, Marco O.; Gomez, Richard B.; Uhlmann, Jeffrey K.

    2003-08-01

    In recent years, computer graphics has emerged as a critical component of the scientific and engineering process, and it is recognized as an important computer science research area. Computer graphics are extensively used for a variety of aerospace and defense training systems and by Hollywood's special effects companies. All these applications require the computer graphics systems to produce high quality renderings of extremely large data sets in short periods of time. Much research has been done in "classical computing" toward the development of efficient methods and techniques to reduce the rendering time required for large datasets. Quantum Computing's unique algorithmic features offer the possibility of speeding up some of the known rendering algorithms currently used in computer graphics. In this paper we discuss possible implementations of quantum rendering algorithms. In particular, we concentrate on the implementation of Grover's quantum search algorithm for Z-buffering, ray-tracing, radiosity, and scene management techniques. We also compare the theoretical performance between the classical and quantum versions of the algorithms.
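
    A classical state-vector simulation of Grover's search makes the claimed speed-up concrete. The oracle below simply knows the marked index; a rendering oracle would instead test a geometric predicate (e.g. "does this ray hit this primitive?"). This is an illustration of the algorithm itself, not an implementation from the paper.

```python
import numpy as np

n_qubits = 8
N = 2 ** n_qubits
marked = 173                            # index the oracle recognizes

state = np.full(N, 1 / np.sqrt(N))      # uniform superposition
iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))

for _ in range(iterations):
    state[marked] *= -1                 # oracle: phase-flip the marked item
    mean = state.mean()                 # diffusion: inversion about the mean
    state = 2 * mean - state

probs = state ** 2
print(f"after {iterations} iterations, P(marked) = {probs[marked]:.3f}")
# ~O(sqrt(N)) oracle calls, versus O(N) for a classical linear search.
```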

  11. Simulation of FRET dyes allows quantitative comparison against experimental data

    NASA Astrophysics Data System (ADS)

    Reinartz, Ines; Sinner, Claude; Nettels, Daniel; Stucki-Buchli, Brigitte; Stockmar, Florian; Panek, Pawel T.; Jacob, Christoph R.; Nienhaus, Gerd Ulrich; Schuler, Benjamin; Schug, Alexander

    2018-03-01

    Fully understanding biomolecular function requires detailed insight into the systems' structural dynamics. Powerful experimental techniques such as single molecule Förster Resonance Energy Transfer (FRET) provide access to such dynamic information yet have to be carefully interpreted. Molecular simulations can complement these experiments but typically face limits in accessing slow time scales and large or unstructured systems. Here, we introduce a coarse-grained simulation technique that tackles these challenges. While requiring only few parameters, we maintain full protein flexibility and include all heavy atoms of proteins, linkers, and dyes. We are able to sufficiently reduce computational demands to simulate large or heterogeneous structural dynamics and ensembles on slow time scales found in, e.g., protein folding. The simulations allow for calculating FRET efficiencies which quantitatively agree with experimentally determined values. By providing atomically resolved trajectories, this work supports the planning and microscopic interpretation of experiments. Overall, these results highlight how simulations and experiments can complement each other leading to new insights into biomolecular dynamics and function.
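
    The experimental observable being matched is the FRET efficiency, which for a donor-acceptor distance r follows the standard Foerster relation E = 1 / (1 + (r/R0)^6). A short sketch, with an assumed Foerster radius and stand-in distances in place of actual trajectory output:

```python
import numpy as np

R0 = 5.4  # Foerster radius in nm (dye-pair specific, assumed)

def fret_efficiency(r_nm):
    return 1.0 / (1.0 + (r_nm / R0) ** 6)

# Stand-in for inter-dye distances sampled along a simulated trajectory.
rng = np.random.default_rng(2)
distances = rng.normal(5.0, 0.8, size=10_000).clip(min=0.1)

# Averaging E over the sampled ensemble gives a value directly comparable
# to a single-molecule FRET measurement.
mean_E = fret_efficiency(distances).mean()
print(f"ensemble-averaged FRET efficiency: {mean_E:.3f}")
```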

  12. Mathematics and mallard management

    USGS Publications Warehouse

    Cowardin, L.M.; Johnson, D.H.

    1979-01-01

    Waterfowl managers can effectively use simple population models to aid in making management decisions. We present a basic model of the change in population size as related to survival and recruitment. A management technique designed to increase survival of mallards (Anas platyrhynchos) by limiting harvest on the Chippewa National Forest, Minnesota, is used to illustrate the application of models in decision making. The analysis suggests that the management technique would be of limited effectiveness. In a 2nd example, the change in mallard population in central North Dakota is related to implementing programs to create dense nesting cover with or without supplementary predator control. The analysis suggests that large tracts of land would be required to achieve a hypothetical management objective of increasing harvest by 50% while maintaining a stable population. Less land would be required if predator reduction were used in combination with cover management, but questions about effectiveness and ecological implications of large scale predator reduction remain unresolved. The use of models as a guide to planning research responsive to the needs of management is illustrated.
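
    The basic model of population change from survival and recruitment invites a worked example. The functional form and all rates below are illustrative, not the authors' fitted values for mallards.

```python
# Hedged sketch: next year's breeding population from adult survival plus
# recruits that survive to breed. A stable population needs
# adult_survival + recruitment * juvenile_survival ~= 1.
def project(n, adult_survival, recruitment, juvenile_survival, years):
    history = [n]
    for _ in range(years):
        n = n * adult_survival + n * recruitment * juvenile_survival
        history.append(n)
    return history

# Management that raises survival (harvest limits) or recruitment
# (nesting cover, predator control) shifts the trajectory.
for label, s_ad in [("baseline", 0.55), ("reduced harvest", 0.62)]:
    traj = project(1000.0, s_ad, recruitment=0.9,
                   juvenile_survival=0.5, years=5)
    print(label, [round(x) for x in traj])
```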

  13. Advances in Parallelization for Large Scale Oct-Tree Mesh Generation

    NASA Technical Reports Server (NTRS)

    O'Connell, Matthew; Karman, Steve L.

    2015-01-01

    Despite great advancements in the parallelization of numerical simulation codes over the last 20 years, it is still common to perform grid generation in serial. Generating large scale grids in serial often requires using special "grid generation" compute machines that can have more than ten times the memory of average machines. While some parallel mesh generation techniques have been proposed, generating very large meshes for LES or aeroacoustic simulations is still a challenging problem. An automated method for the parallel generation of very large scale off-body hierarchical meshes is presented here. This work enables large scale parallel generation of off-body meshes by using a novel combination of parallel grid generation techniques and a hybrid "top down" and "bottom up" oct-tree method. Meshes are generated using hardware commonly found in parallel compute clusters. The capability to generate very large meshes is demonstrated by the generation of off-body meshes surrounding complex aerospace geometries. Results are shown including a one billion cell mesh generated around a Predator Unmanned Aerial Vehicle geometry, which was generated on 64 processors in under 45 minutes.
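
    A hedged, serial sketch of the "top down" half of the approach: recursively split any cell that intersects a region needing resolution until a size target is met. The sphere-based refinement test is a stand-in for real geometry queries, and the parallel and "bottom up" machinery of the paper is omitted.

```python
def refine(cell, target_size):
    """Recursively subdivide an octree cell (x, y, z, size)."""
    x, y, z, size = cell
    # Hypothetical test: refine cells near a unit sphere at the origin,
    # standing in for proximity to an aircraft surface.
    cx, cy, cz = x + size / 2, y + size / 2, z + size / 2
    dist = (cx**2 + cy**2 + cz**2) ** 0.5
    needs_refinement = abs(dist - 1.0) < size and size > target_size
    if not needs_refinement:
        return [cell]
    half = size / 2
    children = [(x + i * half, y + j * half, z + k * half, half)
                for i in (0, 1) for j in (0, 1) for k in (0, 1)]
    out = []
    for child in children:
        out.extend(refine(child, target_size))
    return out

leaves = refine((-2.0, -2.0, -2.0, 4.0), target_size=0.25)
print(len(leaves), "leaf cells")
```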

  14. The optimisation, design and verification of feed horn structures for future Cosmic Microwave Background missions

    NASA Astrophysics Data System (ADS)

    McCarthy, Darragh; Trappe, Neil; Murphy, J. Anthony; O'Sullivan, Créidhe; Gradziel, Marcin; Doherty, Stephen; Huggard, Peter G.; Polegro, Arturo; van der Vorst, Maarten

    2016-05-01

    In order to investigate the origins of the Universe, it is necessary to carry out full sky surveys of the temperature and polarisation of the Cosmic Microwave Background (CMB) radiation, the remnant of the Big Bang. Missions such as COBE and Planck have previously mapped the CMB temperature, however in order to further constrain evolutionary and inflationary models, it is necessary to measure the polarisation of the CMB with greater accuracy and sensitivity than before. Missions undertaking such observations require large arrays of feed horn antennas to feed the detector arrays. Corrugated horns provide the best performance, however owing to the large number required (circa 5000 in the case of the proposed COrE+ mission), such horns are prohibitive in terms of thermal, mechanical and cost limitations. In this paper we consider the optimisation of an alternative smooth-walled piecewise conical profiled horn, using the mode-matching technique alongside a genetic algorithm. The technique is optimised to return a suitable design using efficient modelling software and standard desktop computing power. A design is presented showing a directional beam pattern and low levels of return loss, cross-polar power and sidelobes, as required by future CMB missions. This design is manufactured and the measured results compared with simulation, showing excellent agreement and meeting the required performance criteria. The optimisation process described here is robust and can be applied to many other applications where specific performance characteristics are required, with the user simply defining the beam requirements.
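
    The optimization loop (though not the mode-matching physics) can be sketched as a plain genetic algorithm over the section radii of a piecewise-conical profile. The quadratic cost below is a placeholder for the electromagnetic figure of merit the paper computes (return loss, cross-polar power, sidelobe level); everything here is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sections, pop_size, n_gen = 12, 40, 100
target = np.linspace(4.0, 12.0, n_sections)   # stand-in "ideal" profile, mm

def cost(profile):
    # Placeholder for a full mode-matching evaluation of the horn.
    return np.sum((profile - target) ** 2)

pop = rng.uniform(3.0, 15.0, size=(pop_size, n_sections))
for _ in range(n_gen):
    order = np.argsort([cost(p) for p in pop])
    parents = pop[order[: pop_size // 2]]             # selection: keep best half
    children = parents.copy()
    cut = rng.integers(1, n_sections, size=len(children))
    for c, k, mate in zip(children, cut, parents[::-1]):
        c[k:] = mate[k:]                              # one-point crossover
    children += rng.normal(0.0, 0.1, children.shape)  # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin([cost(p) for p in pop])]
print("best cost:", cost(best))
```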

  15. Method and apparatus for phase for and amplitude detection

    DOEpatents

    Cernosek, Richard W.; Frye, Gregory C.; Martin, Stephen J.

    1998-06-09

    A new class of techniques has been developed which allows inexpensive application of SAW-type chemical sensor devices while retaining high sensitivity (ppm) for chemical detection. The new techniques do not require that the sensor be part of an oscillatory circuit, allowing large concentrations of, e.g., chemical vapors in air to be accurately measured without compromising the capacity to measure trace concentrations. Such devices have numerous potential applications in environmental monitoring, from manufacturing environments to environmental restoration.

  16. The value of job analysis, job description and performance.

    PubMed

    Wolfe, M N; Coggins, S

    1997-01-01

    All companies, regardless of size, are faced with the same employment concerns. Efficient personnel management requires the use of three human resource techniques--job analysis, job description and performance appraisal. These techniques and tools are not for large practices only. Small groups can obtain the same benefits by employing these performance control measures. Job analysis allows for the development of a compensation system. Job descriptions summarize the most important duties. Performance appraisals help reward outstanding work.

  17. Ray Tracing Methods in Seismic Emission Tomography

    NASA Astrophysics Data System (ADS)

    Chebotareva, I. Ya.

    2018-03-01

    Highly efficient approximate ray tracing techniques which can be used in seismic emission tomography and in other methods requiring a large number of raypaths are described. The techniques are applicable for the gradient and plane-layered velocity sections of the medium and for the models with a complicated geometry of contrasting boundaries. The empirical results obtained with the use of the discussed ray tracing technologies and seismic emission tomography results, as well as the results of numerical modeling, are presented.
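
    For a plane-layered velocity model, the efficiency comes from Snell's law: the horizontal slowness is conserved along the ray, so offset and traveltime accumulate layer by layer with no iterative search. A minimal sketch with invented layer properties:

```python
import numpy as np

v = np.array([1500.0, 2200.0, 3100.0])   # layer velocities, m/s (assumed)
h = np.array([300.0, 500.0, 800.0])      # layer thicknesses, m (assumed)

def trace(takeoff_deg):
    """Offset and traveltime of a downgoing ray through the stack."""
    p = np.sin(np.radians(takeoff_deg)) / v[0]   # ray parameter, s/m
    offset = time = 0.0
    for vi, hi in zip(v, h):
        s = p * vi                               # sin(incidence) in layer
        if s >= 1.0:                             # ray turns back: no path
            return None
        cos_i = np.sqrt(1.0 - s * s)
        offset += hi * s / cos_i                 # horizontal advance
        time += hi / (vi * cos_i)                # traveltime in layer
    return offset, time

x, t = trace(15.0)
print(f"offset {x:.1f} m, traveltime {t:.3f} s at the bottom of the stack")
```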

  18. Overlay metrology for double patterning processes

    NASA Astrophysics Data System (ADS)

    Leray, Philippe; Cheng, Shaunee; Laidler, David; Kandel, Daniel; Adel, Mike; Dinu, Berta; Polli, Marco; Vasconi, Mauro; Salski, Bartlomiej

    2009-03-01

    The double patterning (DPT) process is foreseen by the industry to be the main solution for the 32 nm technology node and even beyond. Meanwhile process compatibility has to be maintained and the performance of overlay metrology has to improve. To achieve this for Image Based Overlay (IBO), usually the optics of overlay tools are improved. It was also demonstrated that these requirements are achievable with a Diffraction Based Overlay (DBO) technique named SCOL™ [1]. In addition, we believe that overlay measurements with respect to a reference grid are required to achieve the required overlay control [2]. This induces at least a three-fold increase in the number of measurements (2 for double patterned layers to the reference grid and 1 between the double patterned layers). The requirements of process compatibility, enhanced performance and large number of measurements make the choice of overlay metrology for DPT very challenging. In this work we use different flavors of the standard overlay metrology technique (IBO) as well as the new technique (SCOL) to address these three requirements. The compatibility of the corresponding overlay targets with double patterning processes (Litho-Etch-Litho-Etch (LELE); Litho-Freeze-Litho-Etch (LFLE), Spacer defined) is tested. The process impact on different target types is discussed (CD bias LELE, Contrast for LFLE). We compare the standard imaging overlay metrology with non-standard imaging techniques dedicated to double patterning processes (multilayer imaging targets allowing one overlay target instead of three, very small imaging targets). In addition to standard designs already discussed [1], we investigate SCOL target designs specific to double patterning processes. The feedback to the scanner is determined using the different techniques. The final overlay results obtained are compared accordingly. We conclude with the pros and cons of each technique and suggest the optimal metrology strategy for overlay control in double patterning processes.

  19. Three axis vector atomic magnetometer utilizing polarimetric technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pradhan, Swarupananda, E-mail: spradhan@barc.gov.in, E-mail: pradhans75@gmail.com

    2016-09-15

    The three axis vector magnetic field measurement based on the interaction of a single elliptically polarized light beam with an atomic system is described. The magnetic field direction dependent atomic responses are extracted by polarimetric detection in combination with laser frequency modulation and magnetic field modulation techniques. The magnetometer geometry satisfies additional critical requirements, such as compact size and large dynamic range, for space application. Further, the three axis magnetic field is measured using only the reflected signal (one polarization component) from the polarimeter and thus can be easily expanded to make a spatial array of detectors and/or high sensitivity field gradient measurement as required for biomedical application.

  20. Principles for system level electrochemistry

    NASA Technical Reports Server (NTRS)

    Thaller, L. H.

    1986-01-01

    The higher power and higher voltage levels anticipated for future space missions have required a careful review of the techniques currently in use to preclude battery problems that are related to the dispersion characteristics of the individual cells. Not only are the out-of-balance problems accentuated in these larger systems, but the thermal management considerations also require a greater degree of accurate design. Newer concepts which employ active cooling techniques are being developed which permit higher rates of discharge and tighter packing densities for the electrochemical components. This paper will put forward six semi-independent principles relating to battery systems. These principles will progressively address cell, battery and finally system related aspects of large electrochemical storage systems.

  1. Redefining genomic privacy: trust and empowerment.

    PubMed

    Erlich, Yaniv; Williams, James B; Glazer, David; Yocum, Kenneth; Farahany, Nita; Olson, Maynard; Narayanan, Arvind; Stein, Lincoln D; Witkowski, Jan A; Kain, Robert C

    2014-11-01

    Fulfilling the promise of the genetic revolution requires the analysis of large datasets containing information from thousands to millions of participants. However, sharing human genomic data requires protecting subjects from potential harm. Current models rely on de-identification techniques in which privacy versus data utility becomes a zero-sum game. Instead, we propose the use of trust-enabling techniques to create a solution in which researchers and participants both win. To do so we introduce three principles that facilitate trust in genetic research and outline one possible framework built upon those principles. Our hope is that such trust-centric frameworks provide a sustainable solution that reconciles genetic privacy with data sharing and facilitates genetic research.

  2. Redefining Genomic Privacy: Trust and Empowerment

    PubMed Central

    Erlich, Yaniv; Williams, James B.; Glazer, David; Yocum, Kenneth; Farahany, Nita; Olson, Maynard; Narayanan, Arvind; Stein, Lincoln D.; Witkowski, Jan A.; Kain, Robert C.

    2014-01-01

    Fulfilling the promise of the genetic revolution requires the analysis of large datasets containing information from thousands to millions of participants. However, sharing human genomic data requires protecting subjects from potential harm. Current models rely on de-identification techniques in which privacy versus data utility becomes a zero-sum game. Instead, we propose the use of trust-enabling techniques to create a solution in which researchers and participants both win. To do so we introduce three principles that facilitate trust in genetic research and outline one possible framework built upon those principles. Our hope is that such trust-centric frameworks provide a sustainable solution that reconciles genetic privacy with data sharing and facilitates genetic research. PMID:25369215

  3. A Fast MoM Solver (GIFFT) for Large Arrays of Microstrip and Cavity-Backed Antennas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fasenfest, B J; Capolino, F; Wilton, D

    2005-02-02

    A straightforward numerical analysis of large arrays of arbitrary contour (and possibly missing elements) requires large memory storage and long computation times. Several techniques are currently under development to reduce this cost. One such technique is the GIFFT (Green's function interpolation and FFT) method discussed here that belongs to the class of fast solvers for large structures. This method uses a modification of the standard AIM approach [1] that takes into account the reusability properties of matrices that arise from identical array elements. If the array consists of planar conducting bodies, the array elements are meshed using standard subdomain basis functions, such as the RWG basis. The Green's function is then projected onto a sparse regular grid of separable interpolating polynomials. This grid can then be used in a 2D or 3D FFT to accelerate the matrix-vector product used in an iterative solver [2]. The method has been proven to greatly reduce solve time by speeding up the matrix-vector product computation. The GIFFT approach also reduces fill time and memory requirements, since only the near element interactions need to be calculated exactly. The present work extends GIFFT to layered material Green's functions and multiregion interactions via slots in ground planes. In addition, a preconditioner is implemented to greatly reduce the number of iterations required for a solution. The general scheme of the GIFFT method is reported in [2]; this contribution is limited to presenting new results for array antennas made of slot-excited patches and cavity-backed patch antennas.
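
    The kernel of the speed-up can be sketched in one dimension: interactions between identical elements on a regular grid form a Toeplitz matrix, whose matrix-vector product costs O(N log N) via circulant embedding and the FFT instead of O(N^2). The scalar interaction kernel below is a stand-in for the projected Green's function.

```python
import numpy as np

N, dx = 512, 0.05
r = np.arange(1, 2 * N) * dx            # separations for all offsets
g = np.exp(-r) / r                      # placeholder interaction kernel

# First column of the symmetric Toeplitz matrix A[m, n] = g(|m - n| dx),
# with the self term set to zero (handled exactly in a real solver).
col = np.concatenate(([0.0], g[:N - 1]))

x = np.random.randn(N)                  # element excitation coefficients

# Circulant embedding: a length-2N sequence whose circular convolution
# with zero-padded x reproduces A @ x in its first N entries.
c = np.concatenate([col, [0.0], col[1:][::-1]])
y_fast = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, 2 * N)).real[:N]

# Check against the dense product (the O(N^2) path the FFT replaces).
A = np.array([[col[abs(m - n)] for n in range(N)] for m in range(N)])
print(np.allclose(y_fast, A @ x))       # True
```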

  4. Overview of Dynamic Test Techniques for Flight Dynamics Research at NASA LaRC (Invited)

    NASA Technical Reports Server (NTRS)

    Owens, D. Bruce; Brandon, Jay M.; Croom, Mark A.; Fremaux, C. Michael; Heim, Eugene H.; Vicroy, Dan D.

    2006-01-01

    An overview of dynamic test techniques used at NASA Langley Research Center on scale models to obtain a comprehensive flight dynamics characterization of aerospace vehicles is presented. Dynamic test techniques have been used at Langley Research Center since the 1920s. This paper will provide a partial overview of the current techniques available at Langley Research Center. The paper will discuss the dynamic scaling necessary to address the often hard-to-achieve similitude requirements for these techniques. Dynamic test techniques are categorized as captive, wind tunnel single degree-of-freedom and free-flying, and outside free-flying. The test facilities, technique specifications, data reduction, issues and future work are presented for each technique. The battery of tests conducted using the Blended Wing Body aircraft serves to illustrate how the techniques, when used together, are capable of characterizing the flight dynamics of a vehicle over a large range of critical flight conditions.
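
    The dynamic scaling the paper refers to is commonly handled with Froude scaling, under which each quantity carries its own power of the geometric scale factor n (equal air density assumed). A small sketch of the standard relations, with invented full-scale values:

```python
# Hedged sketch of standard Froude scaling for a dynamically scaled
# free-flight model; these are the textbook relations, stated for
# illustration rather than taken from the paper.
def froude_scale(n, length, velocity, time, mass, inertia):
    return {
        "length":   length / n,          # geometric scale
        "velocity": velocity / n ** 0.5, # Froude number preserved
        "time":     time / n ** 0.5,     # model motions run sqrt(n) faster
        "mass":     mass / n ** 3,       # equal-density assumption
        "inertia":  inertia / n ** 5,
    }

# Example: a hypothetical 1/5-scale free-flight model.
full = dict(length=20.0, velocity=60.0, time=1.0, mass=5000.0, inertia=8.0e4)
print(froude_scale(5.0, **full))
```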

  5. Fabrication of sinterable silicon nitride by injection molding

    NASA Technical Reports Server (NTRS)

    Quackenbush, C. L.; French, K.; Neil, J. T.

    1982-01-01

    Transformation of structural ceramics from the laboratory to production requires development of near-net-shape fabrication techniques which minimize finish grinding. One potential technique for producing large quantities of complex-shaped parts at low cost is injection molding. Binder selection methodology, compounding of ceramic and binder components, injection molding techniques, and problems in binder removal are discussed. Strength, oxidation resistance, and microstructure of sintered silicon nitride fabricated by injection molding are discussed and compared to data generated from isostatically dry-pressed material.

  6. Optical smart packaging to reduce transmitted information.

    PubMed

    Cabezas, Luisa; Tebaldi, Myrian; Barrera, John Fredy; Bolognini, Néstor; Torroba, Roberto

    2012-01-02

    We demonstrate a smart image-packaging optical technique that uses what we believe is a new concept to save byte space when transmitting data. The technique supports a large set of images mapped into modulated speckle patterns, which are then multiplexed into a single package. This operation results in a substantial decrease in the final number of bytes in the package with respect to the amount resulting from adding the images without using the method. Moreover, there are no requirements on the type of images to be processed. We present results that prove the potential of the technique.

  7. Development and verification of local/global analysis techniques for laminated composites

    NASA Technical Reports Server (NTRS)

    Griffin, O. Hayden, Jr.

    1989-01-01

    Analysis and design methods for laminated composite materials have been the subject of considerable research over the past 20 years, and are currently well developed. In performing the detailed three-dimensional analyses which are often required in proximity to discontinuities, however, analysts often encounter difficulties due to large models. Even with the current availability of powerful computers, models which are too large to run, either from a resource or time standpoint, are often required. There are several approaches which can permit such analyses, including substructuring, use of superelements or transition elements, and the global/local approach. This effort is based on the so-called zoom technique for global/local analysis, where a global analysis is run, with the results of that analysis applied to a smaller region as boundary conditions, in as many iterations as are required to attain an analysis of the desired region. Before beginning the global/local analyses, it was necessary to evaluate the accuracy of the three-dimensional elements currently implemented in the Computational Structural Mechanics (CSM) Testbed. It was also desired to install, using the Experimental Element Capability, a number of displacement formulation elements which have well known behavior when used for analysis of laminated composites.

  8. Five years' experience of the modified Meek technique in the management of extensive burns.

    PubMed

    Hsieh, Chun-Sheng; Schuong, Jen-Yu; Huang, W S; Huang, Ted T

    2008-05-01

    The Meek technique of skin expansion is useful for covering a large open wound with a small piece of skin graft, but requires a carefully followed protocol. Over the past 5 years, a skin graft expansion technique following the Meek principle was used to treat 37 individuals who had sustained third degree burns involving more than 40% of the body surface. A scheme was devised whereby the body was divided into six areas, in order to clarify the optimal order of wound debridements and skin grafting procedures as well as the regimen of aftercare. The mean body surface involvement was 72.9% and the mean area of third degree burns was 41%. The average number of operations required was 1.84. There were four deaths in this group of patients. The Meek technique of skin expansion and the suggested protocol are together efficient and effective in covering an open wound, particularly where there is a paucity of skin graft donor sites.

  9. Electrical characterization of a Mapham inverter using pulse testing techniques

    NASA Technical Reports Server (NTRS)

    Baumann, E. D.; Myers, I. T.; Hammond, A. N.

    1990-01-01

    Electric power requirements for aerospace missions have reached megawatt power levels. Within the next few decades, it is anticipated that a manned lunar base, interplanetary travel, and surface exploration of the Martian surface will become reality. Several research and development projects aimed at demonstrating megawatt power level converters for space applications are currently underway at the NASA Lewis Research Center. Innovative testing techniques will be required to evaluate the components and converters, when developed, at their rated power in the absence of costly power sources, loads, and cooling systems. Facilities capable of testing these components and systems at full power are available, but their use may be cost prohibitive. The use of a multiple pulse testing technique is proposed to determine the electrical characteristics of large megawatt level power systems. Characterization of a Mapham inverter is made using the proposed technique and conclusions are drawn concerning its suitability as an experimental tool to evaluate megawatt level power systems.

  10. Ecological Effects of Weather Modification: A Problem Analysis.

    ERIC Educational Resources Information Center

    Cooper, Charles F.; Jolly, William C.

    This publication reviews the potential hazards to the environment of weather modification techniques as they eventually become capable of producing large scale weather pattern modifications. Such weather modifications could result in ecological changes which would generally require several years to be fully evident, including the alteration of…

  11. Lighter than air: A look at the past, a look at the possibilities

    NASA Technical Reports Server (NTRS)

    Shea, W. F.

    1975-01-01

    A brief history of LTA flight, including the development of the zeppelin, is presented. Safety and economy are discussed along with power requirements and production techniques. The problem of ground handling facilities for very large airships is briefly mentioned.

  12. Setting up a Rayleigh Scattering Based Flow Measuring System in a Large Nozzle Testing Facility

    NASA Technical Reports Server (NTRS)

    Panda, Jayanta; Gomez, Carlos R.

    2002-01-01

    A molecular Rayleigh scattering based air density measurement system has been built in a large nozzle testing facility at NASA Glenn Research Center. The technique depends on the light scattering by gas molecules present in air; no artificial seeding is required. Light from a single mode, continuous wave laser was transmitted to the nozzle facility by optical fiber, and light scattered by gas molecules, at various points along the laser beam, is collected and measured by photon-counting electronics. By placing the laser beam and collection optics on synchronized traversing units, the point measurement technique is made effective for surveying density variation over a cross-section of the nozzle plume. Various difficulties associated with dust particles, stray light, high noise level and vibration are discussed. Finally, a limited amount of data from an underexpanded jet are presented and compared with expected variations to validate the technique.

  13. Colonization of bone matrices by cellular components

    NASA Astrophysics Data System (ADS)

    Shchelkunova, E. I.; Voropaeva, A. A.; Korel, A. V.; Mayer, D. A.; Podorognaya, V. T.; Kirilova, I. A.

    2017-09-01

    Practical surgery, traumatology, orthopedics, and oncology require bioengineered constructs suitable for the replacement of large-area bone defects. Only a rigid/elastic matrix containing the recipient's bone cells, capable of mitosis, differentiation, and synthesis of extracellular matrix that supports cell viability, can comply with these requirements. Therefore, the development of techniques to produce structural and functional substitutes whose three-dimensional structure corresponds to the recipient's damaged tissues is the main objective of tissue engineering. This is achieved by developing tissue-engineered constructs represented by cells placed on matrices. Low effectiveness of carrier matrix colonization with cells and their uneven distribution is one of the major problems in cell culture on various matrices. In vitro studies of the interactions between cells and material, as well as the development of new techniques for scaffold colonization by cellular components, are required to solve this problem.

  14. Trends in the Surgical Correction of Gynecomastia.

    PubMed

    Brown, Rodger H; Chang, Daniel K; Siy, Richard; Friedman, Jeffrey

    2015-05-01

    Gynecomastia refers to the enlargement of the male breast due to a proliferation of ductal, stromal, and/or fatty tissue. Although it is a common condition affecting up to 65% of men, not all cases require surgical intervention. Contemporary surgical techniques in the treatment of gynecomastia have become increasingly less invasive with the advent of liposuction and its variants, including power-assisted and ultrasound-assisted liposuction. These techniques, however, have been largely limited by their inability to address significant skin excess and ptosis. For mild to moderate gynecomastia, newer techniques using arthroscopic morcellation and endoscopic techniques promise to address the fibrous component, while minimizing scar burden by utilizing liposuction incisions. Nevertheless, direct excision through periareolar incisions remains a mainstay in treatment algorithms for its simplicity and avoidance of additional instrumentation. This is particularly true for more severe cases of gynecomastia requiring skin resection. In the most severe cases with significant skin redundancy and ptosis, breast amputation with free nipple grafting remains an effective option. Surgical treatment should be individualized to each patient, combining techniques to provide adequate resection and optimize aesthetic results.

  15. Trends in the Surgical Correction of Gynecomastia

    PubMed Central

    Brown, Rodger H.; Chang, Daniel K.; Siy, Richard; Friedman, Jeffrey

    2015-01-01

    Gynecomastia refers to the enlargement of the male breast due to a proliferation of ductal, stromal, and/or fatty tissue. Although it is a common condition affecting up to 65% of men, not all cases require surgical intervention. Contemporary surgical techniques in the treatment of gynecomastia have become increasingly less invasive with the advent of liposuction and its variants, including power-assisted and ultrasound-assisted liposuction. These techniques, however, have been largely limited by their inability to address significant skin excess and ptosis. For mild to moderate gynecomastia, newer techniques using arthroscopic morcellation and endoscopic techniques promise to address the fibrous component, while minimizing scar burden by utilizing liposuction incisions. Nevertheless, direct excision through periareolar incisions remains a mainstay in treatment algorithms for its simplicity and avoidance of additional instrumentation. This is particularly true for more severe cases of gynecomastia requiring skin resection. In the most severe cases with significant skin redundancy and ptosis, breast amputation with free nipple grafting remains an effective option. Surgical treatment should be individualized to each patient, combining techniques to provide adequate resection and optimize aesthetic results. PMID:26528088

  16. Control of flexible structures

    NASA Technical Reports Server (NTRS)

    Russell, R. A.

    1985-01-01

    The requirements for future space missions indicate that many of these spacecraft will be large, flexible, and in some applications, require precision geometries. A technology program that addresses the issues associated with the structure/control interactions for these classes of spacecraft is discussed. The goal of the NASA control of flexible structures technology program is to generate a technology data base that will provide the designer with options and approaches to achieve spacecraft performance such as maintaining geometry and/or suppressing undesired spacecraft dynamics. This technology program will define the appropriate combination of analysis, ground testing, and flight testing required to validate the structural/controls analysis and design tools. This work was motivated by a recognition that large minimum weight space structures will be required for many future missions. The tools necessary to support such design included: (1) improved structural analysis; (2) modern control theory; (3) advanced modeling techniques; (4) system identification; and (5) the integration of structures and controls.

  17. Microearthquake Studies at the Salton Sea Geothermal Field

    DOE Data Explorer

    Templeton, Dennise

    2013-10-01

    The objective of this project is to detect and locate microearthquakes to aid in the characterization of reservoir fracture networks. Accurate identification and mapping of the large numbers of microearthquakes induced in enhanced geothermal systems (EGS) is one technique that provides diagnostic information when determining the location, orientation and length of underground crack systems for use in reservoir development and management applications. Conventional earthquake location techniques are often employed to locate microearthquakes. However, these techniques require labor-intensive picking of individual seismic phase onsets across a network of sensors. For this project we adapt the Matched Field Processing (MFP) technique to the elastic propagation problem in geothermal reservoirs to identify more and smaller events than traditional methods alone.
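
    A hedged sketch of the matched-field idea: compare observed array arrival patterns against modeled "replica" patterns over a grid of candidate source locations and keep the best match. For brevity this matches de-meaned delay patterns in a constant-velocity medium; a full MFP implementation correlates waveforms through a realistic velocity model.

```python
import numpy as np

fs, v = 500.0, 3000.0                       # sample rate (Hz), velocity (m/s)
sensors = np.array([[0, 0], [800, 0], [0, 800], [800, 800]], float)

def replica_delays(src):
    """Modeled travel times from a candidate source to every sensor."""
    return np.linalg.norm(sensors - src, axis=1) / v

def locate(obs_delays, grid):
    best, best_score = None, np.inf
    for src in grid:
        rep = replica_delays(src)
        # De-mean both delay sets to remove the unknown origin time.
        score = np.sum(((obs_delays - obs_delays.mean())
                        - (rep - rep.mean())) ** 2)
        if score < best_score:
            best, best_score = src, score
    return best

grid = [np.array([x, y]) for x in range(0, 801, 50) for y in range(0, 801, 50)]
true_src = np.array([350.0, 600.0])
obs = replica_delays(true_src) + np.random.normal(0, 1e-4, len(sensors))
print("located at:", locate(obs, grid))
```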

  18. Bacteriophage vehicles for phage display: biology, mechanism, and application.

    PubMed

    Ebrahimizadeh, Walead; Rajabibazl, Masoumeh

    2014-08-01

    The phage display technique is a powerful tool for selection of various biological agents. This technique allows construction of large libraries from the antibody repertoire of different hosts and provides a fast and high-throughput selection method. Specific antibodies can be isolated based on distinctive characteristics from a library consisting of millions of members. These features made phage display technology preferred method for antibody selection and engineering. There are several phage display methods available and each has its unique merits and application. Selection of appropriate display technique requires basic knowledge of available methods and their mechanism. In this review, we describe different phage display techniques, available bacteriophage vehicles, and their mechanism.

  19. Extended maxillotomy for skull base access in contemporary management of chordomas: Rationale and technical aspect.

    PubMed

    Abdul Jalil, Muhammad Fahmi; Story, Rowan D; Rogers, Myron

    2017-05-01

    Minimally invasive approaches to the central skull base have been popularized over the last decade and have to a large extent displaced 'open' procedures. However, traditional skull base surgery still has its role, especially when dealing with a large clival chordoma where maximal surgical resection is the principal goal to maximize patient survival. In this paper, we present the case of a 25-year-old male patient with a chordoma in the inferior clivus which was initially debulked via a transnasal endoscopic approach. He unfortunately had a large recurrence of tumor requiring re-do resection. With the aim of achieving maximal surgical resection, we chose the technique of a transoral approach with Le Fort 1 maxillotomy and midline palatal split. The post-operative course was uneventful and post-operative MRI confirmed significant debulking of the clival lesion. The technique employed for the surgical procedure is presented here in detail, as is our experience over two decades using this technique for tumors, inflammatory lesions and congenital abnormalities at the cranio-cervical junction.

  20. Large-area formation of self-aligned crystalline domains of organic semiconductors on transistor channels using CONNECT

    PubMed Central

    Park, Steve; Giri, Gaurav; Shaw, Leo; Pitner, Gregory; Ha, Jewook; Koo, Ja Hoon; Gu, Xiaodan; Park, Joonsuk; Lee, Tae Hoon; Nam, Ji Hyun; Hong, Yongtaek; Bao, Zhenan

    2015-01-01

    The electronic properties of solution-processable small-molecule organic semiconductors (OSCs) have rapidly improved in recent years, rendering them highly promising for various low-cost large-area electronic applications. However, practical applications of organic electronics require patterned and precisely registered OSC films within the transistor channel region with uniform electrical properties over a large area, a task that remains a significant challenge. Here, we present a technique termed “controlled OSC nucleation and extension for circuits” (CONNECT), which uses differential surface energy and solution shearing to simultaneously generate patterned and precisely registered OSC thin films within the channel region and with aligned crystalline domains, resulting in low device-to-device variability. We have fabricated transistor densities as high as 840 dpi with a yield of 99%. We have successfully built various logic gates and a 2-bit half-adder circuit, demonstrating the practical applicability of our technique for large-scale circuit fabrication. PMID:25902502

  1. A historical perspective on ventilator management.

    PubMed

    Shapiro, B A

    1994-02-01

    Paralysis via neuromuscular blockade in ICU patients requires mechanical ventilation. This review historically addresses the technological advances and scientific information upon which ventilatory management concepts are based, with special emphasis on the influence such concepts have had on the use of neuromuscular blocking agents. Specific reference is made to the scientific information and technological advances leading to the newer concepts of ventilatory management. Information from > 100 major studies in the peer-reviewed medical literature, along with the author's 25 yrs of clinical experience and academic involvement in acute respiratory care, is presented. Nomenclature related to ventilatory management is specifically defined and consistently utilized to present and interpret the data. Pre-1970 ventilatory management is traced from the clinically unacceptable pressure-limited devices to the reliable performance of volume-limited ventilators. The scientific data and rationale that led to the concept of relatively large tidal volume delivery are reviewed in the light of today's concerns regarding alveolar overdistention, control-mode dyssynchrony, and auto-positive end-expiratory pressure. Also presented are the post-1970 scientific rationales for continuous positive airway pressure/positive end-expiratory pressure therapy, avoidance of alveolar hyperoxia, and partial ventilatory support techniques (intermittent mandatory ventilation/synchronized intermittent mandatory ventilation). The development of pressure-support devices is discussed and the capability of pressure-control techniques is presented. The rationale for more recent concepts of total ventilatory support to avoid ventilator-induced lung injury is presented. The traditional techniques utilizing volume-preset ventilators with relatively large tidal volumes remain valid and desirable for the vast majority of patients requiring mechanical ventilation. Neuromuscular blockade is best avoided in these patients. However, adequate analgesia, amnesia, and sedation are required. For patients with severe lung disease, alveolar overdistention and hyperoxia should be avoided and may be best accomplished by total ventilatory support techniques, such as pressure control. Total ventilatory support requires neuromuscular blockade and may not provide eucapnic ventilation.

  2. Time reducing exposure containing 18 fluorine fluorodeoxyglucose master vial dispensing in hot lab: Omega technique

    PubMed Central

    Rao, Vatturi Venkata Satya Prabhakar; Manthri, Ranadheer; Hemalatha, Pottumuthu; Kumar, Vuyyuru Navin; Azhar, Mohammad

    2016-01-01

    Hot lab dispensing of large doses of 18-fluorine fluorodeoxyglucose in master vials supplied from cyclotrons requires a high degree of skill in handling high doses. The presently practiced conventional method of fractionating from an inverted tiltable vial pig mounted on a metal frame has its own limitations, such as increased isotope handling times and exposure to the technologist. The innovative technique devised here markedly improves fractionating efficiency along with speed, precision, and reduced dose exposure. PMID:27095872

  3. Method and apparatus for phase and amplitude detection

    DOEpatents

    Cernosek, R.W.; Frye, G.C.; Martin, S.J.

    1998-06-09

    A new class of techniques has been developed which allow inexpensive application of SAW-type chemical sensor devices while retaining high sensitivity (ppm) to chemical detection. The new techniques do not require that the sensor be part of an oscillatory circuit, allowing large concentrations of, e.g., chemical vapors in air, to be accurately measured without compromising the capacity to measure trace concentrations. Such devices have numerous potential applications in environmental monitoring, from manufacturing environments to environmental restoration. 12 figs.
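
    The patent record above gives no circuit detail, so as a generic illustration only: the phase and amplitude of a sensor output can be recovered without an oscillator loop by quadrature (IQ) demodulation against a known reference frequency. The sketch below, with invented names and parameters, shows the idea.

        import numpy as np

        def iq_phase_amplitude(signal, fs, f_ref):
            # Estimate amplitude and phase of `signal` (sampled at fs Hz)
            # relative to a reference tone at f_ref Hz: mix with cos/sin
            # and average (a crude low-pass filter).
            t = np.arange(signal.size) / fs
            i = 2 * np.mean(signal * np.cos(2 * np.pi * f_ref * t))   # in-phase
            q = -2 * np.mean(signal * np.sin(2 * np.pi * f_ref * t))  # quadrature
            return np.hypot(i, q), np.arctan2(q, i)

        # Usage: recover a 0.7-amplitude tone with a 30-degree phase shift
        fs, f0 = 1e6, 97e3
        t = np.arange(10000) / fs
        x = 0.7 * np.cos(2 * np.pi * f0 * t + np.deg2rad(30))
        amp, ph = iq_phase_amplitude(x, fs, f0)
        print(amp, np.rad2deg(ph))  # ~0.7, ~30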

  4. SKYMAP system description: Star catalog data base generation and utilization

    NASA Technical Reports Server (NTRS)

    Gottlieb, D. M.

    1979-01-01

    The specifications, design, software description, and use of the SKYMAP star catalog system are detailed. The SKYMAP system was developed to provide an accurate and complete catalog of all stars with blue or visual magnitudes brighter than 9.0 for use by attitude determination programs. Because of the large number of stars which are brighter than 9.0 magnitude, efficient techniques of manipulating and accessing the data were required. These techniques of staged distillation of data from a Master Catalog to a Core Catalog, and direct access of overlapping zone catalogs, form the basis of the SKYMAP system. The collection and transformation of data required to produce the Master Catalog data base is described. The data flow through the main programs and levels of star catalogs is detailed. The mathematical and logical techniques for each program and the format of all catalogs are documented.
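
    The "direct access of overlapping zone catalogs" idea lends itself to a small sketch. Below, stars are bucketed into declination zones that overlap slightly, so a cone search with radius up to the overlap only ever reads a single zone. The zone height, overlap, and record layout are invented for illustration and are not SKYMAP's actual formats.

        import numpy as np

        ZONE_HEIGHT = 10.0  # degrees of declination per zone (assumed)
        OVERLAP = 1.0       # stars near a boundary are stored in both zones

        def build_zone_catalog(stars):
            # stars: iterable of (ra_deg, dec_deg, mag) tuples.
            # Boundary stars are duplicated into neighboring zones so a
            # search of radius <= OVERLAP never has to straddle two zones.
            zones = {}
            for ra, dec, mag in stars:
                base = (dec + 90.0) / ZONE_HEIGHT
                ids = {int(base),
                       int(base - OVERLAP / ZONE_HEIGHT),
                       int(base + OVERLAP / ZONE_HEIGHT)}
                for z in ids:
                    zones.setdefault(z, []).append((ra, dec, mag))
            return zones

        def cone_search(zones, ra0, dec0, radius):
            # Direct access: only the zone holding dec0 is ever read.
            z = int((dec0 + 90.0) / ZONE_HEIGHT)
            hits = []
            for ra, dec, mag in zones.get(z, []):
                dra = (ra - ra0 + 180.0) % 360.0 - 180.0  # wrap RA
                sep = np.hypot(dra * np.cos(np.radians(dec0)), dec - dec0)
                if sep <= radius:
                    hits.append((ra, dec, mag))
            return hits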

  5. Propeller Flaps: A Review of Indications, Technique, and Results

    PubMed Central

    D'Arpa, Salvatore; Toia, Francesca; Pirrello, Roberto; Moschella, Francesco; Cordova, Adriana

    2014-01-01

    In recent years, propeller flaps have become an appealing option for coverage of a large range of defects. Besides having a more reliable vascular pedicle than traditional flaps, propeller flaps allow for great freedom in design and for wide mobilization, extending the possibility of reconstructing difficult wounds with local tissues and minimal donor-site morbidity. They also allow one-stage reconstruction of defects that usually require multiple procedures. Harvesting of a propeller flap requires accurate patient selection, preoperative planning, and dissection technique. The complication rate can be kept low, provided that potential problems are prevented, promptly recognized, and adequately treated. This paper reviews current knowledge on propeller flaps. Definition, classification, and indications in the different body regions are discussed based on a review of the literature and on the authors' experience. Details about surgical technique are provided, together with tips to avoid and manage complications. PMID:24971367

  6. A Novel, Low-Volume Method for Organ Culture of Embryonic Kidneys That Allows Development of Cortico-Medullary Anatomical Organization

    PubMed Central

    Sebinger, David D. R.; Unbekandt, Mathieu; Ganeva, Veronika V.; Ofenbauer, Andreas; Werner, Carsten; Davies, Jamie A.

    2010-01-01

    Here, we present a novel method for culturing kidneys in low volumes of medium that offers more organotypic development compared to conventional methods. Organ culture is a powerful technique for studying renal development. It recapitulates many aspects of early development very well, but the established techniques have some disadvantages: in particular, they require relatively large volumes (1–3 ml) of culture medium, which can make high-throughput screens expensive; they require porous (filter) substrates which are difficult to modify chemically; and the organs produced do not achieve good cortico-medullary zonation. Here, we present a technique for growing kidney rudiments in very low volumes of medium (around 85 microliters) using silicone chambers. In this system, kidneys grow directly on glass, grow larger than in conventional culture, and develop a clear anatomical cortico-medullary zonation with extended loops of Henle. PMID:20479933

  7. Analysis of peptides using an integrated microchip HPLC-MS/MS system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirby, Brian J.; Chirica, Gabriela S.; Reichmuth, David S.

    Hyphenated LC-MS techniques are quickly becoming the standard tool for proteomic analyses. For large homogeneous samples, bulk processing methods and capillary injection and separation techniques are suitable. However, for analysis of small or heterogeneous samples, techniques that can manipulate picoliter samples without dilution are required or samples will be lost or corrupted; further, static nanospray-type flowrates are required to maximize SNR. Microchip-level integration of sample injection with separation and mass spectrometry allows small-volume analytes to be processed on chip and immediately injected without dilution for analysis. An on-chip HPLC was fabricated using in situ polymerization of both fixed and mobile polymer monoliths. Integration of the chip with a nanospray MS emitter enables identification of peptides by the use of tandem MS. The chip is capable of analyzing very small sample volumes (< 200 pl) in short times (< 3 min).

  8. Knowledge structure representation and automated updates in intelligent information management systems

    NASA Technical Reports Server (NTRS)

    Corey, Stephen; Carnahan, Richard S., Jr.

    1990-01-01

    A continuing effort to apply rapid prototyping and Artificial Intelligence techniques to problems associated with projected Space Station-era information management systems is examined. In particular, timely updating of the various databases and knowledge structures within the proposed intelligent information management system (IIMS) is critical to support decision making processes. Because of the significantly large amounts of data entering the IIMS on a daily basis, information updates will need to be automatically performed with some systems requiring that data be incorporated and made available to users within a few hours. Meeting these demands depends first, on the design and implementation of information structures that are easily modified and expanded, and second, on the incorporation of intelligent automated update techniques that will allow meaningful information relationships to be established. Potential techniques are studied for developing such an automated update capability and IIMS update requirements are examined in light of results obtained from the IIMS prototyping effort.

  9. Comparative Study of Neural Network Frameworks for the Next Generation of Adaptive Optics Systems.

    PubMed

    González-Gutiérrez, Carlos; Santos, Jesús Daniel; Martínez-Zarzuela, Mario; Basden, Alistair G; Osborn, James; Díaz-Pernas, Francisco Javier; De Cos Juez, Francisco Javier

    2017-06-02

    Many of the next generation of adaptive optics systems on large and extremely large telescopes require tomographic techniques in order to correct for atmospheric turbulence over a large field of view. Multi-object adaptive optics is one such technique. In this paper, different implementations of a tomographic reconstructor based on a machine learning architecture named "CARMEN" are presented. Basic concepts of adaptive optics are introduced first, with a short explanation of three different control systems used on real telescopes and the sensors utilised. The operation of the reconstructor, along with the three neural network frameworks used, and the developed CUDA code are detailed. Changes to the size of the reconstructor influence the training and execution time of the neural network. The native CUDA code turns out to be the best choice for all the systems, although some of the other frameworks offer good performance under certain circumstances.

  10. Nonlinear compensation techniques for magnetic suspension systems. Ph.D. Thesis - MIT

    NASA Technical Reports Server (NTRS)

    Trumper, David L.

    1991-01-01

    In aerospace applications, magnetic suspension systems may be required to operate over large variations in air-gap. Thus the nonlinearities inherent in most types of suspensions have a significant effect. Specifically, large variations in operating point may make it difficult to design a linear controller which gives satisfactory stability and performance over a large range of operating points. One way to address this problem is through the use of nonlinear compensation techniques such as feedback linearization. Nonlinear compensators have received limited attention in the magnetic suspension literature. In recent years, progress has been made in the theory of nonlinear control systems, and in the sub-area of feedback linearization. The idea is demonstrated of feedback linearization using a second order suspension system. In the context of the second order suspension, sampling rate issues in the implementation of feedback linearization are examined through simulation.
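
    The second-order suspension example lends itself to a concrete sketch. The code below assumes the common single-axis magnetic suspension model m x'' = m g - C (i/x)^2 (x the air gap, i the coil current); the linearizing current law inverts the force nonlinearity so that an outer linear PD loop sees a plain double integrator. All constants, gains, and function names are illustrative assumptions, not values from the thesis.

        import numpy as np

        # Assumed plant constants: m*x'' = m*g - C*(i/x)**2
        m, g, C = 0.1, 9.81, 1e-4

        def linearizing_current(x, a_des):
            # Invert the force law so the closed loop sees x'' = a_des.
            # The inverse exists only while the magnet must pull (a_des < g).
            a_des = min(a_des, g - 1e-6)  # keep the square root real
            return x * np.sqrt(m * (g - a_des) / C)

        def simulate(x0=0.012, x_ref=0.010, dt=1e-4, steps=5000,
                     kp=400.0, kd=40.0):
            # Euler simulation: the outer PD loop commands an acceleration,
            # and the linearizing map turns it into coil current.
            x, v = x0, 0.0
            for _ in range(steps):
                a_des = -kp * (x - x_ref) - kd * v   # linear outer loop
                i = linearizing_current(x, a_des)
                a = g - C * (i / x) ** 2 / m         # true nonlinear plant
                v += a * dt
                x += v * dt
            return x

        print(simulate())  # settles near 0.010 m despite the nonlinear plant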

  11. Comparative Study of Neural Network Frameworks for the Next Generation of Adaptive Optics Systems

    PubMed Central

    González-Gutiérrez, Carlos; Santos, Jesús Daniel; Martínez-Zarzuela, Mario; Basden, Alistair G.; Osborn, James; Díaz-Pernas, Francisco Javier; De Cos Juez, Francisco Javier

    2017-01-01

    Many of the next generation of adaptive optics systems on large and extremely large telescopes require tomographic techniques in order to correct for atmospheric turbulence over a large field of view. Multi-object adaptive optics is one such technique. In this paper, different implementations of a tomographic reconstructor based on a machine learning architecture named “CARMEN” are presented. Basic concepts of adaptive optics are introduced first, with a short explanation of three different control systems used on real telescopes and the sensors utilised. The operation of the reconstructor, along with the three neural network frameworks used, and the developed CUDA code are detailed. Changes to the size of the reconstructor influence the training and execution time of the neural network. The native CUDA code turns out to be the best choice for all the systems, although some of the other frameworks offer good performance under certain circumstances. PMID:28574426

  12. Chapter 15: Preparations of entomopathogens and diseased specimens for more detailed study using microscopy

    USDA-ARS?s Scientific Manuscript database

    The science of Insect Pathology encompasses a diverse assemblage of pathogens from a large and varied group of hosts. Microscopy techniques and protocols for these organisms are complex and varied and often require modifications and adaptations of standard procedures. The objective of this chapter...

  13. The Use of Computer Simulation Techniques in Educational Planning.

    ERIC Educational Resources Information Center

    Wilson, Charles Z.

    Computer simulations provide powerful models for establishing goals, guidelines, and constraints in educational planning. They are dynamic models that allow planners to examine logical descriptions of organizational behavior over time as well as permitting consideration of the large and complex systems required to provide realistic descriptions of…

  14. Collected Notes on the Workshop for Pattern Discovery in Large Databases

    NASA Technical Reports Server (NTRS)

    Buntine, Wray (Editor); Delalto, Martha (Editor)

    1991-01-01

    These collected notes are a record of material presented at the Workshop. The core data analysis tasks that have traditionally required statistical or pattern recognition techniques are addressed. Some of the core tasks include classification, discrimination, clustering, supervised and unsupervised learning, and discovery and diagnosis, i.e., general pattern discovery.

  15. A Theory of Term Importance in Automatic Text Analysis.

    ERIC Educational Resources Information Center

    Salton, G.; And Others

    Most existing automatic content analysis and indexing techniques are based on word frequency characteristics applied largely in an ad hoc manner. Contradictory requirements arise in this connection, in that terms exhibiting high occurrence frequencies in individual documents are often useful for high recall performance (to retrieve many relevant…
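
    The trade-off sketched above (locally frequent terms aid recall, while discrimination demands terms that are rare collection-wide) is the same one reconciled by the now-standard tf-idf weighting closely associated with Salton's work. The sketch below is a generic illustration of that weighting, not the paper's exact term-discrimination model.

        import math
        from collections import Counter

        def tf_idf(docs):
            # Weight each term by (frequency in the document) times
            # (inverse document frequency across the collection), so terms
            # frequent locally but rare globally score highest.
            n = len(docs)
            df = Counter(t for doc in docs for t in set(doc.split()))
            weights = []
            for doc in docs:
                tf = Counter(doc.split())
                weights.append({t: f * math.log(n / df[t])
                                for t, f in tf.items()})
            return weights

        docs = ["the cat sat on the mat",
                "the dog sat on the log",
                "cats and dogs and precision and recall"]
        for w in tf_idf(docs):
            # top three discriminating terms per document
            print(sorted(w.items(), key=lambda kv: -kv[1])[:3])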

  16. Tackling the challenges of fully immersive head-mounted AR devices

    NASA Astrophysics Data System (ADS)

    Singer, Wolfgang; Hillenbrand, Matthias; Münz, Holger

    2017-11-01

    The optical requirements of fully immersive head-mounted AR devices are inherently determined by the human visual system. The etendue of the visual system is large. As a consequence, the requirements for fully immersive head-mounted AR devices exceed those of almost any high-end optical system. Two promising solutions to achieve the large etendue and their challenges are discussed. Head-mounted augmented reality devices have been developed for decades - mostly for application within aircraft and in combination with a heavy and bulky helmet. The established head-up displays for applications within automotive vehicles typically utilize similar techniques. Recently, there is the vision of eyeglasses with built-in augmentation, offering a large field of view, and being unobtrusively all-day wearable. There seems to be no simple solution that reaches the functional performance requirements. Some known technical solution paths seem to be dead ends, while others offer promising perspectives, albeit with severe limitations. As an alternative, unobtrusively all-day wearable devices with a significantly smaller field of view are already possible.

  17. A digital gigapixel large-format tile-scan camera.

    PubMed

    Ben-Ezra, M

    2011-01-01

    Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications in cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format camera's large image plane can achieve very high resolution without compromising pixel size and thus can provide high-quality, high-resolution images. This digital large-format tile-scan camera can acquire high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.

  18. 3D-Laser-Scanning Technique Applied to Bulk Density Measurements of Apollo Lunar Samples

    NASA Technical Reports Server (NTRS)

    Macke, R. J.; Kent, J. J.; Kiefer, W. S.; Britt, D. T.

    2015-01-01

    In order to better interpret gravimetric data from orbiters such as GRAIL and LRO to understand the subsurface composition and structure of the lunar crust, it is important to have a reliable database of the density and porosity of lunar materials. To this end, we have been surveying these physical properties in both lunar meteorites and Apollo lunar samples. To measure porosity, both grain density and bulk density are required. For bulk density, our group has historically made extensive use of sub-mm bead immersion techniques, though several factors have made this technique problematic for our work with Apollo samples. Samples allocated for measurement are often smaller than optimal for the technique, leading to large error bars. Also, for some samples we were required to use pure alumina beads instead of our usual glass beads. The alumina beads were subject to undesirable static effects, producing unreliable results. Other investigators have tested the use of 3D laser scanners on meteorites for measuring bulk volumes. Early work, though promising, was plagued with difficulties including poor response on dark or reflective surfaces, difficulty reproducing sharp edges, and large processing times for producing shape models. Due to progress in technology, however, laser scanners have improved considerably in recent years. We tested this technique on 27 lunar samples in the Apollo collection using a scanner at NASA Johnson Space Center. We found it to be reliable and more precise than beads, with the added benefit that it involves no direct contact with the sample, enabling the study of particularly friable samples for which bead immersion is not possible.
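
    For context, the porosity that these bulk-volume measurements feed into follows directly from the two densities: phi = 1 - rho_bulk / rho_grain. A minimal sketch, with illustrative numbers rather than actual Apollo sample values:

        def porosity(mass_g, bulk_volume_cm3, grain_density_g_cm3):
            # Bulk density from the scanned volume, then porosity from an
            # independently measured grain density.
            rho_bulk = mass_g / bulk_volume_cm3
            return rho_bulk, 1.0 - rho_bulk / grain_density_g_cm3

        # Usage with illustrative (not actual Apollo) numbers:
        rho_bulk, phi = porosity(mass_g=52.3, bulk_volume_cm3=21.5,
                                 grain_density_g_cm3=3.10)
        print(f"bulk density {rho_bulk:.2f} g/cm^3, porosity {100*phi:.1f}%")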

  19. Shuttle cryogenic supply system. Optimization study. Volume 5 B-1: Programmers manual for math models

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A computer program for rapid parametric evaluation of various types of cryogenic spacecraft systems is presented. The mathematical techniques of the program provide the capability for in-depth analysis combined with rapid problem solution for the production of a large quantity of soundly based trade-study data. The program requires a large data bank capable of providing characteristic performance data for a wide variety of component assemblies used in cryogenic systems. The program data requirements are divided into: (1) the semipermanent data tables and source data for performance characteristics and (2) the variable input data which contains input parameters which may be perturbated for parametric system studies.

  20. Joining of Silicon Carbide-Based Ceramics by Reaction Forming Method

    NASA Technical Reports Server (NTRS)

    Singh, M.; Kiser, J. D.

    1997-01-01

    Recently, there has been a surge of interest in the development and testing of silicon-based ceramics and composite components for a number of aerospace and ground based systems. The designs often require fabrication of complex shaped parts which can be quite expensive. One attractive way of achieving this goal is to build up complex shapes by joining together geometrically simple shapes. However, the joints should have good mechanical strength and environmental stability comparable to the bulk materials. These joints should also be able to maintain their structural integrity at high temperatures. In addition, the joining technique should be practical, reliable, and affordable. Thus, joining has been recognized as one of the enabling technologies for the successful utilization of silicon carbide based ceramic components in high temperature applications. Overviews of various joining techniques, i.e., mechanical fastening, adhesive bonding, welding, brazing, and soldering have been provided in recent publications. The majority of the techniques used today are based on the joining of monolithic ceramics with metals either by diffusion bonding, metal brazing, brazing with oxides and oxynitrides, or diffusion welding. These techniques need either very high temperatures for processing or hot pressing (high pressures). The joints produced by these techniques have different thermal expansion coefficients than the ceramic materials, which creates a stress concentration in the joint area. The use temperatures for these joints are around 700 C. Ceramic joint interlayers have been developed as a means of obtaining high temperature joints. These joint interlayers have been produced via pre-ceramic polymers, in-situ displacement reactions, and reaction bonding techniques. Joints produced by the pre-ceramic polymer approach exhibit large amounts of porosity and poor mechanical properties. On the other hand, hot pressing or high pressures are needed for in-situ displacement reactions and reaction bonding techniques. Due to the equipment required, these techniques are impractical for joining large or complex shaped components.

  1. Information Management for a Large Multidisciplinary Project

    NASA Technical Reports Server (NTRS)

    Jones, Kennie H.; Randall, Donald P.; Cronin, Catherine K.

    1992-01-01

    In 1989, NASA's Langley Research Center (LaRC) initiated the High-Speed Airframe Integration Research (HiSAIR) Program to develop and demonstrate an integrated environment for high-speed aircraft design using advanced multidisciplinary analysis and optimization procedures. The major goals of this program were to evolve the interactions among disciplines and promote sharing of information, to provide a timely exchange of information among aeronautical disciplines, and to increase the awareness of the effects each discipline has upon other disciplines. LaRC historically has emphasized the advancement of analysis techniques. HiSAIR was founded to synthesize these advanced methods into a multidisciplinary design process emphasizing information feedback among disciplines and optimization. Crucial to the development of such an environment are the definition of the required data exchanges and the methodology for both recording the information and providing the exchanges in a timely manner. These requirements demand extensive use of data management techniques, graphic visualization, and interactive computing. HiSAIR represents the first attempt at LaRC to promote interdisciplinary information exchange on a large scale using advanced data management methodologies combined with state-of-the-art, scientific visualization techniques on graphics workstations in a distributed computing environment. The subject of this paper is the development of the data management system for HiSAIR.

  2. Design of a practical model-observer-based image quality assessment method for x-ray computed tomography imaging systems

    PubMed Central

    Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.

    2016-01-01

    The use of a channelization mechanism on model observers not only makes mimicking human visual behavior possible, but also reduces the amount of image data needed to estimate the model observer parameters. The channelized Hotelling observer (CHO) and channelized scanning linear observer (CSLO) have recently been used to assess CT image quality for detection tasks and combined detection/estimation tasks, respectively. Although the use of channels substantially reduces the amount of data required to compute image quality, the number of scans required for CT imaging is still not practical for routine use. It is our desire to further reduce the number of scans required to make CHO or CSLO an image quality tool for routine and frequent system validations and evaluations. This work explores different data-reduction schemes and designs an approach that requires only a few CT scans. Three different kinds of approaches are included in this study: a conventional CHO/CSLO technique with a large sample size, a conventional CHO/CSLO technique with fewer samples, and an approach that we will show requires fewer samples to mimic conventional performance with a large sample size. The mean value and standard deviation of areas under the ROC/EROC curve were estimated using the well-validated shuffle approach. The results indicate that an 80% data reduction can be achieved without loss of accuracy. This substantial data reduction is a step toward a practical tool for routine-task-based QA/QC CT system assessment. PMID:27493982
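
    A minimal sketch of the channelized Hotelling observer at the heart of such studies: images are projected onto a small set of channels, a Hotelling template is formed from the channel statistics, and detectability is summarized here by a test-statistic SNR. The array shapes, the random stand-in "channels", and the SNR summary are assumptions for illustration; the paper itself reports areas under ROC/EROC curves estimated with the shuffle approach.

        import numpy as np

        def cho_snr(signal_imgs, noise_imgs, channels):
            # signal_imgs, noise_imgs: (n_samples, n_pixels) image arrays;
            # channels: (n_pixels, n_channels) channel matrix (e.g. Gabor).
            vs = signal_imgs @ channels          # channel outputs, signal present
            vn = noise_imgs @ channels           # channel outputs, signal absent
            s = 0.5 * (np.cov(vs.T) + np.cov(vn.T))  # pooled channel covariance
            w = np.linalg.solve(s, vs.mean(0) - vn.mean(0))  # Hotelling template
            ts, tn = vs @ w, vn @ w              # test statistic per image
            return (ts.mean() - tn.mean()) / np.sqrt(0.5 * (ts.var() + tn.var()))

        # Usage with toy data: 200 signal-present and 200 noise-only "images"
        rng = np.random.default_rng(4)
        n_pix, n_ch = 64, 5
        U = rng.normal(size=(n_pix, n_ch))       # stand-in channel matrix
        sig = 0.3 * np.ones(n_pix)               # weak uniform signal
        print(cho_snr(rng.normal(size=(200, n_pix)) + sig,
                      rng.normal(size=(200, n_pix)), U))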

  3. UFO: a web server for ultra-fast functional profiling of whole genome protein sequences.

    PubMed

    Meinicke, Peter

    2009-09-02

    Functional profiling is a key technique to characterize and compare the functional potential of entire genomes. The estimation of profiles according to an assignment of sequences to functional categories is a computationally expensive task because it requires the comparison of all protein sequences from a genome with a usually large database of annotated sequences or sequence families. Based on machine learning techniques for Pfam domain detection, the UFO web server for ultra-fast functional profiling allows researchers to process large protein sequence collections instantaneously. Besides the frequencies of Pfam and GO categories, the user also obtains the sequence specific assignments to Pfam domain families. In addition, a comparison with existing genomes provides dissimilarity scores with respect to 821 reference proteomes. Considering the underlying UFO domain detection, the results on 206 test genomes indicate a high sensitivity of the approach. In comparison with current state-of-the-art HMMs, the runtime measurements show a considerable speed up in the range of four orders of magnitude. For an average size prokaryotic genome, the computation of a functional profile together with its comparison typically requires about 10 seconds of processing time. For the first time the UFO web server makes it possible to get a quick overview on the functional inventory of newly sequenced organisms. The genome scale comparison with a large number of precomputed profiles allows a first guess about functionally related organisms. The service is freely available and does not require user registration or specification of a valid email address.

  4. MOSAIC - A space-multiplexing technique for optical processing of large images

    NASA Technical Reports Server (NTRS)

    Athale, Ravindra A.; Astor, Michael E.; Yu, Jeffrey

    1993-01-01

    A technique for Fourier processing of images larger than the space-bandwidth products of conventional or smart spatial light modulators and two-dimensional detector arrays is described. The technique involves a spatial combination of subimages displayed on individual spatial light modulators to form a phase-coherent image, which is subsequently processed with Fourier optical techniques. Because of the technique's similarity with the mosaic technique used in art, the processor used is termed an optical MOSAIC processor. The phase accuracy requirements of this system were studied by computer simulation. It was found that phase errors of less than lambda/8 did not degrade the performance of the system and that the system was relatively insensitive to amplitude nonuniformities. Several schemes for implementing the subimage combination are described. Initial experimental results demonstrating the validity of the mosaic concept are also presented.

  5. Development of tritium permeation barriers on Al base in Europe

    NASA Astrophysics Data System (ADS)

    Benamati, G.; Chabrol, C.; Perujo, A.; Rigal, E.; Glasbrenner, H.

    The development of the water-cooled lithium lead (WCLL) DEMO fusion reactor requires the production of a material capable of acting as a tritium permeation barrier (TPB). In the DEMO blanket reactor, permeation barriers on the structural material are required to reduce the tritium permeation from the Pb-17Li or the plasma into the cooling water to acceptable levels (<1 g/d). Based on experimental work previously performed, one of the most promising TPB candidates is Al-based coatings. Within the EU a large R&D programme is in progress to develop a TPB fabrication technique compatible with the structural materials requirements and capable of producing coatings with acceptable performance. The research is focused on chemical vapour deposition (CVD), hot dipping, hot isostatic pressing (HIP) technology, and spray deposition techniques (the last also being developed for repair). The final goal is to select a reference technique to be used in the blanket of the DEMO reactor and in the ITER test module fabrication. The activities performed in four European laboratories are summarised here.

  6. Avionic Architecture for Model Predictive Control Application in Mars Sample & Return Rendezvous Scenario

    NASA Astrophysics Data System (ADS)

    Saponara, M.; Tramutola, A.; Creten, P.; Hardy, J.; Philippe, C.

    2013-08-01

    Optimization-based control techniques such as Model Predictive Control (MPC) are considered extremely attractive for space rendezvous, proximity operations, and capture applications that require a high level of autonomy, optimal path planning, and dynamic safety margins. Such control techniques entail high computational demands for solving large optimization problems on board. The development and implementation in a flight-representative avionic architecture of an MPC-based Guidance, Navigation and Control system has been investigated in the ESA R&T study “On-line Reconfiguration Control System and Avionics Architecture” (ORCSAT) of the Aurora programme. The paper presents the baseline HW and SW avionic architectures, and verification test results obtained with a customised RASTA spacecraft avionics development platform from Aeroflex Gaisler.

  7. Analysis of a crossed Bragg-cell acousto optical spectrometer for SETI

    NASA Technical Reports Server (NTRS)

    Gulkis, S.

    1986-01-01

    The search for radio signals from extraterrestrial intelligent beings (SETI) requires the use of large instantaneous bandwidth (500 MHz) and high resolution (20 Hz) spectrometers. Digital systems with a high degree of modularity can be used to provide this capability, and this method has been widely discussed. Another technique for meeting the SETI requirement is to use a crossed Bragg-cell spectrometer as described by Psaltis and Casasent (1979). This technique makes use of the Folded Spectrum concept, introduced by Thomas (1966). The Folded Spectrum is a two-dimensional Fourier Transform of a raster scanned one-dimensional signal. It is directly related to the long one-dimensional spectrum of the original signal and is ideally suited for optical signal processing.

  8. Analysis of a crossed Bragg-cell acousto optical spectrometer for SETI

    NASA Astrophysics Data System (ADS)

    Gulkis, S.

    1986-10-01

    The search for radio signals from extraterrestrial intelligent beings (SETI) requires the use of large instantaneous bandwidth (500 MHz) and high resolution (20 Hz) spectrometers. Digital systems with a high degree of modularity can be used to provide this capability, and this method has been widely discussed. Another technique for meeting the SETI requirement is to use a crossed Bragg-cell spectrometer as described by Psaltis and Casasent (1979). This technique makes use of the Folded Spectrum concept, introduced by Thomas (1966). The Folded Spectrum is a two-dimensional Fourier Transform of a raster scanned one-dimensional signal. It is directly related to the long one-dimensional spectrum of the original signal and is ideally suited for optical signal processing.
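
    A minimal sketch of the Folded Spectrum computation described in the two records above: the long one-dimensional record is raster-scanned into a two-dimensional array and a two-dimensional FFT is taken, so a narrowband tone collapses to a compact peak whose coordinates encode its coarse and fine frequency. All parameters are illustrative.

        import numpy as np

        def folded_spectrum(x, rows):
            # Raster the long 1-D record into a 2-D array and take its
            # 2-D FFT; coarse frequency varies along one axis and fine
            # frequency along the other.
            cols = x.size // rows
            return np.fft.fftshift(
                np.fft.fft2(x[:rows * cols].reshape(rows, cols)))

        # Usage: a weak CW tone buried in noise becomes a localized 2-D peak
        rng = np.random.default_rng(1)
        n = 1 << 16
        x = np.exp(2j * np.pi * 0.1234 * np.arange(n)) + 5 * rng.normal(size=n)
        f2 = folded_spectrum(x, rows=256)
        print(np.unravel_index(np.abs(f2).argmax(), f2.shape))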

  9. The influence of orientation on the stress rupture properties of nickel-base superalloy single crystals

    NASA Technical Reports Server (NTRS)

    Mackay, R. A.; Maier, R. D.

    1982-01-01

    Constant load creep rupture tests were performed on MAR-M247 single crystals at 724 MPa and 774 C where the effect of anisotropy is prominent. The initial orientations of the specimens as well as the final orientations of selected crystals after stress rupture testing were determined by the Laue back-reflection X-ray technique. The stress rupture lives of the MAR-M247 single crystals were found to be largely determined by the lattice rotations required to produce intersecting slip, because second-stage creep does not begin until after the onset of intersecting slip. Crystals which required large rotations to become oriented for intersecting slip exhibited the shortest stress rupture lives, whereas crystals requiring little or no rotations exhibited the lowest minimum creep rates, and consequently, the longest stress rupture lives.

  10. Data Compression Techniques for Advanced Space Transportation Systems

    NASA Technical Reports Server (NTRS)

    Bradley, William G.

    1998-01-01

    Advanced space transportation systems, including vehicle state of health systems, will produce large amounts of data which must be stored on board the vehicle and or transmitted to the ground and stored. The cost of storage or transmission of the data could be reduced if the number of bits required to represent the data is reduced by the use of data compression techniques. Most of the work done in this study was rather generic and could apply to many data compression systems, but the first application area to be considered was launch vehicle state of health telemetry systems. Both lossless and lossy compression techniques were considered in this study.
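
    As one concrete example of a simple lossless scheme of the kind such a study would weigh (a generic illustration, not the report's specific algorithm), delta encoding exploits the slow variation typical of state-of-health telemetry: successive differences cluster near zero and entropy-code far more compactly than the raw samples.

        import numpy as np

        def delta_encode(samples):
            # d[0] = x[0] - 0, then successive differences; exactly
            # invertible by a cumulative sum, so the scheme is lossless.
            return np.diff(samples, prepend=0)

        def delta_decode(deltas):
            return np.cumsum(deltas)

        # Usage: a slowly drifting telemetry channel
        rng = np.random.default_rng(2)
        x = 1000 + np.cumsum(rng.integers(-2, 3, size=1000))
        d = delta_encode(x)
        assert np.array_equal(delta_decode(d), x)  # lossless round trip
        print(np.ptp(x), np.ptp(d))  # wide raw range vs. tiny delta range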

  11. Neutron-based nonintrusive inspection techniques

    NASA Astrophysics Data System (ADS)

    Gozani, Tsahi

    1997-02-01

    Non-intrusive inspection of large objects such as trucks, sea-going shipping containers, air cargo containers, and pallets is gaining attention as a vital tool in combating terrorism, drug smuggling, and other violations of international and national transportation and Customs laws. Neutrons are the preferred probing radiation when material specificity is required, which is most often the case. Great strides have been made in neutron-based inspection techniques. Fast and thermal neutrons, whether in steady state or in microsecond, or even nanosecond, pulses are being employed to interrogate, at high speeds, for explosives, drugs, chemical agents, and nuclear and many other smuggled materials. Existing neutron techniques will be compared and their current status reported.

  12. Affinity+: Semi-Structured Brainstorming on Large Displays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burtner, Edwin R.; May, Richard A.; Scarberry, Randall E.

    2013-04-27

    Affinity diagramming is a powerful method for encouraging and capturing lateral thinking in a group environment. The Affinity+ concept was designed to improve the collaborative brainstorming process through the use of large display surfaces in conjunction with mobile devices like smart phones and tablets. The system works by capturing the ideas digitally and allowing users to sort and group them on a large touch screen manually. Additionally, Affinity+ incorporates theme detection, topic clustering, and other processing algorithms that help bring structured analytic techniques to the process without requiring explicit leadership roles and other overhead typically involved in these activities.

  13. Digital robust active control law synthesis for large order flexible structure using parameter optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.

    1988-01-01

    A generic procedure for the parameter optimization of a digital control law for a large-order flexible flight vehicle or large space structure modeled as a sampled data system is presented. A linear quadratic Gaussian type cost function was minimized, while satisfying a set of constraints on the steady-state rms values of selected design responses, using a constrained optimization technique to meet multiple design requirements. Analytical expressions for the gradients of the cost function and the design constraints on mean square responses with respect to the control law design variables are presented.

  14. Linear chirp phase perturbing approach for finding binary phased codes

    NASA Astrophysics Data System (ADS)

    Li, Bing C.

    2017-05-01

    Binary phased codes have many applications in communication and radar systems. These applications require binary phased codes to have low sidelobes in order to reduce interference and false detections. Barker codes satisfy these requirements and have the lowest maximum sidelobes. However, Barker codes have very limited code lengths (equal to or less than 13), while many applications, including low probability of intercept radar and spread spectrum communication, require much greater code lengths. The conventional techniques for finding binary phased codes in the literature include exhaustive search, neural networks, and evolutionary methods, and they all require very expensive computation for large code lengths. Therefore these techniques are limited to finding binary phased codes with small code lengths (less than 100). In this paper, by analyzing Barker code, linear chirp, and P3 phases, we propose a new approach to finding binary codes. Experiments show that the proposed method is able to find long low-sidelobe binary phased codes (code length >500) with reasonable computational cost.
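
    The sidelobe criterion in this abstract is easy to state in code. The sketch below computes the peak sidelobe of a binary code's aperiodic autocorrelation and confirms the defining Barker property, every sidelobe having magnitude at most 1, for the length-13 code.

        import numpy as np

        def peak_sidelobe(code):
            # Peak sidelobe level of a binary phased code's aperiodic
            # autocorrelation (the mainlobe at zero lag is excluded).
            acf = np.correlate(code, code, mode="full")
            mid = acf.size // 2
            return int(np.abs(np.delete(acf, mid)).max())

        barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
        print(peak_sidelobe(barker13))  # 1: every sidelobe magnitude <= 1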

  15. ITER-relevant calibration technique for soft x-ray spectrometer.

    PubMed

    Rzadkiewicz, J; Książek, I; Zastrow, K-D; Coffey, I H; Jakubowska, K; Lawson, K D

    2010-10-01

    The ITER-oriented JET research program brings new requirements for the low-Z impurity monitoring, in particular for the Be—the future main wall component of JET and ITER. Monitoring based on Bragg spectroscopy requires an absolute sensitivity calibration, which is challenging for large tokamaks. This paper describes both “component-by-component” and “continua” calibration methods used for the Be IV channel (75.9 Å) of the Bragg rotor spectrometer deployed on JET. The calibration techniques presented here rely on multiorder reflectivity calculations and measurements of continuum radiation emitted from helium plasmas. These offer excellent conditions for the absolute photon flux calibration due to their low level of impurities. It was found that the component-by-component method gives results that are four times higher than those obtained by means of the continua method. A better understanding of this discrepancy requires further investigations.

  16. Design and development of a quad copter (UMAASK) using CAD/CAM/CAE

    NASA Astrophysics Data System (ADS)

    Manarvi, Irfan Anjum; Aqib, Muhammad; Ajmal, Muhammad; Usman, Muhammad; Khurshid, Saqib; Sikandar, Usman

    Micro flying vehicles (MFV) have become a popular area of research due to economy of production, flexibility of launch, and variety of applications. A large number of techniques, from pencil sketching to computer-based software, are being used for designing specific geometries and selecting materials to arrive at novel designs for specific requirements. The present research was focused on the development of a suitable design configuration using CAD/CAM/CAE tools and techniques. A number of designs were reviewed for this purpose. Finally, a rotary-wing quadcopter flying vehicle design was considered appropriate for this research. Performance requirements were set as a ceiling of approximately 10 meters, a weight of less than 500 grams, and the ability to take videos and pictures. Parts were designed using Finite Element Analysis, manufactured using CNC machines, and assembled to arrive at the final design, named UMAASK. Flight tests were carried out which confirmed the design requirements.

  17. Superdense teleportation using hyperentangled photons

    PubMed Central

    Graham, Trent M.; Bernstein, Herbert J.; Wei, Tzu-Chieh; Junge, Marius; Kwiat, Paul G

    2015-01-01

    Transmitting quantum information between two remote parties is a requirement for many quantum applications; however, direct transmission of states is often impossible because of noise and loss in the communication channel. Entanglement-enhanced state communication can be used to avoid this issue, but current techniques require extensive experimental resources to transmit large quantum states deterministically. To reduce these resource requirements, we use photon pairs hyperentangled in polarization and orbital angular momentum to implement superdense teleportation, which can communicate a specific class of single-photon ququarts. We achieve an average fidelity of 87.0(1)%, almost twice the classical limit of 44%, with fewer experimental resources than traditional techniques. We conclude by discussing the information content of this constrained set of states and demonstrate that this set has an exponentially larger state-space volume than the lower-dimensional general states with the same number of state parameters. PMID:26018201

  18. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    PubMed

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were recoded into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
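
    For reference, a minimal dense sketch of preconditioned conjugate gradient iteration with a Jacobi (diagonal) preconditioner. This shows only the underlying solver; the paper's contribution, a three-step iteration-on-data matrix-vector product that avoids storing the mixed model equations, is not reproduced here.

        import numpy as np

        def pcg(A, b, tol=1e-10, max_iter=1000):
            # Jacobi-preconditioned conjugate gradient for an SPD system
            # A x = b; M = diag(A) stands in for the preconditioner.
            x = np.zeros_like(b)
            m_inv = 1.0 / np.diag(A)
            r = b - A @ x
            z = m_inv * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                z = m_inv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        # Usage on a small synthetic SPD system
        rng = np.random.default_rng(3)
        G = rng.normal(size=(50, 50))
        A = G @ G.T + 50 * np.eye(50)
        b = rng.normal(size=50)
        print(np.allclose(pcg(A, b), np.linalg.solve(A, b)))  # True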

  19. Direct shear mapping - a new weak lensing tool

    NASA Astrophysics Data System (ADS)

    de Burgh-Day, C. O.; Taylor, E. N.; Webster, R. L.; Hopkins, A. M.

    2015-08-01

    We have developed a new technique called direct shear mapping (DSM) to measure gravitational lensing shear directly from observations of a single background source. The technique assumes the velocity map of an unlensed, stably rotating galaxy will be rotationally symmetric. Lensing distorts the velocity map, making it asymmetric. The degree of lensing can be inferred by determining the transformation required to restore axisymmetry. This technique is in contrast to traditional weak lensing methods, which require averaging an ensemble of background galaxy ellipticity measurements to obtain a single shear measurement. We have tested the efficacy of our fitting algorithm with a suite of systematic tests on simulated data. We demonstrate that we are in principle able to measure shears as small as 0.01. In practice, we have fitted for the shear in very low redshift (and hence unlensed) velocity maps, and have obtained a null result with an error of ±0.01. This high sensitivity results from analysing spatially resolved spectroscopic images (i.e. 3D data cubes), including not just shape information (as in traditional weak lensing measurements) but velocity information as well. Spirals and rotating ellipticals are ideal targets for this new technique. Data from any large Integral Field Unit (IFU) or radio telescope is suitable, or indeed any instrument with spatially resolved spectroscopy such as the Sydney-Australian-Astronomical Observatory Multi-Object Integral Field Spectrograph (SAMI), the Atacama Large Millimeter/submillimeter Array (ALMA), the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) and the Square Kilometer Array (SKA).

  20. Modal Analysis of an Aircraft Fuselage Panel Using Experimental and Finite-Element Techniques

    NASA Technical Reports Server (NTRS)

    Fleming, Gary A.; Buehrle, Ralph D.; Storaasli, Olaf L.

    1998-01-01

    The application of Electro-Optic Holography (EOH) for measuring the center bay vibration modes of an aircraft fuselage panel under forced excitation is presented. The requirement of free-free panel boundary conditions made the acquisition of quantitative EOH data challenging, since large-scale rigid body motions corrupted measurements of the high-frequency vibrations of interest. Image processing routines designed to minimize the effects of large-scale motions were applied to successfully resurrect quantitative EOH vibrational amplitude measurements.

  1. Large area ion beam sputtered YBa2Cu3O(7-delta) films for novel device structures

    NASA Astrophysics Data System (ADS)

    Gauzzi, A.; Lucia, M. L.; Kellett, B. J.; James, J. H.; Pavuna, D.

    1992-03-01

    A simple single-target ion-beam system is employed to reproducibly manufacture large-area, uniformly superconducting YBa2Cu3O(7-delta) films. The required '123' stoichiometry is transferred from the target to the substrate when the ion-beam power, target/ion-beam angle, and target temperature are adequately controlled. Ion-beam sputtering is demonstrated experimentally to be an effective technique for producing homogeneous YBa2Cu3O(7-delta) films.

  2. Development of CCD imaging sensors for space applications, phase 1

    NASA Technical Reports Server (NTRS)

    Antcliffe, G. A.

    1975-01-01

    The results of an experimental investigation to develop a large area charge coupled device (CCD) imager for space photography applications are described. Details of the design and processing required to achieve 400 X 400 imagers are presented together with a discussion of the optical characterization techniques developed for this program. A discussion of several aspects of large CCD performance is given with detailed test reports. The areas covered include dark current, uniformity of optical response, square wave amplitude response, spectral responsivity and dynamic range.

  3. Perspectives in astrophysical databases

    NASA Astrophysics Data System (ADS)

    Frailis, Marco; de Angelis, Alessandro; Roberto, Vito

    2004-07-01

    Astrophysics has become a domain extremely rich in scientific data. Data mining tools are needed for information extraction from such large data sets. This calls for an approach to data management emphasizing the efficiency and simplicity of data access; efficiency is obtained using multidimensional access methods, and simplicity is achieved by properly handling metadata. Moreover, clustering and classification techniques on large data sets pose additional requirements in terms of computation and memory scalability and interpretability of results. In this study we review some possible solutions.

  4. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...

    2017-01-28

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is Computed Tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-size workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide a 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  5. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is Computed Tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-size workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide a 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  6. Application of Semi Active Control Techniques to the Damping Suppression Problem of Solar Sail Booms

    NASA Technical Reports Server (NTRS)

    Adetona, O.; Keel, L. H.; Whorton, M. S.

    2007-01-01

    Solar sails provide a propellant-free form of space propulsion. These are large, flat surfaces that generate thrust when they are impacted by light. When attached to a space vehicle, the thrust generated can propel the space vehicle to great distances at significant speeds. For optimal performance the sail must be kept from excessive vibration. Active control techniques can provide the best performance; however, they require an external power source that may add significant parasitic mass to the solar sail, and solar sails require low mass for optimal performance. Secondly, active control techniques typically require a good system model to ensure stability and performance, and the accuracy of solar sail models validated on Earth for a space environment is questionable. An alternative approach is passive vibration techniques. These do not require an external power supply and do not destabilize the system. A third alternative is referred to as semi-active control. This approach tries to get the best of both active and passive control, while avoiding their pitfalls. In semi-active control, an active control law is designed for the system, and passive control techniques are used to implement it. As a result, no external power supply is needed and the system cannot be destabilized. Though it typically underperforms active control techniques, it has been shown to outperform passive control approaches and can be unobtrusively installed on a solar sail boom. Motivated by this, the objective of this research is to study the suitability of a piezoelectric (PZT) patch actuator/sensor based semi-active control system for the vibration suppression problem of solar sail booms. Accordingly, we develop a suitable mathematical and computer model for such studies and demonstrate the capabilities of the proposed approach with computer simulations.

  7. Semi-automated Neuron Boundary Detection and Nonbranching Process Segmentation in Electron Microscopy Images

    PubMed Central

    Jurrus, Elizabeth; Watanabe, Shigeki; Giuly, Richard J.; Paiva, Antonio R. C.; Ellisman, Mark H.; Jorgensen, Erik M.; Tasdizen, Tolga

    2013-01-01

    Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of these data make human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature-rich datasets are required. This paper presents a method for neuron boundary detection and nonbranching process segmentation in electron microscopy images, and for visualizing them in three dimensions. It combines automated segmentation techniques with a graphical user interface for correcting mistakes in the automated process. The automated process first uses machine learning and image processing techniques to identify the neuron membranes that delineate the cells in each two-dimensional section. To segment nonbranching processes, the cell regions in each two-dimensional section are connected in 3D using correlation of regions between sections. The combination of this method with a graphical user interface specially designed for this purpose enables users to quickly segment cellular processes in large volumes. PMID:22644867

  8. Semi-Automated Neuron Boundary Detection and Nonbranching Process Segmentation in Electron Microscopy Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jurrus, Elizabeth R.; Watanabe, Shigeki; Giuly, Richard J.

    2013-01-01

    Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of these data make human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature-rich datasets are required. This paper presents a method for neuron boundary detection and nonbranching process segmentation in electron microscopy images, and for visualizing them in three dimensions. It combines automated segmentation techniques with a graphical user interface for correcting mistakes in the automated process. The automated process first uses machine learning and image processing techniques to identify the neuron membranes that delineate the cells in each two-dimensional section. To segment nonbranching processes, the cell regions in each two-dimensional section are connected in 3D using correlation of regions between sections. The combination of this method with a graphical user interface specially designed for this purpose enables users to quickly segment cellular processes in large volumes.

  9. Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems

    NASA Astrophysics Data System (ADS)

    Koch, Patrick Nathan

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of the individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis, (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration, and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.

  10. Greener Techniques for the Synthesis of Silver Nanoparticles Using Plant Extracts, Enzymes, Bacteria, Biodegradable Polymers, and Microwaves

    EPA Science Inventory

    The use of silver nanoparticles (AgNPs) is gaining in popularity due to silver’s antibacterial properties. Conventional methods for AgNP synthesis require dangerous chemicals and large quantities of energy (heat) and can result in formation of hazardous by-products. This article ...

  11. Exploratory Factor Analysis with Small Sample Sizes

    ERIC Educational Resources Information Center

    de Winter, J. C. F.; Dodou, D.; Wieringa, P. A.

    2009-01-01

    Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes ("N"), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for "N" below 50. Simulations were carried out to estimate the minimum required "N" for different…

  12. Energy requirement for the production of silicon solar arrays

    NASA Technical Reports Server (NTRS)

    Lindmayer, J.; Wihl, M.; Scheinine, A.; Morrison, A.

    1977-01-01

    An assessment of potential changes and alternative technologies which could impact the photovoltaic manufacturing process is presented. Topics discussed include: a multiple wire saw, ribbon growth techniques, silicon casting, and a computer model for a large-scale solar power plant. Emphasis is placed on reducing the energy demands of the manufacturing process.

  13. Welding And Cutting A Nickel Alloy By Laser

    NASA Technical Reports Server (NTRS)

    Banas, C. M.

    1990-01-01

    Technique effective and energy-efficient. Report describes evaluation of laser welding and cutting of Inconel(R) 718. Notes that electron-beam welding processes have been developed for In-718 but are difficult to use on large or complex structures. Cutting of In-718 by laser is fast and produces only a narrow kerf. Cut edge requires dressing to withstand fatigue.

  14. Microcomputer Techniques for Developing Hardwood Conservation Strategies

    Treesearch

    Steve McNiel

    1991-01-01

    Hardwoods are disappearing from the California landscape at alarming rates. This is due to a variety of influences, both natural and man-made. It is clear that conservation and rehabilitation of hardwood resources will require a large effort on the part of research institutes, universities, government agencies, special interest groups, private developers, maintenance...

  15. Program Analyzes Radar Altimeter Data

    NASA Technical Reports Server (NTRS)

    Vandemark, Doug; Hancock, David; Tran, Ngan

    2004-01-01

    A computer program has been written to perform several analyses of radar altimeter data. The program was designed to improve on previous methods of analysis of altimeter engineering data by (1) facilitating and accelerating the analysis of large amounts of data in a more direct manner and (2) improving the ability to estimate performance of radar-altimeter instrumentation and provide data corrections. The data in question are openly available to the international scientific community and can be downloaded from anonymous file-transfer-protocol (FTP) locations that are accessible via links from altimetry Web sites. The software estimates noise in range measurements, estimates corrections for electromagnetic bias, and performs statistical analyses on various parameters for comparison of different altimeters. Whereas prior techniques used to perform similar analyses of altimeter range noise require comparison of data from repetitions of satellite ground tracks, the present software uses a high-pass filtering technique to obtain similar results from single satellite passes. Elimination of the requirement for repeat-track analysis facilitates the analysis of large amounts of satellite data to assess subtle variations in range noise.
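
    The single-pass noise estimate rests on a simple observation: the geophysical signal varies slowly along-track, so the high-passed residual of the range series is dominated by instrument noise. A minimal sketch of that idea follows; the filter type, cutoff, and order are our illustrative choices, not the program's actual algorithm:

    ```python
    # Sketch of single-pass range-noise estimation by high-pass filtering,
    # the alternative to repeat-track differencing described above.
    # Filter parameters are illustrative, not the program's actual values.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def range_noise_estimate(ranges, fs_hz=1.0, cutoff_hz=0.1, order=4):
        """High-pass the along-track range series; the residual's standard
        deviation estimates the per-measurement instrument noise."""
        b, a = butter(order, cutoff_hz / (fs_hz / 2.0), btype="highpass")
        residual = filtfilt(b, a, ranges)
        return np.std(residual)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        t = np.arange(2000)
        signal = 0.5 * np.sin(2 * np.pi * t / 500)   # slow "geophysical" signal
        noise = 0.02 * rng.standard_normal(t.size)   # instrument noise
        print(range_noise_estimate(signal + noise))  # should be close to 0.02
    ```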

  16. Mechanical Harvesting of Aquatic Plants. Report 2. Evaluation of Selected Handling Functions of Mechanical Control.

    DTIC Science & Technology

    1980-06-01

    with the extracted plants. Pusher boats were used to feed the plants into the throat of the conveyor where they were then pulled onto the conveyor by... technique or variations of it that involve extracting from the river periodically on the Withlacoochee River or similar rivers, requires that operations... way to readily estimate the land area required to stockpile the large volumes of material that must be extracted from the water in many operational

  17. Very high-resolution spectroscopy for extremely large telescopes using pupil slicing and adaptive optics.

    PubMed

    Beckers, Jacques M; Andersen, Torben E; Owner-Petersen, Mette

    2007-03-05

    Under seeing-limited conditions, very high-resolution spectroscopy becomes very difficult for extremely large telescopes (ELTs). With adaptive optics (AO), the stellar image size decreases proportionally with increasing telescope diameter, which makes the spectrograph optics, and hence its resolution, independent of the telescope diameter. However, AO for use with ELTs at visible wavelengths requires deformable mirrors with many elements, which are not likely to be available for quite some time. We propose to use the pupil-slicing technique to create a number of sub-pupils, each of which has its own deformable mirror. The images from all sub-pupils are combined incoherently, with a diameter corresponding to the diffraction limit of the sub-pupil. The technique is referred to as "Pupil Slicing Adaptive Optics" or PSAO.

  18. Low-Power Photoplethysmogram Acquisition Integrated Circuit with Robust Light Interference Compensation.

    PubMed

    Kim, Jongpal; Kim, Jihoon; Ko, Hyoungho

    2015-12-31

    To overcome light interference, including a large DC offset and ambient light variation, a robust photoplethysmogram (PPG) readout chip is fabricated using a 0.13-μm complementary metal-oxide-semiconductor (CMOS) process. Against the large DC offset, a saturation detection and current feedback circuit is proposed to compensate for an offset current of up to 30 μA. For robustness against optical path variation, an automatic emitted light compensation method is adopted. To prevent ambient light interference, an alternating sampling and charge redistribution technique is also proposed. In the proposed technique, no additional power is consumed, and only three differential switches and one capacitor are required. The PPG readout channel consumes 26.4 μW and has an input-referred current noise of 260 pArms.

  19. Low-Power Photoplethysmogram Acquisition Integrated Circuit with Robust Light Interference Compensation

    PubMed Central

    Kim, Jongpal; Kim, Jihoon; Ko, Hyoungho

    2015-01-01

    To overcome light interference, including a large DC offset and ambient light variation, a robust photoplethysmogram (PPG) readout chip is fabricated using a 0.13-μm complementary metal–oxide–semiconductor (CMOS) process. Against the large DC offset, a saturation detection and current feedback circuit is proposed to compensate for an offset current of up to 30 μA. For robustness against optical path variation, an automatic emitted light compensation method is adopted. To prevent ambient light interference, an alternating sampling and charge redistribution technique is also proposed. In the proposed technique, no additional power is consumed, and only three differential switches and one capacitor are required. The PPG readout channel consumes 26.4 μW and has an input-referred current noise of 260 pArms. PMID:26729122

  20. Frequency-selective quantitation of short-echo time 1H magnetic resonance spectra

    NASA Astrophysics Data System (ADS)

    Poullet, Jean-Baptiste; Sima, Diana M.; Van Huffel, Sabine; Van Hecke, Paul

    2007-06-01

    Accurate and efficient filtering techniques are required to suppress large nuisance components present in short-echo time magnetic resonance (MR) spectra. This paper discusses two powerful filtering techniques used in long-echo time MR spectral quantitation, the maximum-phase FIR filter (MP-FIR) and the Hankel-Lanczos Singular Value Decomposition with Partial ReOrthogonalization (HLSVD-PRO), and shows that they can be applied to their more complex short-echo time spectral counterparts. Both filters are validated and compared through extensive simulations. Their properties are discussed. In particular, the capability of MP-FIR for dealing with macromolecular components is emphasized. Although this property does not make a large difference for long-echo time MR spectra, it can be important when quantifying short-echo time spectra.

  1. Big Data Analytics with Datalog Queries on Spark.

    PubMed

    Shkapsky, Alexander; Yang, Mohan; Interlandi, Matteo; Chiu, Hsuan; Condie, Tyson; Zaniolo, Carlo

    2016-01-01

    There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics.
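
    The crux named above, efficiently supporting recursion, can be illustrated without Spark. The sketch below shows semi-naive evaluation of the classic transitive-closure Datalog query in plain Python; it illustrates the evaluation strategy only, not BigDatalog's implementation:

    ```python
    # Semi-naive evaluation of the recursive Datalog query
    #   tc(X, Y) <- edge(X, Y).
    #   tc(X, Z) <- tc(X, Y), edge(Y, Z).
    # Each round joins only the newly derived facts (the "delta") against the
    # base relation instead of recomputing everything from scratch.
    def transitive_closure(edges):
        edge_index = {}
        for x, y in edges:
            edge_index.setdefault(x, set()).add(y)

        tc = set(edges)      # all derived tc facts so far
        delta = set(edges)   # facts derived in the previous round
        while delta:
            new_facts = set()
            for x, y in delta:
                for z in edge_index.get(y, ()):
                    if (x, z) not in tc:
                        new_facts.add((x, z))
            tc |= new_facts
            delta = new_facts   # next round propagates only the new facts
        return tc

    print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
    # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
    ```

    In a Spark setting the same loop becomes repeated joins between a delta relation and the base relation, which is exactly where the compilation and optimization techniques mentioned above pay off.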

  2. Big Data Analytics with Datalog Queries on Spark

    PubMed Central

    Shkapsky, Alexander; Yang, Mohan; Interlandi, Matteo; Chiu, Hsuan; Condie, Tyson; Zaniolo, Carlo

    2017-01-01

    There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics. PMID:28626296

  3. Efficient techniques for forced response involving linear modal components interconnected by discrete nonlinear connection elements

    NASA Astrophysics Data System (ADS)

    Avitabile, Peter; O'Callahan, John

    2009-01-01

    Generally, response analysis of systems containing discrete nonlinear connection elements, such as typical mounting connections, requires that the physical finite element system matrices be used in a direct integration algorithm to compute the nonlinear response solution. Due to the large size of these physical matrices, forced nonlinear response analysis requires significant computational resources. Usually, the individual components of the system are analyzed and tested as separate components, and their individual behavior may be essentially linear when compared to the total assembled system. However, joining these linear subsystems with highly nonlinear connection elements causes the entire system to become nonlinear. It would be advantageous if these linear modal subsystems could be utilized in the forced nonlinear response analysis, since much effort has usually been expended in fine-tuning and adjusting the analytical models to reflect the tested subsystem configuration. Several more efficient techniques have been developed to address this class of problem. Three of these techniques, the equivalent reduced model technique (ERMT), the modal modification response technique (MMRT), and the component element method (CEM), are presented in this paper and compared to traditional methods.

  4. Size Reduction of Hamiltonian Matrix for Large-Scale Energy Band Calculations Using Plane Wave Bases

    NASA Astrophysics Data System (ADS)

    Morifuji, Masato

    2018-01-01

    We present a method of reducing the size of the Hamiltonian matrix used in calculations of electronic states. In electronic structure calculations using plane-wave basis functions, a large number of plane waves is often required to obtain precise results, and even with state-of-the-art techniques the Hamiltonian matrix often becomes very large. The large computational time and memory necessary for diagonalization limit the widespread use of band calculations. We present a procedure for deriving a reduced Hamiltonian, constructed from a small number of low-energy bases, by renormalizing the high-energy bases. We demonstrate numerically that a significant speedup in eigenstate evaluation is achieved without losing accuracy.
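
    One standard way to fold high-energy bases into a small effective Hamiltonian is Löwdin partitioning (downfolding); the paper's renormalization scheme may differ in detail, but the idea can be sketched as follows:

    ```python
    # Downfolding sketch: partition H into a low-energy block A and a
    # high-energy block B; the effective Hamiltonian on A alone is
    #   H_eff(E) = H_AA + H_AB (E*I - H_BB)^(-1) H_BA,
    # evaluated here at a fixed reference energy (a common approximation).
    # Illustrative only; not necessarily the paper's exact scheme.
    import numpy as np

    rng = np.random.default_rng(0)
    n, n_low = 40, 8                    # full size, retained low-energy bases

    M = rng.normal(size=(n, n))
    H = (M + M.T) / 2                   # random symmetric "Hamiltonian"
    H += np.diag(np.concatenate([np.zeros(n_low), 10.0 + np.arange(n - n_low)]))
    # ^ push the remaining bases to high energy so the partition is meaningful

    A, B = np.arange(n_low), np.arange(n_low, n)
    H_AA, H_BB = H[np.ix_(A, A)], H[np.ix_(B, B)]
    H_AB, H_BA = H[np.ix_(A, B)], H[np.ix_(B, A)]

    E_ref = 0.0                         # reference energy for the low block
    H_eff = H_AA + H_AB @ np.linalg.solve(E_ref * np.eye(len(B)) - H_BB, H_BA)

    print("lowest exact eigenvalues:     ", np.linalg.eigvalsh(H)[:3])
    print("lowest downfolded eigenvalues:", np.linalg.eigvalsh(H_eff)[:3])
    ```

    Diagonalizing the 8x8 H_eff instead of the 40x40 H is the source of the speedup; the approximation improves as the energy separation between retained and renormalized bases grows.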

  5. Creative user-centered visualization design for energy analysts and modelers.

    PubMed

    Goodwin, Sarah; Dykes, Jason; Jones, Sara; Dillingham, Iain; Dove, Graham; Duffy, Alison; Kachkaev, Alexander; Slingsby, Aidan; Wood, Jo

    2013-12-01

    We enhance a user-centered design process with techniques that deliberately promote creativity, in order to identify opportunities for the visualization of data generated by a major energy supplier. Visualization prototypes developed in this way prove effective in a situation where the data sets are largely unknown and the requirements open, enabling successful exploration of possibilities for visualization in Smart Home data analysis. The process gives rise to novel designs and design metaphors, including data sculpting. It suggests that the deliberate use of creativity techniques with data stakeholders is likely to contribute to successful, novel, and effective solutions; that being explicit about creativity may help designers develop creative solutions; and that using creativity techniques early in the design process may result in a creative approach persisting throughout the process. The work constitutes the first systematic visualization design for a data-rich source that will be increasingly important to energy suppliers and consumers as Smart Meter technology is widely deployed. It is novel in explicitly employing creativity techniques at the requirements stage of visualization design and development, paving the way for further use and study of creativity methods in visualization design.

  6. The Fluorescent-Oil Film Method and Other Techniques for Boundary-Layer Flow Visualization

    NASA Technical Reports Server (NTRS)

    Loving, Donald L.; Katzoff, S.

    1959-01-01

    A flow-visualization technique, known as the fluorescent-oil film method, has been developed which appears to be generally simpler and to require less experience and development of technique than previously published methods. The method is especially adapted to use in the large high-powered wind tunnels which require considerable time to reach the desired test conditions. The method consists of smearing a film of fluorescent oil over a surface and observing where the thickness is affected by the shearing action of the boundary layer. These films are detected and identified, and their relative thicknesses are determined by use of ultraviolet light. Examples are given of the use of this technique. Other methods that show promise in the study of boundary-layer conditions are described. These methods include the use of a temperature-sensitive fluorescent paint and the use of a radiometer that is sensitive to the heat radiation from a surface. Some attention is also given to methods that can be used with a spray apparatus in front of the test model.

  7. Advances in the microrheology of complex fluids

    NASA Astrophysics Data System (ADS)

    Waigh, Thomas Andrew

    2016-07-01

    New developments in the microrheology of complex fluids are considered. First, the requirements for a simple modern particle-tracking microrheology experiment are introduced, along with the associated error-analysis methods and the mathematical techniques required to calculate the linear viscoelasticity. Progress in microrheology instrumentation is then described with respect to detectors, light sources, colloidal probes, magnetic tweezers, optical tweezers, diffusing wave spectroscopy, optical coherence tomography, fluorescence correlation spectroscopy, elastic and quasi-elastic scattering techniques, 3D tracking, single-molecule methods, modern microscopy methods, and microfluidics. New theoretical techniques are also reviewed, such as Bayesian analysis, oversampling, inversion techniques, alternative statistical tools for tracks (angular correlations, first-passage probabilities, the kurtosis, motor-protein step segmentation, etc.), issues in micro/macro rheological agreement, and two-particle methodologies. Applications where microrheology has begun to make an impact are also considered, including semi-flexible polymers, gels, microorganism biofilms, intracellular methods, high-frequency viscoelasticity, comb polymers, active motile fluids, blood clots, colloids, granular materials, polymers, liquid crystals, and foods. Two large emergent areas of microrheology, non-linear microrheology and surface microrheology, are also discussed.

  8. Electrodeposition of organic-inorganic tri-halide perovskites solar cell

    NASA Astrophysics Data System (ADS)

    Charles, U. A.; Ibrahim, M. A.; Teridi, M. A. M.

    2018-02-01

    Perovskite (CH3NH3PbI3) semiconductor materials are promising high-performance light absorbers for solar cell applications. However, the power conversion efficiency of a perovskite solar cell is severely affected by the surface quality of the deposited thin film. Spin coating is a low-cost and widely used deposition technique for perovskite solar cells. Notably, films deposited by spin coating can evolve surface hydroxides and suffer defects from uncontrolled precipitation and inter-diffusion reactions. Alternatively, the vapor deposition (VD) method produces uniform thin films but requires precise control of complex thermodynamic parameters, which makes the technique unsuitable for large-scale production. Most deposition techniques for perovskites require tedious surface optimization to improve the surface quality of the deposits; such optimization is necessary to significantly improve device structure and electrical output. In this review, electrodeposition of perovskite solar cells is presented as a scalable and reproducible technique for fabricating uniform and smooth thin-film surfaces that circumvents the need for a high-vacuum environment. Electrodeposition is achieved at low temperatures and supports precise control and optimization of the deposits for efficient charge transfer.

  9. UNCERTAINTY ON RADIATION DOSES ESTIMATED BY BIOLOGICAL AND RETROSPECTIVE PHYSICAL METHODS.

    PubMed

    Ainsbury, Elizabeth A; Samaga, Daniel; Della Monaca, Sara; Marrale, Maurizio; Bassinet, Celine; Burbidge, Christopher I; Correcher, Virgilio; Discher, Michael; Eakins, Jon; Fattibene, Paola; Güçlü, Inci; Higueras, Manuel; Lund, Eva; Maltar-Strmecki, Nadica; McKeever, Stephen; Rääf, Christopher L; Sholom, Sergey; Veronese, Ivan; Wieser, Albrecht; Woda, Clemens; Trompier, Francois

    2018-03-01

    Biological and physical retrospective dosimetry are recognised as key techniques to provide individual estimates of dose following unplanned exposures to ionising radiation. Whilst there has been a relatively large amount of recent development in the biological and physical procedures, development of statistical analysis techniques has failed to keep pace. The aim of this paper is to review the current state of the art in uncertainty analysis techniques across the 'EURADOS Working Group 10-Retrospective dosimetry' members, to give concrete examples of implementation of the techniques recommended in the international standards, and to further promote the use of Monte Carlo techniques to support characterisation of uncertainties. It is concluded that sufficient techniques are available and in use by most laboratories for acute, whole body exposures to highly penetrating radiation, but further work will be required to ensure that statistical analysis is always wholly sufficient for the more complex exposure scenarios.
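
    To make the recommended Monte Carlo approach concrete, the sketch below propagates uncertainty from both a calibration coefficient and a Poisson-distributed observation into a dose estimate. All numbers and the simple linear calibration Y = alpha * D are invented for illustration; real biodosimetry calibrations are typically more elaborate (e.g., linear-quadratic):

    ```python
    # Illustrative Monte Carlo characterisation of dose uncertainty.
    # Assumes an invented linear calibration Y = alpha * D with a normally
    # distributed coefficient and a Poisson-distributed observed count.
    import numpy as np

    rng = np.random.default_rng(42)
    N = 100_000

    alpha_mean, alpha_sd = 0.05, 0.005    # hypothetical signal yield per mGy
    observed_count, n_cells = 120, 1000   # hypothetical scored aberrations

    alpha = rng.normal(alpha_mean, alpha_sd, N)   # calibration uncertainty
    counts = rng.poisson(observed_count, N)       # counting statistics

    dose = counts / n_cells / alpha               # invert Y = alpha * D

    lo, med, hi = np.percentile(dose, [2.5, 50, 97.5])
    print(f"dose: {med:.2f} mGy (95% interval {lo:.2f}-{hi:.2f} mGy)")
    ```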

  10. On the use of distributed sensing in control of large flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Montgomery, Raymond C.; Ghosh, Dave

    1990-01-01

    Distributed processing technology is being developed to process signals from distributed sensors using distributed computations. This work presents a scheme for calculating the operators required to emulate a conventional Kalman filter and regulator using such a computer. The scheme makes use of conventional Kalman theory as applied to the control of large flexible structures. The computation of the distributed operators, given the conventional Kalman filter and regulator, is explained. A straightforward application of this scheme may lead to nonsmooth operators whose convergence is not apparent. This is illustrated by application to the Mini-Mast, a large flexible truss at the Langley Research Center used for research in structural dynamics and control. Techniques for developing smooth operators are presented. These involve spatial filtering as well as adjusting the design constants in the Kalman theory. Results are presented that illustrate the degree of smoothness achieved.

  11. Radius of Curvature Measurement of Large Optics Using Interferometry and Laser Tracker

    NASA Technical Reports Server (NTRS)

    Hagopian, John; Connelly, Joseph

    2011-01-01

    The determination of the radius of curvature (ROC) of an optic typically uses a phase-measuring interferometer on an adjustable stage to locate the positions of the ROC and of the optic's surface under test; alternatively, a spherometer or a profilometer is used for this measurement. The difficulty of this approach is that, for large optics, translation of the interferometer or of the optic under test is problematic because of the distance of translation required and the mass of the optic. Profilometry and spherometry are alternative techniques that can work, but they require a profilometer or a measurement of subapertures of the optic. The proposed approach allows a measurement of the optic's figure simultaneously with the full-aperture radius of curvature.

  12. Instrumentation for localized superconducting cavity diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conway, Z. A.; Ge, M.; Iwashita, Y.

    2017-01-12

    Superconducting accelerator cavities are now routinely operated at levels approaching the theoretical limit of niobium. To achieve these operating levels, more information than is available from the RF excitation signal is required to characterize, and determine fixes for, the sources of performance limitations. This information is obtained using diagnostic techniques that complement the analysis of the RF signal. In this paper we describe the operation of, and selected results from, three of these diagnostic techniques: large-scale thermometer arrays, second-sound-wave defect location, and high-precision cavity imaging with the Kyoto camera.

  13. Space Construction Automated Fabrication Experiment Definition Study (SCAFEDS). Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The techniques, processes, and equipment required for automatic fabrication and assembly of structural elements in space using the space shuttle as a launch vehicle and construction base were investigated. Additional construction/systems/operational techniques, processes, and equipment which can be developed/demonstrated in the same program to provide further risk reduction benefits to future large space systems were included. Results in the areas of structure/materials, fabrication systems (beam builder, assembly jig, and avionics/controls), mission integration, and programmatics are summarized. Conclusions and recommendations are given.

  14. Improving data quality in neuronal population recordings

    PubMed Central

    Harris, Kenneth D.; Quian Quiroga, Rodrigo; Freeman, Jeremy; Smith, Spencer

    2017-01-01

    Understanding how the brain operates requires understanding how large sets of neurons function together. Modern recording technology makes it possible to simultaneously record the activity of hundreds of neurons, and technological developments will soon allow recording of thousands or tens of thousands. As with all experimental techniques, these methods are subject to confounds that complicate the interpretation of such recordings, and could lead to erroneous scientific conclusions. Here, we discuss methods for assessing and improving the quality of data from these techniques, and outline likely future directions in this field. PMID:27571195

  15. Acceleration techniques for dependability simulation. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Barnette, James David

    1995-01-01

    As computer systems increase in complexity, the need to project system performance from the earliest design and development stages increases. We have to employ simulation for detailed dependability studies of large systems. However, as the complexity of the simulation model increases, the time required to obtain statistically significant results also increases. This paper discusses an approach that is application independent and can be readily applied to any process-based simulation model. Topics include background on classical discrete event simulation and techniques for random variate generation and statistics gathering to support simulation.
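
    As a concrete instance of the random variate generation the thesis reviews, the inverse-transform method is the textbook technique for drawing the exponential inter-arrival times that drive discrete event simulations. A minimal sketch (illustrative, not the thesis's code):

    ```python
    # Inverse-transform sampling: for an exponential inter-arrival time with
    # rate lam, F(x) = 1 - exp(-lam*x), so x = -ln(1 - u)/lam for u ~ U(0,1).
    import math
    import random

    def exponential_variate(lam):
        u = random.random()
        return -math.log(1.0 - u) / lam

    if __name__ == "__main__":
        random.seed(1)
        samples = [exponential_variate(2.0) for _ in range(100_000)]
        print(sum(samples) / len(samples))   # close to the mean 1/lam = 0.5
    ```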

  16. Euthanasia

    USGS Publications Warehouse

    Franson, J.C.

    1999-01-01

    Euthanasia means to cause humane death. Some current euthanasia techniques may become unacceptable over time and be replaced by new techniques as more data are gathered and evaluated. The following information and recommendations are based largely on the 1993 report of the American Veterinary Medical Association (AVMA) Panel on Euthanasia. The recommendations in the panel report were intended to serve as guidelines, and they require the use of professional judgement for specific situations. Ultimately, it is the responsibility of those persons carrying out euthanasia to assure that it is done in the most humane manner possible.

  17. A survey of techniques for architecting and managing GPU register file

    DOE PAGES

    Mittal, Sparsh

    2016-04-07

    To support their massively multithreaded architecture, GPUs use a very large register file (RF), which has a capacity higher than even the L1 and L2 caches. In complete contrast, traditional CPUs use a tiny RF and much larger caches to optimize latency. Due to these differences, along with the crucial impact of the RF on GPU performance, novel and intelligent techniques are required for managing the GPU RF. In this paper, we survey the techniques for designing and managing the GPU RF. We discuss techniques related to the performance, energy, and reliability aspects of the RF. To emphasize the similarities and differences between the techniques, we classify them along several parameters. Lastly, the aim of this paper is to synthesize the state-of-the-art developments in RF management and also to stimulate further research in this area.

  18. ELM mitigation techniques

    NASA Astrophysics Data System (ADS)

    Evans, T. E.

    2013-07-01

    Large edge-localized mode (ELM) control techniques must be developed to help ensure the success of burning and ignited fusion plasma devices such as tokamaks and stellarators. In full-performance ITER tokamak discharges, with Q_DT = 10, the energy released by a single ELM could reach ~30 MJ, which is expected to result in an energy density of 10-15 MJ/m^2 on the divertor targets. This will exceed the estimated divertor ablation limit by a factor of 20-30. A worldwide research program is underway to develop various types of ELM control techniques in preparation for ITER H-mode plasma operations. An overview of the ELM control techniques currently being developed is given, along with the requirements for applying these techniques to plasmas in ITER. Particular emphasis is given to the primary approaches currently being considered for ITER: pellet pacing and resonant magnetic perturbation fields.

  19. A survey of techniques for architecting and managing GPU register file

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh

    To support their massively multithreaded architecture, GPUs use a very large register file (RF), which has a capacity higher than even the L1 and L2 caches. In complete contrast, traditional CPUs use a tiny RF and much larger caches to optimize latency. Due to these differences, along with the crucial impact of the RF on GPU performance, novel and intelligent techniques are required for managing the GPU RF. In this paper, we survey the techniques for designing and managing the GPU RF. We discuss techniques related to the performance, energy, and reliability aspects of the RF. To emphasize the similarities and differences between the techniques, we classify them along several parameters. Lastly, the aim of this paper is to synthesize the state-of-the-art developments in RF management and also to stimulate further research in this area.

  20. Simultaneous multigrid techniques for nonlinear eigenvalue problems: Solutions of the nonlinear Schrödinger-Poisson eigenvalue problem in two and three dimensions

    NASA Astrophysics Data System (ADS)

    Costiner, Sorin; Ta'asan, Shlomo

    1995-07-01

    Algorithms for nonlinear eigenvalue problems (EP's) often require solving self-consistently a large number of EP's. Convergence difficulties may occur if the solution is not sought in an appropriate region, if global constraints have to be satisfied, or if close or equal eigenvalues are present. Multigrid (MG) algorithms for nonlinear problems and for EP's obtained from discretizations of partial differential EP's have often been shown to be more efficient than single-level algorithms. This paper presents MG techniques and an MG algorithm for nonlinear Schrödinger-Poisson EP's. The algorithm overcomes the above-mentioned difficulties by combining the following techniques: an MG simultaneous treatment of the eigenvectors, the nonlinearity, and the global constraints; MG stable subspace continuation techniques for the treatment of the nonlinearity; and an MG projection coupled with backrotations for the separation of solutions. These techniques keep the solutions in an appropriate region, where the algorithm converges fast, and reduce the large number of self-consistent iterations to only a few or to one MG simultaneous iteration. The MG projection makes it possible to efficiently overcome difficulties related to clusters of close and equal eigenvalues. Computational examples for the nonlinear Schrödinger-Poisson EP in two and three dimensions, which present special computational difficulties due to the nonlinearity and to the equal and closely clustered eigenvalues, are demonstrated. For these cases, the algorithm requires O(qN) operations for the calculation of q eigenvectors of size N and for the corresponding eigenvalues. One MG simultaneous cycle per fine level was performed. The total computational cost is equivalent to only a few Gauss-Seidel relaxations per eigenvector. An asymptotic convergence rate of 0.15 per MG cycle is attained.

  1. A Ruby in the Rubbish: Beneficial Mutations, Deleterious Mutations and the Evolution of Sex

    PubMed Central

    Peck, J. R.

    1994-01-01

    This study presents a mathematical model in which a single beneficial mutation arises in a very large population that is subject to frequent deleterious mutations. The results suggest that, if the population is sexual, then the deleterious mutations will have little effect on the ultimate fate of the beneficial mutation. However, if most offspring are produced asexually, then the probability that the beneficial mutation will be lost from the population may be greatly enhanced by the deleterious mutations. Thus, sexual populations may adapt much more quickly than populations where most reproduction is asexual. Some of the results were produced using computer simulation methods, and a technique was developed that allows treatment of arbitrarily large numbers of individuals in a reasonable amount of computer time. This technique may prove useful for the analysis of a wide variety of models, though there are some constraints on its applicability. For example, the technique requires that reproduction can be described by Poisson processes. PMID:8070669

  2. Anticipation of the landing shock phenomenon in flight simulation

    NASA Technical Reports Server (NTRS)

    Mcfarland, Richard E.

    1987-01-01

    An aircraft landing may be described as a controlled crash because a runway surface is intercepted. In a simulation model, the transition from aerodynamic flight to weight on wheels involves a single computational cycle during which stiff differential equations are activated; with significant probability, the resulting initial conditions are unrealistic. This occurs because of the finite cycle time, during which large restorative forces accompany unrealistic initial oleo compressions. This problem was recognized a few years ago at Ames Research Center during simulation studies of a supersonic transport. The mathematical model of this vehicle severely taxed computational resources and required a large cycle time. The ground-strike problem was solved by a technique, described here, called anticipation equations. This extensively used technique has not been previously reported. The technique of anticipating a significant event is a useful tool in the general field of discrete flight simulation. For the differential equations representing a landing gear model, stiffness, rate of interception, and cycle time may combine to produce an unrealistic simulation of the continuum.

  3. Research on subsurface defects of potassium dihydrogen phosphate crystals fabricated by single point diamond turning technique

    NASA Astrophysics Data System (ADS)

    Tie, Guipeng; Dai, Yifan; Guan, Chaoliang; Chen, Shaoshan; Song, Bing

    2013-03-01

    Potassium dihydrogen phosphate (KDP) crystals, which are widely used in high-power laser systems, are required to be free of defects on fabricated subsurfaces. The depth of subsurface defects (SSD) in KDP crystals is significantly influenced by the parameters used in the single-point diamond turning technique. In this paper, based on the deliquescent magnetorheological finishing technique, the SSD of KDP crystals is observed, and the depths under various cutting parameters are measured and discussed. The results indicate that no SSD is generated at small cutting parameters; as the cutting parameters increase, SSD appears and its depth rises almost linearly. Although the ascending trends of SSD depth caused by cutting depth and by feed rate are much alike, the two parameters make different contributions. Taking the same material removal efficiency as a criterion, a large cutting depth generates a shallower SSD depth than a large feed rate. Based on the experimental results, an optimized cutting procedure is obtained to generate defect-free surfaces.

  4. Inquiry-based experiments for large-scale introduction to PCR and restriction enzyme digests.

    PubMed

    Johanson, Kelly E; Watt, Terry J

    2015-01-01

    Polymerase chain reaction and restriction endonuclease digest are important techniques that should be included in all Biochemistry and Molecular Biology laboratory curriculums. These techniques are frequently taught at an advanced level, requiring many hours of student and faculty time. Here we present two inquiry-based experiments that are designed for introductory laboratory courses and combine both techniques. In both approaches, students must determine the identity of an unknown DNA sequence, either a gene sequence or a primer sequence, based on a combination of PCR product size and restriction digest pattern. The experimental design is flexible, and can be adapted based on available instructor preparation time and resources, and both approaches can accommodate large numbers of students. We implemented these experiments in our courses with a combined total of 584 students and have an 85% success rate. Overall, students demonstrated an increase in their understanding of the experimental topics, ability to interpret the resulting data, and proficiency in general laboratory skills. © 2015 The International Union of Biochemistry and Molecular Biology.
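
    The identification logic the students apply can be captured in a few lines: a candidate matches when both the PCR product size and the digest fragment sizes agree with the observed gel bands within measurement tolerance. The candidate data below are invented for illustration:

    ```python
    # Toy matcher: identify an unknown by PCR product size plus restriction
    # digest fragment pattern (all sizes in base pairs; data are invented).
    CANDIDATES = {
        "geneA": {"pcr_size": 850, "fragments": [500, 350]},
        "geneB": {"pcr_size": 850, "fragments": [600, 250]},
        "geneC": {"pcr_size": 620, "fragments": [620]},   # no cut site
    }

    def identify(pcr_size, fragments, tolerance=25):
        """Return candidates consistent with the observed gel bands."""
        def close(a, b):
            return abs(a - b) <= tolerance
        hits = []
        for name, data in CANDIDATES.items():
            if not close(pcr_size, data["pcr_size"]):
                continue
            obs, exp = sorted(fragments), sorted(data["fragments"])
            if len(obs) == len(exp) and all(close(o, e) for o, e in zip(obs, exp)):
                hits.append(name)
        return hits

    print(identify(860, [510, 340]))   # -> ['geneA']
    ```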

  5. Development of beryllium honeycomb sandwich composite for structural and other related applications

    NASA Technical Reports Server (NTRS)

    Vogan, J. W.; Grant, L. A.

    1972-01-01

    The feasibility of fabricating large beryllium honeycomb panels was demonstrated. Both flat and curved sandwich structures were manufactured using practical braze-bonding techniques. The processes developed prove that metallurgically assembled beryllium honeycomb panels show decided potential where rigid, lightweight structures are required. Three panels, each 10 square feet in surface area, were fabricated and radiographically inspected to determine integrity. This examination revealed a 97 percent braze in the final panel. It is believed that ceramic dies for forming and brazing would facilitate the fabrication techniques at higher production rates. Ceramic dies would yield a lower thermal gradient in the panel during the braze cycle, which would eliminate the small amount of face-sheet wrinkling present in the panels. Hot forming the various panel components demonstrated efficient manufacturing techniques for scaling up and producing large numbers of hot-formed beryllium components and panels. The beryllium honeycomb panel demonstrated very good vibrational loading characteristics under test, with desirable damping characteristics.

  6. Development of a Two-Stage Microalgae Dewatering Process – A Life Cycle Assessment Approach

    PubMed Central

    Soomro, Rizwan R.; Zeng, Xianhai; Lu, Yinghua; Lin, Lu; Danquah, Michael K.

    2016-01-01

    Even though microalgal biomass is leading third-generation biofuel research, significant effort is required to establish an economically viable commercial-scale microalgal biofuel production system. Whilst a significant amount of work has been reported on large-scale cultivation of microalgae using photo-bioreactors and pond systems, research focused on establishing high-performance downstream dewatering operations for large-scale processing under optimal economy is limited. The enormous amount of energy and associated cost required to dewater large-volume microalgal cultures has been the primary hindrance to the development of the biomass quantities needed for industrial-scale microalgal biofuel production. The extremely dilute nature of large-volume microalgal suspensions and the small size of microalgae cells create significant processing costs during dewatering, and this has raised major concerns about the economic viability of commercial-scale microalgal biofuel production as an alternative to conventional petroleum fuels. This article reports an effective framework to assess the performance of different dewatering technologies as the basis for establishing an effective two-stage dewatering system. Bioflocculation coupled with tangential flow filtration (TFF) emerged as a promising technique, with a total energy input of 0.041 kWh, 0.05 kg of CO2 emissions, and a cost of $0.0043 for producing 1 kg of microalgae biomass. A streamlined process for operational analysis of the two-stage microalgae dewatering technique, encompassing energy input, carbon dioxide emission, and process cost, is presented. PMID:26904075

  7. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.

    PubMed

    Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T

    2017-01-01

    Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.

  8. Indirect addressing and load balancing for faster solution to Mandelbrot Set on SIMD architectures

    NASA Technical Reports Server (NTRS)

    Tomboulian, Sherryl

    1989-01-01

    SIMD computers with local indirect addressing allow programs to have queues and buffers, making certain kinds of problems much more efficient. Examined here is a class of problems characterized by computations on data points where the computation is identical but the convergence rate is data-dependent. Normally, in this situation, the algorithm time is governed by the maximum number of iterations required by any point. Using indirect addressing allows a processor to proceed to the next data point when it is done, reducing the overall number of iterations required to approach the mean convergence rate when a sufficiently large problem set is solved. Load-balancing techniques can be applied for additional performance improvement. Simulations of this technique applied to solving Mandelbrot sets indicate significant performance gains.
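
    The contrast between lockstep iteration and indirect addressing can be sketched with a work queue: each worker pulls the next point as soon as its current point escapes, so no one idles waiting for the slowest point. Plain Python threads stand in for SIMD processors here; this illustrates the scheduling idea only, not the paper's SIMD implementation:

    ```python
    # Load-balanced Mandelbrot: workers pull points from a shared queue as soon
    # as their current point converges, instead of iterating in lockstep.
    from queue import Queue, Empty
    from threading import Thread

    MAX_ITER = 1000

    def escape_time(c, max_iter=MAX_ITER):
        """Mandelbrot iteration count for point c (data-dependent cost)."""
        z = 0j
        for n in range(max_iter):
            z = z * z + c
            if abs(z) > 2.0:
                return n
        return max_iter

    def worker(queue, results):
        while True:
            try:
                idx, c = queue.get_nowait()
            except Empty:
                return
            results[idx] = escape_time(c)   # move on as soon as we are done

    def render(points, n_workers=4):
        queue, results = Queue(), [0] * len(points)
        for item in enumerate(points):
            queue.put(item)
        threads = [Thread(target=worker, args=(queue, results))
                   for _ in range(n_workers)]
        for t in threads: t.start()
        for t in threads: t.join()
        return results

    grid = [complex(x / 40 - 2, y / 40 - 1) for y in range(80) for x in range(120)]
    print(sum(render(grid)))   # total iterations actually performed
    ```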

  9. Deposition of zinc sulfide thin films by chemical bath process

    NASA Astrophysics Data System (ADS)

    Oladeji, Isaiah O.; Chow, Lee

    1996-11-01

    Deposition of high-quality zinc sulfide (ZnS) thin films over large areas is required if ZnS is to be used effectively in electroluminescent devices, solar cells, and other optoelectronic devices. Of all deposition techniques, chemical bath deposition (CBD) is the least costly technique that meets the above requirements. Recently it was found that the CBD growth of ZnS films thinner than 100 nm in a single dip is facilitated by the use of ammonia and hydrazine as complexing agents. Here we report that the thickness of the deposited ZnS film can be increased if an ammonium salt is used as a buffer. We also present an analytical study to explain our results and to further understand the ZnS growth process in CBD.

  10. Active Mirror Predictive and Requirements Verification Software (AMP-ReVS)

    NASA Technical Reports Server (NTRS)

    Basinger, Scott A.

    2012-01-01

    This software is designed to predict large active mirror performance at various stages in the fabrication lifecycle of the mirror. It was developed for 1-meter class powered mirrors for astronomical purposes, but is extensible to other geometries. The package accepts finite element model (FEM) inputs and laboratory measured data for large optical-quality mirrors with active figure control. It computes phenomenological contributions to the surface figure error using several built-in optimization techniques. These phenomena include stresses induced in the mirror by the manufacturing process and the support structure, the test procedure, high spatial frequency errors introduced by the polishing process, and other process-dependent deleterious effects due to light-weighting of the mirror. Then, depending on the maturity of the mirror, it either predicts the best surface figure error that the mirror will attain, or it verifies that the requirements for the error sources have been met once the best surface figure error has been measured. The unique feature of this software is that it ties together physical phenomenology with wavefront sensing and control techniques and various optimization methods including convex optimization, Kalman filtering, and quadratic programming to both generate predictive models and to do requirements verification. This software combines three distinct disciplines: wavefront control, predictive models based on FEM, and requirements verification using measured data in a robust, reusable code that is applicable to any large optics for ground and space telescopes. The software also includes state-of-the-art wavefront control algorithms that allow closed-loop performance to be computed. It allows for quantitative trade studies to be performed for optical systems engineering, including computing the best surface figure error under various testing and operating conditions. After the mirror manufacturing process and testing have been completed, the software package can be used to verify that the underlying requirements have been met.

  11. A Large number of fast cosmological simulations

    NASA Astrophysics Data System (ADS)

    Koda, Jun; Kazin, E.; Blake, C.

    2014-01-01

    Mock galaxy catalogs are essential tools for analyzing large-scale structure data. Many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain new, improved baryon acoustic oscillation (BAO) cosmic distance measurements using the density-field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600/h Mpc on a side, which is the minimum requirement set by the survey volume and the observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes on 216 computing cores. We have completed the 3600 simulations in a reasonable computation time of about 200k core-hours (3600 runs x 0.25 h x 216 cores = 194,400 core-hours). We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.

  12. Middle Atmosphere Program. Handbook for MAP, volume 9

    NASA Technical Reports Server (NTRS)

    Bowhill, S. A. (Editor); Edwards, B. (Editor)

    1983-01-01

    The term Mesosphere-Stratosphere-Troposphere radar (MST) was invented to describe the use of a high power radar transmitter together with a large vertically, or near vertically, pointing antenna to study the dynamics and structure of the atmosphere from about 10 to 100 km, using the very weak coherently scattered radiation returned from small scale irregularities in refractive index. Nine topics were addressed including: meteorological and dynamic requirements for MST radar networks; interpretation of radar returns for clear air; techniques for the measurement of horizontal and vertical velocities; techniques for studying gravity waves and turbulence; capabilities and limitations of existing MST radar; design considerations for high power VHF radar transceivers; optimum radar antenna configurations; and data analysis techniques.

  13. Ultra-low field nuclear magnetic resonance and magnetic resonance imaging to discriminate and identify materials

    DOEpatents

    Kraus, Robert H.; Matlashov, Andrei N.; Espy, Michelle A.; Volegov, Petr L.

    2010-03-30

    An ultra-low magnetic field NMR system can non-invasively examine containers. Database matching techniques can then identify hazardous materials within the containers. Ultra-low field NMR systems are ideal for this purpose because they do not require large powerful magnets and because they can examine materials enclosed in conductive shells such as lead shells. The NMR examination technique can be combined with ultra-low field NMR imaging, where an NMR image is obtained and analyzed to identify target volumes. Spatial sensitivity encoding can also be used to identify target volumes. After the target volumes are identified the NMR measurement technique can be used to identify their contents.

  14. Designing for Change: Minimizing the Impact of Changing Requirements in the Later Stages of a Spaceflight Software Project

    NASA Technical Reports Server (NTRS)

    Allen, B. Danette

    1998-01-01

    In the traditional 'waterfall' model of the software project life cycle, the Requirements Phase ends and flows into the Design Phase, which ends and flows into the Development Phase. Unfortunately, the process rarely, if ever, works so smoothly in practice. Instead, software developers often receive new requirements, or modifications to the original requirements, well after the earlier project phases have been completed. In particular, projects with shorter than ideal schedules are highly susceptible to frequent requirements changes, as the software requirements analysis phase is often forced to begin before the overall system requirements and top-level design are complete. This results in later modifications to the software requirements, even though the software design and development phases may be complete. Requirements changes received in the later stages of a software project inevitably lead to modification of existing developed software. Presented here is a series of software design techniques that can greatly reduce the impact of last-minute requirements changes. These techniques were successfully used to add built-in flexibility to two complex software systems in which the requirements were expected to (and did) change frequently. These large, real-time systems were developed at NASA Langley Research Center (LaRC) to test and control the Lidar In-Space Technology Experiment (LITE) instrument which flew aboard the space shuttle Discovery as the primary payload on the STS-64 mission.

  15. Feature combinations and the divergence criterion

    NASA Technical Reports Server (NTRS)

    Decell, H. P., Jr.; Mayekar, S. M.

    1976-01-01

    Classifying large quantities of multidimensional remotely sensed agricultural data requires efficient and effective classification techniques and the construction of certain transformations of a dimension-reducing, information-preserving nature. The construction of transformations that minimally degrade information (i.e., class separability) is described. Linear dimension-reducing transformations for multivariate normal populations are presented. Information content is measured by divergence.
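
    For reference, the divergence between two multivariate normal classes N(mu_i, Sigma_i) is conventionally defined as the symmetrized Kullback-Leibler divergence; this standard form (supplied here for context, not quoted from the record) is

    \[
    D_{12} = \tfrac{1}{2}\,\mathrm{tr}\!\left[(\Sigma_1 - \Sigma_2)\left(\Sigma_2^{-1} - \Sigma_1^{-1}\right)\right]
           + \tfrac{1}{2}\,\mathrm{tr}\!\left[\left(\Sigma_1^{-1} + \Sigma_2^{-1}\right)(\mu_1 - \mu_2)(\mu_1 - \mu_2)^{\mathsf{T}}\right],
    \]

    and a dimension-reducing linear map B is chosen to maximize the divergence computed from the transformed statistics B\mu_i and B\Sigma_i B^{\mathsf{T}}.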

  16. An Eye Oximeter for Combat Casualty Care

    DTIC Science & Technology

    1999-12-01

    monitoring (e.g., retinitis pigmentosa, retinal detachment, macular degeneration, diabetic retinopathy, etc.). Recently, scanning laser microscopy...and UAH is generating instrumentation and scientific data suggesting that retinal vessel oxygen saturations (both arterial and venous) may be used...and developing the techniques required to accurately measure retinal large vessel oxygen saturations. As this work is being accomplished we have used

  17. Overcoming the Glassy-Eyed Nod: An Application of Process-Oriented Guided Inquiry Learning Techniques in Information Technology

    ERIC Educational Resources Information Center

    Myers, Trina; Monypenny, Richard; Trevathan, Jarrod

    2012-01-01

    Two significant problems faced by universities are to ensure sustainability and to produce quality graduates. Four aspects of these problems are to improve engagement, to foster interaction, to develop required skills, and to effectively gauge the level of attention and comprehension within lectures and large tutorials. Process-Oriented Guided Inquiry…

  18. The use of PacBio and Hi-C data in de novo assembly of the goat genome

    USDA-ARS?s Scientific Manuscript database

    Generating de novo reference genome assemblies for non-model organisms is a laborious task that often requires a large amount of data from several sequencing platforms and cytogenetic surveys. By using PacBio sequence data and new library creation techniques, we present a de novo, high quality refer...

  19. A Simplified Method for Tissue Engineering Skeletal Muscle Organoids in Vitro

    NASA Technical Reports Server (NTRS)

    Shansky, Janet; DelTatto, Michael; Chromiak, Joseph; Vandenburgh, Herman

    1996-01-01

    Tissue-engineered three dimensional skeletal muscle organ-like structures have been formed in vitro from primary myoblasts by several different techniques. This report describes a simplified method for generating large numbers of muscle organoids from either primary embryonic avian or neonatal rodent myoblasts, which avoids the requirement for stretching and other mechanical stimulation.

  20. A Distributive, Non-Destructive, Real-Time Approach to Snowpack Monitoring

    NASA Technical Reports Server (NTRS)

    Frolik, Jeff; Skalka, Christian

    2012-01-01

    This invention is designed to ascertain the snow water equivalence (SWE) of snowpacks with better spatial and temporal resolution than present techniques. The approach is ground-based, as opposed to some techniques that are air-based. In addition, the approach is compact and non-destructive, and it can be communicated with remotely, so it can be deployed in areas inaccessible to current methods. Presently there are two principal ground-based techniques for obtaining SWE measurements. The first is manual snow core measurement of the snowpack. This approach is labor-intensive, destructive, and has poor temporal resolution. The second approach is to deploy a large (e.g., 3x3 m) snow pillow, which requires significant infrastructure, is potentially hazardous [it uses an approximately 200-gallon (760-L) antifreeze-filled bladder], and requires deployment in a large, flat area. High deployment costs necessitate few installations, thus yielding poor spatial resolution of data. Both approaches have limited usefulness in complex and/or avalanche-prone terrain. The present approach is compact, non-destructive to the snowpack, provides high-temporal-resolution data, and, owing to its potentially low cost, can be deployed with high spatial resolution. The invention consists of three primary components: a robust wireless network and computing platform designed for harsh climates, new SWE sensing strategies, and algorithms for smart sampling, data logging, and SWE computation.

  1. Efficient 3D inversions using the Richards equation

    NASA Astrophysics Data System (ADS)

    Cockett, Rowan; Heagy, Lindsey J.; Haber, Eldad

    2018-07-01

    Fluid flow in the vadose zone is governed by the Richards equation; it is parameterized by hydraulic conductivity, which is a nonlinear function of pressure head. Investigations in the vadose zone typically require characterizing distributed hydraulic properties. Water content or pressure head data may include direct measurements made from boreholes. Increasingly, proxy measurements from hydrogeophysics are being used to supply more spatially and temporally dense data sets. Inferring hydraulic parameters from such datasets requires the ability to efficiently solve and optimize the nonlinear time domain Richards equation. This is particularly important as the number of parameters to be estimated in a vadose zone inversion continues to grow. In this paper, we describe an efficient technique to invert for distributed hydraulic properties in 1D, 2D, and 3D. Our technique does not store the Jacobian matrix, but rather computes its product with a vector. Existing literature on Richards equation inversion explicitly calculates the sensitivity matrix using finite differences or automatic differentiation; however, for large-scale problems these methods are constrained by computation and/or memory. Using an implicit sensitivity algorithm enables large-scale inversion problems for any distributed hydraulic parameters in the Richards equation to become tractable on modest computational resources. We provide an open source implementation of our technique based on the SimPEG framework, and show it in practice for a 3D inversion of saturated hydraulic conductivity using water content data through time.
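
    As a minimal sketch of the matrix-free idea described above (not the SimPEG implementation itself), a regularized Gauss-Newton step can be taken using only sensitivity-times-vector and adjoint-times-vector routines; here a small dense stand-in matrix plays the role of the sensitivity so the example runs:

```python
# Sketch of a matrix-free Gauss-Newton step in the spirit of the implicit
# sensitivity approach described above (not the SimPEG implementation).
# Jv and JTv stand in for routines that apply the sensitivity of the
# forward simulation and its adjoint; a dense matrix is wrapped here only
# so that the example runs.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n_data, n_model = 50, 200
J = rng.standard_normal((n_data, n_model))   # stand-in sensitivity

def Jv(v):
    return J @ v                             # sensitivity-times-vector

def JTv(w):
    return J.T @ w                           # adjoint-times-vector

beta = 1e-2                                  # regularization weight
H = LinearOperator((n_model, n_model),
                   matvec=lambda v: JTv(Jv(v)) + beta * v)

residual = rng.standard_normal(n_data)       # stand-in for d_obs - d_pred
step, info = cg(H, JTv(residual))            # solves without forming J^T J
assert info == 0
```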

  2. GOES-R SUVI EUV Flatfields Generated Using Boustrophedon Scans

    NASA Astrophysics Data System (ADS)

    Shing, L.; Edwards, C.; Mathur, D.; Vasudevan, G.; Shaw, M.; Nwachuku, C.

    2017-12-01

    The Solar Ultraviolet Imager (SUVI) is mounted on the Solar Pointing Platform (SPP) of the Geostationary Operational Environmental Satellite, GOES-R. SUVI is a generalized Cassegrain telescope with a large field of view that employs multilayer coatings optimized to operate in six extreme ultraviolet (EUV) narrow bandpasses centered at 9.4, 13.1, 17.1, 19.5, 28.4 and 30.4 nm. The SUVI CCD flatfield response was determined using two different techniques: the Kuhn-Lin-Lorentz (KLL) raster and a new technique called Dynamic Boustrophedon Scans. The new technique requires less time to collect the data and is also less sensitive to solar features than the KLL method. This paper presents the SUVI flatfield results obtained with this technique during Post Launch Testing (PLT).

  3. User modeling techniques for enhanced usability of OPSMODEL operations simulation software

    NASA Technical Reports Server (NTRS)

    Davis, William T.

    1991-01-01

    The PC-based OPSMODEL operations software for modeling and simulating space station crew activities supports engineering and cost analyses and operations planning. Using top-down modeling, the level of detail required in the database can be limited to be commensurate with the results required of any particular analysis. To perform a simulation, a resource environment consisting of locations, crew definition, equipment, and consumables is first defined. Activities to be simulated are then defined as operations and scheduled as desired. These operations are defined within a 1000-level priority structure. A simulation in OPSMODEL thus consists of user-defined, user-scheduled operations executing within an environment of user-defined resource and priority constraints. Techniques for prioritizing operations to realistically model a representative daily scenario of on-orbit space station crew activities are discussed. The large number of priority levels allows priorities to be assigned commensurate with the detail necessary for a given simulation. Several techniques for realistic modeling of day-to-day work carryover are also addressed.
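
    As a toy sketch of priority-constrained scheduling in the spirit described above (not OPSMODEL code; the operations, priorities, and resources are invented):

```python
# Toy sketch of priority-constrained scheduling: each operation has a
# priority (lower number = higher priority here) and a crew requirement;
# operations are greedily scheduled in priority order while resources
# last. Purely illustrative; not OPSMODEL's actual algorithm.
operations = [
    {"name": "eat",        "priority": 100, "crew": 2},
    {"name": "experiment", "priority": 300, "crew": 3},
    {"name": "exercise",   "priority": 500, "crew": 1},
]
crew_available = 4

scheduled = []
for op in sorted(operations, key=lambda o: o["priority"]):
    if op["crew"] <= crew_available:
        scheduled.append(op["name"])
        crew_available -= op["crew"]

print(scheduled)  # ['eat', 'exercise'] -- 'experiment' does not fit
```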

  4. The relationship between wave and geometrical optics models of coded aperture type x-ray phase contrast imaging systems

    PubMed Central

    Munro, Peter R.T.; Ignatyev, Konstantin; Speller, Robert D.; Olivo, Alessandro

    2013-01-01

    X-ray phase contrast imaging is a very promising technique which may lead to significant advancements in medical imaging. One of the impediments to the clinical implementation of the technique is the general requirement to have an x-ray source of high coherence. The radiation physics group at UCL is currently developing an x-ray phase contrast imaging technique which works with laboratory x-ray sources. Validation of the system requires extensive modelling of relatively large samples of tissue. To aid this, we have undertaken a study of when geometrical optics may be employed to model the system in order to avoid the need to perform a computationally expensive wave optics calculation. In this paper, we derive the relationship between the geometrical and wave optics model for our system imaging an infinite cylinder. From this model we are able to draw conclusions regarding the general applicability of the geometrical optics approximation. PMID:20389424

  5. The relationship between wave and geometrical optics models of coded aperture type x-ray phase contrast imaging systems.

    PubMed

    Munro, Peter R T; Ignatyev, Konstantin; Speller, Robert D; Olivo, Alessandro

    2010-03-01

    X-ray phase contrast imaging is a very promising technique which may lead to significant advancements in medical imaging. One of the impediments to the clinical implementation of the technique is the general requirement to have an x-ray source of high coherence. The radiation physics group at UCL is currently developing an x-ray phase contrast imaging technique which works with laboratory x-ray sources. Validation of the system requires extensive modelling of relatively large samples of tissue. To aid this, we have undertaken a study of when geometrical optics may be employed to model the system in order to avoid the need to perform a computationally expensive wave optics calculation. In this paper, we derive the relationship between the geometrical and wave optics model for our system imaging an infinite cylinder. From this model we are able to draw conclusions regarding the general applicability of the geometrical optics approximation.

  6. Iterative approach as alternative to S-matrix in modal methods

    NASA Astrophysics Data System (ADS)

    Semenikhin, Igor; Zanuccoli, Mauro

    2014-12-01

    The continuously increasing complexity of opto-electronic devices and rising demands on simulation accuracy lead to the need to solve very large systems of linear equations, making iterative methods promising and attractive from the computational point of view with respect to direct methods. In particular, an iterative approach can potentially reduce the computational time required to solve Maxwell's equations by eigenmode expansion algorithms. Regardless of the particular eigenmode-finding method used, the expansion coefficients are as a rule computed by the scattering matrix (S-matrix) approach or similar techniques requiring on the order of M^3 operations. In this work we consider alternatives to the S-matrix technique which are based on pure iterative or mixed direct-iterative approaches. The possibility of diminishing the impact of the M^3-order calculations on overall time, and in some cases even of reducing the number of arithmetic operations to M^2 by applying iterative techniques, is discussed. Numerical results are presented to illustrate the validity and potential of the proposed approaches.
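
    The tradeoff can be illustrated generically: a dense direct solve costs on the order of M^3 operations, while each iteration of a Krylov method needs only one matrix-vector product, on the order of M^2 operations. The sketch below uses a generic well-conditioned stand-in system, not an eigenmode-expansion code:

```python
# Generic illustration of the M^3 (direct) vs M^2-per-iteration
# (iterative) tradeoff discussed above, on a stand-in dense system.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

M = 500
rng = np.random.default_rng(1)
A = np.eye(M) + 0.01 * rng.standard_normal((M, M))   # well-conditioned
b = rng.standard_normal(M)

x_direct = np.linalg.solve(A, b)                     # ~M^3 work

op = LinearOperator((M, M), matvec=lambda v: A @ v)  # ~M^2 per matvec
x_iter, info = gmres(op, b)
assert info == 0                                     # converged
print(np.max(np.abs(x_direct - x_iter)))             # small difference
```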

  7. Reduced-order modeling for hyperthermia: an extended balanced-realization-based approach.

    PubMed

    Mattingly, M; Bailey, E A; Dutton, A W; Roemer, R B; Devasia, S

    1998-09-01

    Accurate thermal models are needed in hyperthermia cancer treatments for such tasks as actuator and sensor placement design, parameter estimation, and feedback temperature control. The complexity of the human body produces full-order models which are too large for effective execution of these tasks, making use of reduced-order models necessary. However, standard balanced-realization (SBR)-based model reduction techniques require a priori knowledge of the particular placement of actuators and sensors for model reduction. Since placement design is intractable (computationally) on the full-order models, SBR techniques must use ad hoc placements. To alleviate this problem, an extended balanced-realization (EBR)-based model-order reduction approach is presented. The new technique allows model order reduction to be performed over all possible placement designs and does not require ad hoc placement designs. It is shown that models obtained using the EBR method are more robust to intratreatment changes in the placement of the applied power field than those models obtained using the SBR method.
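
    For orientation, a minimal sketch of standard balanced-realization truncation, the baseline (SBR) technique the paper extends, applied to a stable LTI model (A, B, C); the random model below is a stand-in, not a thermal model of tissue:

```python
# Minimal standard balanced-realization (SBR) truncation sketch for a
# stable LTI system (A, B, C). This is the baseline technique the paper
# extends, not the authors' EBR method.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, k):
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability gramian
    R = cholesky(Wc, lower=True)                   # Wc = R R^T
    L = cholesky(Wo, lower=True)                   # Wo = L L^T
    U, s, Vt = svd(L.T @ R)                        # Hankel singular values s
    S_half = np.diag(s[:k] ** -0.5)
    T = R @ Vt[:k].T @ S_half                      # reduction basis
    Tinv = S_half @ U[:, :k].T @ L.T
    return Tinv @ A @ T, Tinv @ B, C @ T, s

# Example: reduce a random stable 10-state model to 4 states.
rng = np.random.default_rng(2)
A = rng.standard_normal((10, 10)) - 6 * np.eye(10)   # shifted to be stable
B = rng.standard_normal((10, 2))
C = rng.standard_normal((3, 10))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, k=4)
print(hsv)   # decaying Hankel singular values justify truncation
```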

  8. Maximizing Macromolecule Crystal Size for Neutron Diffraction Experiments

    NASA Technical Reports Server (NTRS)

    Judge, R. A.; Kephart, R.; Leardi, R.; Myles, D. A.; Snell, E. H.; vanderWoerd, M.; Curreri, Peter A. (Technical Monitor)

    2002-01-01

    A challenge in neutron diffraction experiments is growing large (greater than 1 cu mm) macromolecule crystals. In taking up this challenge we have used statistical experiment design techniques to quickly identify crystallization conditions under which the largest crystals grow. These techniques provide the maximum information for minimal experimental effort, allowing optimal screening of crystallization variables in a simple experimental matrix, using the minimum amount of sample. Analysis of the results quickly tells the investigator which conditions are the most important for the crystallization. These can then be used to maximize the crystallization results in terms of reducing crystal numbers and providing large crystals of suitable habit. We have used these techniques to grow large crystals of glucose isomerase. Glucose isomerase is an industrial enzyme used extensively in the food industry for the conversion of glucose to fructose. The aim of this study is the elucidation of the enzymatic mechanism at the molecular level. The accurate determination of hydrogen positions, which is critical for this, is a requirement that neutron diffraction is uniquely suited for. Preliminary neutron diffraction experiments with these crystals conducted at the Institut Laue-Langevin (Grenoble, France) reveal diffraction to beyond 2.5 angstrom. Macromolecular crystal growth is a process involving many parameters, and statistical experimental design is naturally suited to this field. These techniques are sample independent and provide an experimental strategy to maximize crystal volume and habit for neutron diffraction studies.
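
    The abstract does not list the screening design used; as a hypothetical illustration, a two-level full-factorial screen over a few crystallization factors can be generated in a few lines (the factor names and levels below are invented):

```python
# Sketch of a two-level full-factorial screening design of the kind used
# in statistical experiment planning. Factor names and levels are
# illustrative, not the crystallization conditions from the study.
from itertools import product

factors = {
    "precipitant_pct": (5.0, 15.0),
    "pH":              (6.5, 8.0),
    "temperature_C":   (4.0, 20.0),
}

runs = [dict(zip(factors, levels))
        for levels in product(*factors.values())]

for i, run in enumerate(runs, 1):   # 2^3 = 8 crystallization trials
    print(i, run)
```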

  9. Recovery of Multilayer-Coated Zerodur and ULE Optics for Extreme-Ultraviolet Lithography by Recoating, Reactive-Ion Etching, and Wet-Chemical Processes.

    PubMed

    Mirkarimi, P B; Baker, S L; Montcalm, C; Folta, J A

    2001-01-01

    Extreme-ultraviolet lithography requires expensive multilayer-coated Zerodur or ULE optics with extremely tight figure and finish specifications. It is therefore desirable to develop methods to recover these optics if they are coated with nonoptimum multilayer films or in the event that the coating deteriorates over time owing to long-term exposure to radiation, corrosion, or surface contamination. We evaluate recoating, reactive-ion etching, and wet-chemical techniques for the recovery of Mo/Si and Mo/Be multilayer films upon Zerodur and ULE test optics. The recoating technique was successfully employed in the recovery of Mo/Si-coated optics but has the drawback of limited applicability. A chlorine-based reactive-ion etch process was successfully used to recover Mo/Si-coated optics, and a particularly large process window was observed when ULE optics were employed; this is advantageous for large, curved optics. Dilute HCl wet-chemical techniques were developed and successfully demonstrated for the recovery of Mo/Be-coated optics, as well as for Mo/Si-coated optics when Mo/Be release layers were employed; however, there are questions about the extendability of the HCl process to large optics and multiple coat-and-strip cycles. The technique of using carbon barrier layers to protect the optic during removal of Mo/Si in HF:HNO3 also showed promise.

  10. Robust Requirements Tracing via Internet Search Technology: Improving an IV and V Technique. Phase 2

    NASA Technical Reports Server (NTRS)

    Hayes, Jane; Dekhtyar, Alex

    2004-01-01

    There are three major objectives to this phase of the work. (1) Improvement of Information Retrieval (IR) methods for Independent Verification and Validation (IV&V) requirements tracing. Information Retrieval methods are typically developed for very large document collections (on the order of millions to tens of millions of documents or more); therefore, most successfully used methods somewhat sacrifice precision and recall in order to achieve efficiency. At the same time, typical IR systems treat all user queries as independent of each other and assume that the relevance of documents to queries is subjective for each user. The IV&V requirements tracing problem has a much smaller data set to operate on, even for large software development projects; the set of queries is predetermined by the high-level specification document, and individual requirements considered as query input to IR methods are not necessarily independent of each other. Namely, knowledge about the links for one requirement may be helpful in determining the links of another requirement. Finally, while the final decision on the exact form of the traceability matrix still belongs to the IV&V analyst, his or her decisions are much less arbitrary than those of an Internet search engine user. All this suggests that the information available to us in the framework of the IV&V tracing problem can be successfully leveraged to enhance standard IR techniques, which in turn leads to increased recall and precision. We developed several new methods during Phase II. (2) An IV&V requirements tracing IR toolkit. Based on the methods developed in Phase I and their improvements developed in Phase II, we built a toolkit of IR methods for IV&V requirements tracing. The toolkit has been integrated, at the data level, with SAIC's SuperTracePlus (STP) tool. (3) Toolkit testing. We tested the methods included in the IV&V requirements tracing IR toolkit on a number of projects.
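
    A sketch of the generic IR core of requirements tracing, ranking candidate design elements against each requirement by TF-IDF cosine similarity; this illustrates the standard baseline only, not the toolkit's enhanced methods or its SuperTracePlus integration (all texts below are invented):

```python
# Baseline IR tracing sketch: rank design elements against each
# high-level requirement by TF-IDF cosine similarity. Illustrative only;
# not the toolkit's enhanced methods.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = [
    "The system shall log all telemetry frames to nonvolatile storage.",
    "The system shall reject commands failing checksum verification.",
]
design_elements = [
    "TelemetryLogger writes each received frame to flash memory.",
    "CommandValidator computes and checks the CRC of every command.",
    "DisplayManager refreshes the operator console at 1 Hz.",
]

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(requirements + design_elements)
sims = cosine_similarity(tfidf[:len(requirements)], tfidf[len(requirements):])

for req, row in zip(requirements, sims):
    best = sorted(zip(row, design_elements), reverse=True)[0]
    print(req, "->", best[1])
```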

  11. An analytical method to simulate the H I 21-cm visibility signal for intensity mapping experiments

    NASA Astrophysics Data System (ADS)

    Sarkar, Anjan Kumar; Bharadwaj, Somnath; Marthi, Visweshwar Ram

    2018-01-01

    Simulations play a vital role in testing and validating H I 21-cm power spectrum estimation techniques. Conventional methods use techniques like N-body simulations to simulate the sky signal, which is then passed through a model of the instrument. This makes it necessary to simulate the H I distribution in a large cosmological volume and to incorporate both the light-cone effect and the telescope's chromatic response. The computational requirements may be particularly large if one wishes to simulate many realizations of the signal. In this paper, we present an analytical method to simulate the H I visibility signal, which is particularly efficient when a large number of realizations of the signal is desired. Our method is based on theoretical predictions of the visibility correlation which incorporate both the light-cone effect and the telescope's chromatic response. We demonstrate this method by applying it to simulate the H I visibility signal for the upcoming Ooty Wide Field Array Phase I.
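
    The generic idea behind such fast simulation can be sketched simply: once a model covariance of the visibilities is in hand, each realization is a single Cholesky draw. The covariance below is an arbitrary positive-definite stand-in, not the paper's H I visibility correlation:

```python
# Sketch of covariance-based fast realizations: factor the model
# covariance once, then each realization is one matrix-vector product.
# S is an arbitrary SPD stand-in, not the H I visibility correlation.
import numpy as np

rng = np.random.default_rng(3)
n = 64                                   # number of visibility samples
A = rng.standard_normal((n, n))
S = A @ A.T + n * np.eye(n)              # stand-in covariance (SPD)

L = np.linalg.cholesky(S)                # one-time factorization

def realization():
    """Draw one zero-mean realization with covariance S."""
    return L @ rng.standard_normal(n)

draws = np.stack([realization() for _ in range(10000)])
rel_dev = np.linalg.norm(np.cov(draws.T) - S) / np.linalg.norm(S)
print(rel_dev)                           # sample covariance ~ S
```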

  12. Methods, caveats and the future of large-scale microelectrode recordings in the non-human primate

    PubMed Central

    Dotson, Nicholas M.; Goodell, Baldwin; Salazar, Rodrigo F.; Hoffman, Steven J.; Gray, Charles M.

    2015-01-01

    Cognitive processes play out on massive brain-wide networks, which produce widely distributed patterns of activity. Capturing these activity patterns requires tools that are able to simultaneously measure activity from many distributed sites with high spatiotemporal resolution. Unfortunately, current techniques with adequate coverage do not provide the requisite spatiotemporal resolution. Large-scale microelectrode recording devices, with dozens to hundreds of microelectrodes capable of simultaneously recording from nearly as many cortical and subcortical areas, provide a potential way to minimize these tradeoffs. However, placing hundreds of microelectrodes into a behaving animal is a highly risky and technically challenging endeavor that has only been pursued by a few groups. Recording activity from multiple electrodes simultaneously also introduces several statistical and conceptual dilemmas, such as the multiple comparisons problem and the uncontrolled stimulus response problem. In this perspective article, we discuss some of the techniques that we, and others, have developed for collecting and analyzing large-scale data sets, and address the future of this emerging field. PMID:26578906

  13. Interactive Visualization of Large-Scale Hydrological Data using Emerging Technologies in Web Systems and Parallel Programming

    NASA Astrophysics Data System (ADS)

    Demir, I.; Krajewski, W. F.

    2013-12-01

    As geoscientists are confronted with increasingly massive datasets from environmental observations to simulations, one of the biggest challenges is having the right tools to gain scientific insight from the data and communicate the understanding to stakeholders. Recent developments in web technologies make it easy to manage, visualize, and share large data sets with the general public. Novel visualization techniques and dynamic user interfaces allow users to interact with data and to modify parameters to create custom views of the data, in order to gain insight from simulations and environmental observations. This requires developing new data models and intelligent knowledge discovery techniques to explore and extract information from complex computational simulations or large data repositories. Scientific visualization will be an increasingly important component in building comprehensive environmental information platforms. This presentation provides an overview of the trends and challenges in the field of scientific visualization, and demonstrates information visualization and communication tools developed in light of these challenges.

  14. The effect of technology advancements on the comparative advantages of electric versus chemical propulsion for a large cargo orbit transfer vehicle

    NASA Technical Reports Server (NTRS)

    Rehder, J. J.; Wurster, K. E.

    1978-01-01

    Techniques for sizing electrically or chemically propelled orbit transfer vehicles and analyzing fleet requirements are used in a comparative analysis of the two concepts for various levels of traffic to geosynchronous orbit. The vehicle masses, fuel requirements, and fleet sizes are determined and translated into launch vehicle payload requirements. Technology projections beyond normal growth are made, and their effect on the comparative advantages of the concepts is determined. A preliminary cost analysis indicates that although electric propulsion greatly reduces launch vehicle requirements, substantial improvements in the cost and reusability of power systems must occur to make an electrically propelled vehicle competitive.

  15. LDR cryogenics

    NASA Technical Reports Server (NTRS)

    Nast, T.

    1988-01-01

    A brief summary from the 1985 Large Deployable Reflector (LDR) Asilomar 2 workshop of the requirements for LDR cryogenic cooling is presented. The heat rates are simply the sum of the individual heat rates from the instruments. Consideration of duty cycle will have a dramatic effect on cooling requirements. There are many possible combinations of cooling techniques for each of the three temperature zones. It is clear that much further system study is needed to determine what type of cooling system is required (He-2, hybrid, or mechanical) and what size and power are required. As the instruments, along with their duty cycles and heat rates, become better defined, it will be possible to better determine the optimum cooling systems.

  16. Physically-enhanced data visualisation: towards real time solution of Partial Differential Equations in 3D domains

    NASA Astrophysics Data System (ADS)

    Zlotnik, Sergio

    2017-04-01

    The information provided by visualisation environments can be greatly increased if the data shown are combined with some relevant physical processes and the user is allowed to interact with those processes. This is particularly interesting in VR environments, where the user has a deep interplay with the data. For example, a geological seismic line in a 3D "cave" shows information on the geological structure of the subsoil. The available information could be enhanced with the thermal state of the region under study, with water-flow patterns in porous rocks, or with rock displacements under some stress conditions. The information added by the physical processes is usually the output of some numerical technique applied to solve a Partial Differential Equation (PDE) that describes the underlying physics. Many techniques are available to obtain numerical solutions of PDEs (e.g., Finite Elements, Finite Volumes, Finite Differences, etc.). However, all these traditional techniques require very large computational resources (particularly in 3D), making them useless in a real-time visualization environment such as VR, because the time required to compute a solution is measured in minutes or even hours. We present here a novel alternative for the resolution of PDE-based problems that is able to provide 3D solutions for a very large family of problems in real time. That is, the solution is evaluated in a thousandth of a second, making the solver ideal for embedding into VR environments. Based on Model Order Reduction ideas, the proposed technique divides the computational work into a computationally intensive "offline" phase, which is run only once, and an "online" phase that allows the real-time evaluation of any solution within a family of problems. Preliminary examples of real-time solutions of complex PDE-based problems will be presented, including thermal problems, flow problems, wave problems, and some simple coupled problems.
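
    A minimal sketch of the offline/online split behind model order reduction, assuming a simple parameterized linear problem; this is a generic reduced-basis illustration, not the author's solver:

```python
# Offline/online model-order-reduction sketch: an expensive offline phase
# builds a POD basis from snapshot solutions; the online phase solves only
# a tiny projected system per query. Generic illustration only.
import numpy as np

rng = np.random.default_rng(4)
n = 2000                                          # full-order dimension
A0 = np.eye(n) * 4.0
A1 = np.diag(np.full(n - 1, -1.0), 1) + np.diag(np.full(n - 1, -1.0), -1)
f = np.ones(n)

def solve_full(mu):
    return np.linalg.solve(A0 + mu * A1, f)       # expensive n x n solve

# --- offline (run once): snapshots at training parameters, then SVD ---
snapshots = np.column_stack([solve_full(mu) for mu in np.linspace(0.1, 1.9, 10)])
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :5]                                      # POD basis, k = 5

# --- online (real time): only a k x k solve for any new parameter ---
A0r, A1r, fr = V.T @ A0 @ V, V.T @ A1 @ V, V.T @ f

def solve_reduced(mu):
    return V @ np.linalg.solve(A0r + mu * A1r, fr)

mu = 1.234
err = (np.linalg.norm(solve_reduced(mu) - solve_full(mu))
       / np.linalg.norm(solve_full(mu)))
print(f"relative error: {err:.2e}")
```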

  17. A Single-Molecule Barcoding System using Nanoslits for DNA Analysis

    NASA Astrophysics Data System (ADS)

    Jo, Kyubong; Schramm, Timothy M.; Schwartz, David C.

    Single DNA molecule approaches are playing an increasingly central role in the analytical genomic sciences because single molecule techniques intrinsically provide individualized measurements of selected molecules, free from the constraints of bulk techniques, which blindly average noise and mask the presence of minor analyte components. Accordingly, a principal challenge that must be addressed by all single molecule approaches aimed at genome analysis is how to immobilize and manipulate DNA molecules for measurements that foster construction of large, biologically relevant data sets. To meet this challenge, this chapter discusses an integrated approach using microfabricated and nanofabricated devices for the manipulation of elongated DNA molecules within nanoscale geometries. Ideally, large DNA coils stretch via nanoconfinement when channel dimensions are within tens of nanometers. Importantly, stretched, often immobilized, DNA molecules spanning hundreds of kilobase pairs are required by all analytical platforms working with large genomic substrates, because imaging techniques acquire sequence information from molecules that normally exist in free solution as unrevealing random coils resembling floppy balls of yarn. However, devices fabricated with dimensions small enough to foster molecular stretching are impractical because they require exotic fabrication technologies and costly materials and suffer poor operational efficiencies. In this chapter, such problems are addressed by discussion of a new approach to DNA presentation and analysis that establishes scalable nanoconfinement conditions through reduction of ionic strength; stiffening the DNA molecules enables their arraying for analysis using easily fabricated devices that can also be mass-produced. This new approach to DNA nanoconfinement is complemented by a novel labeling scheme for reliable marking of individual molecules with fluorochrome labels, creating molecular barcodes, which are efficiently read using fluorescence resonance energy transfer techniques that minimize noise from unincorporated labels. As such, our integrative approach to genomic analysis through nanoconfinement, named nanocoding, was demonstrated through the barcoding and mapping of bacterial artificial chromosome molecules, thereby providing the basis for a high-throughput platform competent for whole genome investigations.

  18. How big is too big or how many partners are needed to build a large project which still can be managed successfully?

    NASA Astrophysics Data System (ADS)

    Henkel, Daniela; Eisenhauer, Anton

    2017-04-01

    During the last decades, the number of large research projects has increased, and with it the requirement for multidisciplinary, multisectoral collaboration. Such complex and large-scale projects demand new competencies to form, manage, and use large, diverse teams as a competitive advantage. For complex projects the effort is magnified, because multiple large international research consortia involving academic and non-academic partners, including big industries, NGOs, and private and public bodies, all with cultural differences, individually discrepant expectations of teamwork, and differences in the collaboration between national and multi-national administrations and research organisations, challenge the organisation and management of such multi-partner research consortia. How many partners are needed to establish and conduct collaboration with a multidisciplinary and multisectoral approach? How much personnel effort and what kinds of management techniques are required for such projects? This presentation identifies advantages and challenges of large research projects based on the experience gained in the context of an Innovative Training Network (ITN) project within the Marie Skłodowska-Curie Actions of the European HORIZON 2020 programme. Possible strategies are discussed to circumvent and avoid conflicts from the very beginning of the project.

  19. Flat-plate solar array project. Volume 6: Engineering sciences and reliability

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.; Smokler, M. I.

    1986-01-01

    The Flat-Plate Solar Array (FSA) Project activities directed at developing the engineering technology base required to achieve modules that meet the functional, safety, and reliability requirements of large-scale terrestrial photovoltaic system applications are reported. These activities included: (1) development of functional, safety, and reliability requirements for such applications; (2) development of the engineering analytical approaches, test techniques, and design solutions required to meet the requirements; (3) synthesis and procurement of candidate designs for test and evaluation; and (4) performance of extensive testing, evaluation, and failure analysis to define design shortfalls and, thus, areas requiring additional research and development. A summary of the approach and technical outcome of these activities is provided, along with a complete bibliography of the published documentation covering the detailed accomplishments and technologies developed.

  20. Application of augmented-Lagrangian methods in meteorology: Comparison of different conjugate-gradient codes for large-scale minimization

    NASA Technical Reports Server (NTRS)

    Navon, I. M.

    1984-01-01

    A Lagrange multiplier method using techniques developed by Bertsekas (1982) was applied to solving the problem of enforcing simultaneous conservation of the nonlinear integral invariants of the shallow water equations on a limited area domain. This application of nonlinear constrained optimization is of the large dimensional type and the conjugate gradient method was found to be the only computationally viable method for the unconstrained minimization. Several conjugate-gradient codes were tested and compared for increasing accuracy requirements. Robustness and computational efficiency were the principal criteria.

  1. Development of a large support surface for an air-bearing type zero-gravity simulator

    NASA Technical Reports Server (NTRS)

    Glover, K. E.

    1976-01-01

    The methods used in producing a large, flat surface to serve as the supporting surface for an air-bearing type zero-gravity simulator using low clearance, thrust-pad type air bearings are described. Major problems encountered in the use of self-leveled epoxy coatings in this surface are discussed and techniques are recommended which proved effective in overcoming these problems. Performance requirements of the zero-gravity simulator vehicle which were pertinent to the specification of the air-bearing support surface are also discussed.

  2. An Extension of Holographic Moiré to Micromechanics

    NASA Astrophysics Data System (ADS)

    Sciammarella, C. A.; Sciammarella, F. M.

    The electronic Holographic Moiré is an ideal tool for micromechanics studies. It does not require modification of the surface by the introduction of a reference grating. This is of particular advantage when dealing with materials such as solid propellant grains, whose chemical nature and surface finish make the application of a reference grating very difficult. Traditional electronic Holographic Moiré presents some difficult problems when large magnifications are needed and large rigid-body motion takes place. This paper presents developments that solve these problems and extend the application of the technique to micromechanics.

  3. Inventory and analysis of natural vegetation and related resources from space and high altitude photography

    NASA Technical Reports Server (NTRS)

    Poulton, C. E.

    1972-01-01

    A multiple sampling technique was developed whereby spacecraft photographs supported by aircraft photographs could be used to quantify plant communities. Large-scale (1:600 to 1:2,400) color infrared aerial photographs were required to identify shrub and herbaceous species. These photos were used to successfully estimate herbaceous standing-crop biomass. Microdensitometry was used to discriminate among specific plant communities and individual plant species. Large-scale infrared photography was also used to estimate mule deer deaths and the population density of northern pocket gophers.

  4. Optimizing tertiary storage organization and access for spatio-temporal datasets

    NASA Technical Reports Server (NTRS)

    Chen, Ling Tony; Rotem, Doron; Shoshani, Arie; Drach, Bob; Louis, Steve; Keating, Meridith

    1994-01-01

    We address in this paper data management techniques for efficiently retrieving requested subsets of large datasets stored on mass storage devices. This problem represents a major bottleneck that can negate the benefits of fast networks, because the time to access a subset from a large dataset stored on a mass storage system is much greater than the time to transmit that subset over a network. This paper focuses on very large spatial and temporal datasets generated by simulation programs in the area of climate modeling, but the techniques developed can be applied to other applications that deal with large multidimensional datasets. The main requirement we have addressed is efficient access to subsets of information contained within much larger datasets, for the purposes of analysis and interactive visualization. We have developed data partitioning techniques that partition datasets into 'clusters' based on analysis of data access patterns and storage device characteristics. The goal is to minimize the number of clusters read from mass storage systems when subsets are requested. We emphasize in this paper proposed enhancements to current storage server protocols to permit control over the physical placement of data on storage devices. We also discuss in some detail the interface between the application programs and the mass storage system, as well as a workbench to help scientists design the best reorganization of a dataset for anticipated access patterns.
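
    The core cost model can be sketched simply: if a dataset is stored in fixed-shape chunks ('clusters'), the cost of a subset request is roughly the number of chunks it touches, so chunk shapes aligned with common access patterns minimize reads. The shapes and sizes below are illustrative only:

```python
# Toy sketch of the clustering idea: data lives in fixed-shape chunks,
# and the cost of a hyper-rectangular subset request is the number of
# chunks it overlaps. Chunk shapes aligned with the dominant access
# pattern minimize reads. All shapes here are illustrative.
def chunks_touched(subset_start, subset_shape, chunk_shape):
    """Count chunks overlapped by a hyper-rectangular subset."""
    count = 1
    for s0, ext, c in zip(subset_start, subset_shape, chunk_shape):
        first = s0 // c
        last = (s0 + ext - 1) // c
        count *= last - first + 1
    return count

# A single time-slice request against a (time, lat, lon) dataset:
request = dict(subset_start=(500, 0, 0), subset_shape=(1, 180, 360))
print(chunks_touched(**request, chunk_shape=(100, 45, 90)))   # 16 chunks
print(chunks_touched(**request, chunk_shape=(1, 180, 360)))   # 1 chunk
```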

  5. Sample presentation, sources of error and future perspectives on the application of vibrational spectroscopy in the wine industry.

    PubMed

    Cozzolino, Daniel

    2015-03-30

    Vibrational spectroscopy encompasses a number of techniques and methods, including ultraviolet, visible, Fourier transform infrared or mid-infrared, near-infrared, and Raman spectroscopy. The use and application of spectroscopy generates spectra containing hundreds of variables (absorbance at each wavenumber or wavelength), resulting in the production of large data sets representing the chemical and biochemical wine fingerprint. Multivariate data analysis techniques are then required to handle the large amount of data generated and to interpret the spectra in a meaningful way in order to develop a specific application. This paper focuses on developments in sample presentation and the main sources of error when vibrational spectroscopy methods are applied in wine analysis. Recent and novel applications will be discussed as examples of these developments. © 2014 Society of Chemical Industry.

  6. An Improved Wake Vortex Tracking Algorithm for Multiple Aircraft

    NASA Technical Reports Server (NTRS)

    Switzer, George F.; Proctor, Fred H.; Ahmad, Nashat N.; LimonDuparcmeur, Fanny M.

    2010-01-01

    The accurate tracking of vortex evolution from Large Eddy Simulation (LES) data is a complex and computationally intensive problem. Vortex tracking requires the analysis of very large three-dimensional, time-varying datasets. The complexity of the problem is further compounded by the fact that these vortices are embedded in a background turbulence field and may interact with the ground surface. Another level of complication arises if vortices from multiple aircraft are simulated. This paper presents a new technique for post-processing LES data to obtain wake vortex tracks and wake intensities. The new approach isolates vortices by defining "regions of interest" (ROI) around each vortex and has the ability to identify vortex pairs from multiple aircraft. The paper describes the new methodology for tracking wake vortices and presents applications of the technique for single and multiple aircraft.
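
    A toy sketch of the region-of-interest idea: given a prior vortex position, only a small window of the field is searched for the new extremum. This is a generic illustration, not the paper's algorithm:

```python
# Toy ROI tracking sketch: search only a small window around the prior
# vortex center for the new peak-|vorticity| location. Generic
# illustration, not the paper's algorithm.
import numpy as np

def track_vortex(vorticity, prior_center, half_window=10):
    """Return the peak-|vorticity| location within an ROI around prior_center."""
    i0, j0 = prior_center
    i_lo = max(0, i0 - half_window)
    i_hi = min(vorticity.shape[0], i0 + half_window + 1)
    j_lo = max(0, j0 - half_window)
    j_hi = min(vorticity.shape[1], j0 + half_window + 1)
    roi = np.abs(vorticity[i_lo:i_hi, j_lo:j_hi])
    di, dj = np.unravel_index(np.argmax(roi), roi.shape)
    return (i_lo + di, j_lo + dj)

# Synthetic field: a Gaussian vortex near (52, 48) plus background noise.
rng = np.random.default_rng(5)
y, x = np.mgrid[0:100, 0:100]
field = (np.exp(-((x - 48) ** 2 + (y - 52) ** 2) / 20.0)
         + 0.05 * rng.standard_normal((100, 100)))
print(track_vortex(field, prior_center=(50, 50)))  # ~ (52, 48)
```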

  7. Satellite SAR interferometric techniques applied to emergency mapping

    NASA Astrophysics Data System (ADS)

    Stefanova Vassileva, Magdalena; Riccardi, Paolo; Lecci, Daniele; Giulio Tonolo, Fabio; Boccardo Boccardo, Piero; Chiesa, Giuliana; Angeluccetti, Irene

    2017-04-01

    This paper aims to investigate the capabilities of currently available SAR interferometric algorithms in the field of emergency mapping. Several tests have been performed exploiting Copernicus Sentinel-1 data using the COTS software ENVI/SARscape 5.3. Emergency mapping can be defined as the "creation of maps, geo-information products and spatial analyses dedicated to providing situational awareness for emergency management and immediate crisis information for response, by means of extraction of reference (pre-event) and crisis (post-event) geographic information/data from satellite or aerial imagery". The conventional differential SAR interferometric technique (DInSAR) and the two currently available multi-temporal SAR interferometric approaches, i.e. Permanent Scatterer Interferometry (PSI) and Small BAseline Subset (SBAS), have been applied to provide crisis information useful for emergency management activities. Depending on the emergency management phase considered, a distinction may be made between rapid mapping, i.e. fast provision of geospatial data regarding the affected area for immediate emergency response, and monitoring mapping, i.e. detection of phenomena for risk prevention and mitigation activities. In order to evaluate the potential and limitations of the aforementioned SAR interferometric approaches for rapid and monitoring mapping, the following main factors have been taken into account: the crisis information extracted, the input data required, the processing time, and the expected accuracy. The results highlight that DInSAR has the capacity to delineate areas affected by large and sudden deformations and fulfills most of the immediate-response requirements. The main limiting factor of interferometry is the availability of a suitable SAR acquisition immediately after the event (e.g., the Sentinel-1 mission's 6-day revisit time may not always satisfy immediate emergency requests). The PSI and SBAS techniques are suitable for producing monitoring maps for risk prevention and mitigation purposes. Nevertheless, multi-temporal techniques require large SAR temporal datasets, i.e. 20 or more images. The Sentinel-1 missions having been operational only since April 2014, multi-mission SAR datasets should therefore be exploited to carry out historical analyses.

  8. Can global navigation satellite system signals reveal the ecological attributes of forests?

    NASA Astrophysics Data System (ADS)

    Liu, Jingbin; Hyyppä, Juha; Yu, Xiaowei; Jaakkola, Anttoni; Liang, Xinlian; Kaartinen, Harri; Kukko, Antero; Zhu, Lingli; Wang, Yunsheng; Hyyppä, Hannu

    2016-08-01

    Forests have important impacts on the global carbon cycle and climate, and they are also related to a wide range of industrial sectors. Currently, one of the biggest challenges in forestry research is effectively and accurately measuring and monitoring forest variables, as the exploitation potential of forest inventory products largely depends on the accuracy of estimates and on the cost of data collection. A low-cost crowdsourcing solution is needed for forest inventory to collect forest variables. Here, we propose global navigation satellite system (GNSS) signals as a novel type of observable for predicting forest attributes, and we show the feasibility of utilizing GNSS signals to estimate important attributes of forest plots, including mean tree height, mean diameter at breast height, basal area, stem volume and tree biomass. The prediction accuracies of the proposed technique in boreal forest conditions were better than those of conventional 2D remote sensing techniques. More importantly, this technique provides a novel, cost-effective way of collecting large-scale forest measurements in a crowdsourcing context. The technique can be applied by, for example, harvesters or people hiking or working in forests, because GNSS devices are widely used and the field operation of the technique is simple and does not require professional forestry skills.

  9. Soil organic matter composition from correlated thermal analysis and nuclear magnetic resonance data in Australian national inventory of agricultural soils

    NASA Astrophysics Data System (ADS)

    Moore, T. S.; Sanderman, J.; Baldock, J.; Plante, A. F.

    2016-12-01

    National-scale inventories typically include soil organic carbon (SOC) content, but not chemical composition or biogeochemical stability. Australia's Soil Carbon Research Programme (SCaRP) represents a national inventory of SOC content and composition in agricultural systems. The program used physical fractionation followed by 13C nuclear magnetic resonance (NMR) spectroscopy. While these techniques are highly effective, they are typically too expensive and time-consuming for use in large-scale SOC monitoring. We seek to understand whether thermal analysis is a viable alternative. Coupled differential scanning calorimetry (DSC) and evolved gas analysis (CO2- and H2O-EGA) yield valuable data on SOC composition and stability via ramped combustion. The technique requires little training to use and does not require fractionation or other sample pre-treatment. We analyzed 300 agricultural samples collected by SCaRP, divided into four fractions: whole soil, coarse particulates (POM), untreated mineral-associated (HUM), and hydrofluoric acid (HF)-treated HUM. All samples were analyzed by DSC-EGA, but only the POM and HF-HUM fractions were analyzed by NMR. Multivariate statistical analyses were used to explore natural clustering in SOC composition and stability based on the DSC-EGA data. A partial least-squares regression (PLSR) model was used to explore correlations among the NMR and DSC-EGA data. The correlations revealed regions of combustion attributable to specific functional groups, which may relate to SOC stability. We are increasingly challenged to develop an efficient technique for assessing SOC composition and stability at large spatial and temporal scales. Correlations between NMR and DSC-EGA may demonstrate the viability of using thermal analysis in lieu of more demanding methods in future large-scale surveys, and may provide data that go beyond chemical composition to better approach quantification of biogeochemical stability.
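
    A minimal sketch of the PLSR step, predicting NMR-derived composition variables from thermogram features; all data below are synthetic placeholders, not SCaRP measurements:

```python
# Minimal PLSR sketch: regress NMR-like composition targets on
# DSC-EGA-like thermogram features. Synthetic placeholder data only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_samples, n_thermo, n_nmr = 300, 120, 4
X = rng.standard_normal((n_samples, n_thermo))             # thermal profiles
W = rng.standard_normal((n_thermo, 3)) @ rng.standard_normal((3, n_nmr))
Y = X @ W + 0.1 * rng.standard_normal((n_samples, n_nmr))  # NMR-like targets

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
pls = PLSRegression(n_components=10).fit(X_tr, Y_tr)
print(f"held-out R^2: {pls.score(X_te, Y_te):.3f}")
```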

  10. A Fast Projection-Based Algorithm for Clustering Big Data.

    PubMed

    Wu, Yun; He, Zhiquan; Lin, Hao; Zheng, Yufei; Zhang, Jingfen; Xu, Dong

    2018-06-07

    With the fast development of various techniques, more and more data have been accumulated, with the unique properties of large size (tall) and high dimension (wide). The era of big data is coming. How to understand and discover new knowledge from these data has attracted more and more scholars' attention and has become the most important task in data mining. As one of the most important techniques in data mining, clustering analysis, a kind of unsupervised learning, can group a data set into clusters that are meaningful, useful, or both. Thus, the technique has played a very important role in knowledge discovery from big data. However, when facing large-sized and high-dimensional data, most current clustering methods exhibit poor computational efficiency and a high requirement for computational resources, which prevents us from clarifying the intrinsic properties of, and discovering the new knowledge behind, the data. Based on this consideration, we developed a powerful clustering method, called MUFOLD-CL. The principle of the method is to project the data points onto the centroid, and then to measure the similarity between any two points by calculating their projections on the centroid. The proposed method achieves linear time complexity with respect to the sample size. Comparison with the K-Means method on very large data showed that our method produces better accuracy and requires less computational time, demonstrating that MUFOLD-CL can serve as a valuable tool, or at least play a complementary role to other existing methods, for big data clustering. Further comparisons with state-of-the-art clustering methods on smaller datasets showed that our method was the fastest and achieved comparable accuracy. For the convenience of most scholars, a free software package was constructed.
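
    A toy sketch of the stated principle, projecting each point onto the centroid direction and grouping points by their one-dimensional projections in a single linear pass (MUFOLD-CL itself is more elaborate):

```python
# Toy sketch of projection-based clustering: project each point onto the
# centroid direction, then group points by binning the 1-D projections.
# One O(n) pass; illustrates the stated principle only.
import numpy as np

def projection_clusters(X, n_bins=3):
    c = X.mean(axis=0)                       # centroid of the data
    u = c / np.linalg.norm(c)                # unit vector toward centroid
    proj = X @ u                             # scalar projection per point
    edges = np.linspace(proj.min(), proj.max(), n_bins + 1)
    return np.digitize(proj, edges[1:-1])    # bin index = cluster label

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(loc, 0.3, size=(100, 5))
               for loc in (1.0, 3.0, 5.0)])  # three groups along a line
labels = projection_clusters(X)
print([np.bincount(labels[i*100:(i+1)*100]).argmax() for i in range(3)])
```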

  11. A versatile technique for capturing urban gulls during winter

    USGS Publications Warehouse

    Clark, Daniel E.; Koenen, Kiana K. G.; MacKenzie, Kenneth G.; Pereira, Jillian W.; DeStefano, Stephen

    2014-01-01

    The capture of birds is a common part of many avian studies but often requires large investments of time and resources. We developed a novel technique for capturing gulls during the non-breeding season using a net launcher that was effective and efficient. The technique can be used in a variety of habitats and situations, including urban areas. Using this technique, we captured 1,326 gulls in 125 capture events from 2008 to 2012 in Massachusetts, USA. On average, 10 ring-billed gulls (Larus delawarensis; range = 1–37) were captured per trapping event. Capture rate (the number of birds captured per trapping event) was influenced by the type of bait used and also the time of the year (greatest in autumn, lowest in winter). Our capture technique could be adapted to catch a variety of urban or suburban birds and mammals that are attracted to bait.

  12. Deuterium dilution technique for body composition assessment: resolving methodological issues in children with moderate acute malnutrition.

    PubMed

    Fabiansen, Christian; Yaméogo, Charles W; Devi, Sarita; Friis, Henrik; Kurpad, Anura; Wells, Jonathan C

    2017-08-01

    Childhood malnutrition is highly prevalent and associated with high mortality risk. In observational and interventional studies among malnourished children, body composition is increasingly recognised as a key outcome. The deuterium dilution technique has generated high-quality data on body composition in studies of infants and young children in several settings, but its feasibility and accuracy in children suffering from moderate acute malnutrition requires further study. Prior to a large nutritional intervention trial among children with moderate acute malnutrition, we conducted pilot work to develop and adapt the deuterium dilution technique. We refined procedures for administration of isotope doses and collection of saliva. Furthermore, we established that equilibration time in local context is 3 h. These findings and the resulting standard operating procedures are important to improve data quality when using the deuterium dilution technique in malnutrition studies in field conditions, and may encourage a wider use of isotope techniques.

  13. Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.

    1997-01-01

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration to facilitate concurrent system and subsystem design exploration, for the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques, including intermediate responses, linking variables, and compatibility constraints, are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigation and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for constructing partitioned response surfaces is developed to reduce the computational expense of experimentation for fitting models in a large number of factors. Noise modeling techniques are compared, and recommendations are offered for the implementation of robust design when approximate models are sought. These techniques, approaches, and recommendations are incorporated within the method developed for hierarchical robust preliminary design exploration. This method, as well as the associated approaches, is illustrated through application to the preliminary design of a commercial turbofan turbine propulsion system. The case study was developed in collaboration with Allison Engine Company, Rolls-Royce Aerospace, and is based on the existing Allison AE3007 engine designed for midsize commercial and regional business jets. For this case study, the turbofan system-level problem is partitioned into engine cycle design and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation. The fan and low-pressure turbine subsystems are also modeled, but in less detail. Given the defined partitioning, these subproblems are investigated independently and concurrently, and response surface models are constructed to approximate the responses of each. These response models are then incorporated within a commercial turbofan hierarchical compromise decision support problem formulation. Five design scenarios are investigated, and robust solutions are identified.
The method and solutions identified are verified by comparison with the AE3007 engine. The solutions obtained are similar to the AE3007 cycle and configuration, but are better with respect to many of the requirements.
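
    A minimal sketch of the response-surface step: fit a quadratic surrogate to a handful of expensive subsystem analyses, then evaluate the cheap surrogate in system-level trades (the factors and response below are synthetic placeholders, not AE3007 data):

```python
# Response-surface sketch: fit a full quadratic polynomial model to a few
# "expensive" analysis runs, then use the cheap surrogate for trades.
# Factors and response are synthetic placeholders.
import numpy as np

def quad_features(x1, x2):
    """Full quadratic basis in two factors."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

rng = np.random.default_rng(8)
x1 = rng.uniform(-1, 1, 30)                 # e.g., normalized pressure ratio
x2 = rng.uniform(-1, 1, 30)                 # e.g., normalized blade count
y = 5 + 2*x1 - x2 + 0.5*x1*x2 + 0.1 * rng.standard_normal(30)  # analysis output

coef, *_ = np.linalg.lstsq(quad_features(x1, x2), y, rcond=None)

def predict(a, b):
    return quad_features(np.atleast_1d(a), np.atleast_1d(b)) @ coef

print(predict(0.2, -0.4))                   # cheap surrogate evaluation
```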

  14. Large numbers hypothesis. II - Electromagnetic radiation

    NASA Technical Reports Server (NTRS)

    Adams, P. J.

    1983-01-01

    This paper develops the theory of electromagnetic radiation in the units covariant formalism incorporating Dirac's large numbers hypothesis (LNH). A direct field-to-particle technique is used to obtain the photon propagation equation, which explicitly involves the photon replication rate. This replication rate is fixed uniquely by requiring that the form of a free-photon distribution function be preserved, as required by the 2.7 K cosmic radiation. One finds that with this particular photon replication rate the units covariant formalism developed in Paper I actually predicts that the ratio of photon number to proton number in the universe varies as t^(1/4), precisely in accord with LNH. The cosmological redshift law is also derived, and it is shown to differ considerably from the standard form νR = const.

  15. Fluorescence Fluctuation Approaches to the Study of Adhesion and Signaling

    PubMed Central

    Bachir, Alexia I.; Kubow, Kristopher E.; Horwitz, Alan R.

    2013-01-01

    Cell–matrix adhesions are large, multimolecular complexes through which cells sense and respond to their environment. They also mediate migration by serving as traction points and signaling centers, and they allow the cell to modify the surrounding tissue. Due to their fundamental role in cell behavior, adhesions are germane to nearly all major human health pathologies. However, adhesions are extremely complex and dynamic structures that include over 100 known interacting proteins and operate over multiple space (nm–µm) and time (ms–min) regimes. Fluorescence fluctuation techniques are well suited for studying adhesions. These methods are sensitive over a large spatiotemporal range and provide a wealth of information including molecular transport dynamics, interactions, and stoichiometry from a single time series. Earlier chapters in this volume have provided the theoretical background, instrumentation, and analysis algorithms for these techniques. In this chapter, we discuss their implementation in living cells to study adhesions in migrating cells. Although each technique and application has its own unique instrumentation and analysis requirements, we provide general guidelines for sample preparation, selection of imaging instrumentation, and optimization of data acquisition and analysis parameters. Finally, we review several recent studies that implement these techniques in the study of adhesions. PMID:23280111

  16. Field test to intercompare carbon monoxide, nitric oxide and hydroxyl instrumentation at Wallops Island, Virginia

    NASA Technical Reports Server (NTRS)

    Gregory, Gerald L.; Beck, Sherwin M.; Bendura, Richard J.

    1987-01-01

    Documentation of the first of three instrument intercomparisons conducted as part of NASA Global Tropospheric Experiment/Chemical Instrumentation Test and Evaluation (GTE/CITE-1) is given. This ground-based intercomparison was conducted during July 1983 at NASA Wallops Flight Facility. Instruments intercompared included one laser system and three grab-sample approaches for CO; two chemiluminescent systems and one laser-induced fluorescent (LIF) technique for NO; and two different LIF systems and a radiochemical tracer technique for OH. The major objectives of this intercomparison were to intercompare ambient measurements of CO, NO, and OH at a common site by using techniques of fundamentally different detection principles and to identify any major biases among the techniques prior to intercomparison on an aircraft platform. Included in the report are comprehensive discussions of workshop requirements, philosophies, and operations as well as intercomparison analyses and results. In addition, the large body of nonintercomparison data incorporated into the workshop measurements is summarized. The report is an important source document for those interested in conducting similar large and complex intercomparison tests as well as those interested in using the data base for purposes other than instrument intercomparison.

  17. Hydrologic considerations for estimation of storage-capacity requirements of impounding and side-channel reservoirs for water supply in Ohio

    USGS Publications Warehouse

    Koltun, G.F.

    2001-01-01

    This report provides data and methods to aid in the hydrologic design or evaluation of impounding reservoirs and side-channel reservoirs used for water supply in Ohio. Data from 117 streamflow-gaging stations throughout Ohio were analyzed by means of nonsequential-mass-curve-analysis techniques to develop relations between storage requirements, water demand, duration, and frequency. Information also is provided on minimum runoff for selected durations and frequencies. Systematic record lengths for the streamflow-gaging stations ranged from about 10 to 75 years; however, in many cases, additional streamflow record was synthesized. For impounding reservoirs, families of curves are provided to facilitate the estimation of storage requirements as a function of demand and the ratio of the 7-day, 2-year low flow to the mean annual flow. Information is provided with which to evaluate separately the effects of evaporation on storage requirements. Comparisons of storage requirements for impounding reservoirs determined by nonsequential-mass-curve-analysis techniques with storage requirements determined by annual-mass-curve techniques that employ probability routing to account for carryover-storage requirements indicate that large differences in computed required storages can result from the two methods, particularly for conditions where demand cannot be met from within-year storage. For side-channel reservoirs, tables of demand-storage-frequency information are provided for a primary pump relation consisting of one variable-speed pump with a pumping capacity that ranges from 0.1 to 20 times demand. Tables of adjustment ratios are provided to facilitate determination of storage requirements for 19 other pump sets consisting of assorted combinations of fixed-speed pumps or variable-speed pumps with aggregate pumping capacities smaller than or equal to the primary pump relation. The effects of evaporation on side-channel reservoir storage requirements are incorporated into the storage-requirement estimates. The effects of an instream-flow requirement equal to the 80-percent-duration flow are also incorporated into the storage-requirement estimates.
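
    For readers unfamiliar with mass-curve analysis, the related sequent-peak calculation below shows how a storage requirement follows from an inflow record and a constant demand. It is an illustrative sketch with invented monthly inflows, not the report's nonsequential procedure or its evaporation adjustments.

      # Classic sequent-peak estimate of the reservoir capacity needed to
      # meet a constant demand from an inflow record (units arbitrary).
      def required_storage(inflows, demand):
          """Return the storage needed so that demand is met every period."""
          deficit = 0.0      # running storage deficit
          max_deficit = 0.0  # largest deficit = required capacity
          for q in inflows:
              deficit = max(0.0, deficit + demand - q)
              max_deficit = max(max_deficit, deficit)
          return max_deficit

      monthly_inflow = [12.0, 9.0, 4.0, 2.0, 1.5, 1.0,
                        0.8, 1.2, 3.0, 6.0, 9.0, 11.0]
      print(required_storage(monthly_inflow, demand=4.0))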

  18. Optimizing focal plane electric field estimation for detecting exoplanets

    NASA Astrophysics Data System (ADS)

    Groff, T.; Kasdin, N. J.; Riggs, A. J. E.

    Detecting extrasolar planets with angular separations and contrast levels similar to Earth requires a large space-based observatory and advanced starlight suppression techniques. This paper focuses on techniques employing an internal coronagraph, which is highly sensitive to optical errors and must rely on focal plane wavefront control techniques to achieve the necessary contrast levels. To maximize the available science time for a coronagraphic mission we demonstrate an estimation scheme using a discrete time Kalman filter. The state estimate feedback inherent to the filter allows us to minimize the number of exposures required to estimate the electric field. We also show progress including a bias estimate into the Kalman filter to eliminate incoherent light from the estimate. Since the exoplanets themselves are incoherent to the star, this has the added benefit of using the control history to gain certainty in the location of exoplanet candidates as the signal-to-noise between the planets and speckles improves. Having established a purely focal plane based wavefront estimation technique, we discuss a sensor fusion concept where alternate wavefront sensors feedforward a time update to the focal plane estimate to improve robustness to time varying speckle. The overall goal of this work is to reduce the time required for wavefront control on a target, thereby improving the observatory's planet detection performance by increasing the number of targets reachable during the lifespan of the mission.
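
    As a minimal illustration of the estimation scheme's core, the sketch below runs a discrete-time Kalman filter with the usual predict/update recursion. The state model, probe matrix, and noise levels are hypothetical placeholders, not the paper's actual focal-plane system.

      # Discrete-time Kalman filter sketch: estimate one pixel's field
      # (real and imaginary parts) from noisy probe measurements.
      import numpy as np

      def kalman_step(x, P, z, H, F, Q, R):
          # Time update (prediction)
          x_pred = F @ x
          P_pred = F @ P @ F.T + Q
          # Measurement update
          S = H @ P_pred @ H.T + R                 # innovation covariance
          K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
          x_new = x_pred + K @ (z - H @ x_pred)
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new

      F = np.eye(2)                  # field assumed static between exposures
      Q = 1e-8 * np.eye(2)           # small process noise (speckle drift)
      H = np.array([[1.0, 0.0]])     # probe senses the real part (placeholder)
      R = np.array([[1e-6]])         # exposure (shot) noise
      x, P = np.zeros(2), np.eye(2)
      for z in ([0.011], [0.009], [0.010]):        # simulated probe data
          x, P = kalman_step(x, P, np.array(z), H, F, Q, R)
      print(x)

    The state-estimate feedback is visible in the recursion: each exposure refines x and shrinks P, which is what reduces the number of exposures needed per control iteration.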

  19. Lunar Habitat Optimization Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    SanScoucie, M. P.; Hull, P. V.; Tinker, M. L.; Dozier, G. V.

    2007-01-01

    Long-duration surface missions to the Moon and Mars will require bases to accommodate habitats for the astronauts. Transporting the materials and equipment required to build the necessary habitats is costly and difficult. The materials chosen for the habitat walls play a direct role in protection against each of the mentioned hazards. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Clearly, an optimization method is warranted for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat wall design tool utilizing genetic algorithms (GAs) has been developed. GAs use a "survival of the fittest" philosophy where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multiobjective formulation of up-mass, heat loss, structural analysis, meteoroid impact protection, and radiation protection. This Technical Publication presents the research and development of this tool as well as a technique for finding the optimal GA search parameters.
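
    The GA mechanics described here can be sketched compactly. The toy example below evolves wall-layer thicknesses against a weighted two-objective fitness; the "mass" and "heat loss" models are invented placeholders, not NASA's multiobjective tool.

      # Toy genetic algorithm: selection, one-point crossover, mutation.
      import random
      random.seed(1)

      def fitness(thicknesses):
          mass = sum(thicknesses)                          # up-mass proxy
          heat_loss = 1.0 / (0.1 + sum(thicknesses))       # insulation proxy
          return -(0.7 * mass + 0.3 * heat_loss)           # higher is fitter

      def evolve(pop_size=40, genes=3, generations=60):
          pop = [[random.uniform(0.01, 0.5) for _ in range(genes)]
                 for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=fitness, reverse=True)
              survivors = pop[:pop_size // 2]     # "survival of the fittest"
              children = []
              while len(children) < pop_size - len(survivors):
                  a, b = random.sample(survivors, 2)
                  cut = random.randrange(1, genes)          # one-point crossover
                  child = a[:cut] + b[cut:]
                  if random.random() < 0.2:                 # mutation
                      i = random.randrange(genes)
                      child[i] = min(0.5, max(0.01,
                                     child[i] + random.gauss(0, 0.05)))
                  children.append(child)
              pop = survivors + children
          return max(pop, key=fitness)

      print(evolve())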

  20. Future US energy demands based upon traditional consumption patterns lead to requirements which significantly exceed domestic supply

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Energy consumption in the United States has risen in response to both increasing population and increasing levels of affluence. Depletion of domestic energy reserves requires consumption modulation, production of fossil fuels, more efficient conversion techniques, and large-scale transitions to non-fossil fuel energy sources. Widening disparity between the wealthy and poor nations of the world contributes to trends that increase the likelihood of group action by the lesser developed countries to achieve political and economic goals. The formation of anticartel cartels is envisioned.

  1. Habitat Design Optimization and Analysis

    NASA Technical Reports Server (NTRS)

    SanSoucie, Michael P.; Hull, Patrick V.; Tinker, Michael L.

    2006-01-01

    Long-duration surface missions to the Moon and Mars will require habitats for the astronauts. The materials chosen for the habitat walls play a direct role in the protection against the harsh environments found on the surface. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Advanced optimization techniques are necessary for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat design optimization tool utilizing genetic algorithms has been developed. Genetic algorithms use a "survival of the fittest" philosophy, where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multi-objective formulation of structural analysis, heat loss, radiation protection, and meteoroid protection. This paper presents the research and development of this tool.

  2. Technique for Very High Order Nonlinear Simulation and Validation

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.

    2001-01-01

    Finding the sources of sound in large nonlinear fields via direct simulation currently requires excessive computational cost. This paper describes a simple technique for efficiently solving the multidimensional nonlinear Euler equations that significantly reduces this cost and demonstrates a useful approach for validating high-order nonlinear methods. Methods of up to 15th-order accuracy in space and time were compared, and it is shown that an algorithm with a fixed design accuracy approaches its maximal utility and then its usefulness decays exponentially unless higher accuracy is used. It is concluded that at least a 7th-order method is required to efficiently propagate a harmonic wave, using the nonlinear Euler equations, to a distance of 5 wavelengths while maintaining an overall error tolerance low enough to capture both the mean flow and the acoustics.

  3. An Automatic Segmentation Method Combining an Active Contour Model and a Classification Technique for Detecting Polycomb-group Proteins in High-Throughput Microscopy Images.

    PubMed

    Gregoretti, Francesco; Cesarini, Elisa; Lanzuolo, Chiara; Oliva, Gennaro; Antonelli, Laura

    2016-01-01

    The large amount of data generated in biological experiments that rely on advanced microscopy can be handled only with automated image analysis. Most analyses require reliable cell image segmentation, ideally capable of detecting subcellular structures. We present an automatic segmentation method to detect Polycomb group (PcG) protein areas isolated from nuclei regions in high-resolution fluorescent cell image stacks. It combines two segmentation algorithms, based on an active contour model and a classification technique, and serves as a tool to better understand the subcellular three-dimensional distribution of PcG proteins in live-cell image sequences. We obtained accurate results across several cell image datasets, coming from different cell types and corresponding to different fluorescent labels, without requiring elaborate adjustments for each dataset.

  4. Space processing of crystalline materials: A study of known methods of electrical characterization of semiconductors

    NASA Technical Reports Server (NTRS)

    Castle, J. G.

    1976-01-01

    A literature survey is presented covering nondestructive methods of electrical characterization of semiconductors. A synopsis of each technique deals with the applicability of the techniques to various device parameters and to potential in-flight use before, during, and after growth experiments on space flights. It is concluded that the very recent surge in the commercial production of large scale integrated circuitry and other semiconductor arrays requiring uniformity on the scale of a few microns, involves nondestructive test procedures which could well be useful to NASA for in-flight use in space processing.

  5. Development of lightweight graphite/polyimide sandwich panels. Phase 2: Thin gage material manufacture

    NASA Technical Reports Server (NTRS)

    Merlette, J. B.

    1972-01-01

    Thin gage materials selected and the rationale for their basic requirements are discussed. The resin used in all prepreg manufacture is Monsanto RS-6234 polyimide. The selected fiber for core manufacture is Hercules HT-S, and the selected fiber for face sheets is Hercules HM-S. The technique for making thin gage prepreg was to wind spread carbon fiber tows into a resin film on a large drum. This technique was found to be superior to others investigated. A total of 22 pounds of 1 to 2 mil/ply prepreg was fabricated for use on the program.

  6. Development of a Direct Fabrication Technique for Full-Shell X-Ray Optics

    NASA Technical Reports Server (NTRS)

    Gubarev, M.; Kolodziejczak, J. K.; Griffith, C.; Roche, J.; Smith, W. S.; Kester, T.; Atkins, C.; Arnold, W.; Ramsey, B.

    2016-01-01

    Future astrophysical missions will require fabrication technology capable of producing high angular resolution x-ray optics. A full-shell direct fabrication approach using modern robotic polishing machines has the potential for producing high resolution, light-weight and affordable x-ray mirrors that can be nested to produce large collecting area. This approach to mirror fabrication, based on the use of the metal substrates coated with nickel phosphorous alloy, is being pursued at MSFC. The design of the polishing fixtures for the direct fabrication, the surface figure metrology techniques used and the results of the polishing experiments are presented.

  7. A non-parametric, supervised classification of vegetation types on the Kaibab National Forest using decision trees

    Treesearch

    Suzanne M. Joy; R. M. Reich; Richard T. Reynolds

    2003-01-01

    Traditional land classification techniques for large areas that use Landsat Thematic Mapper (TM) imagery are typically limited to the fixed spatial resolution of the sensors (30m). However, the study of some ecological processes requires land cover classifications at finer spatial resolutions. We model forest vegetation types on the Kaibab National Forest (KNF) in...
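
    As an illustration of the general approach (not the authors' model), a supervised, non-parametric decision tree can be trained on per-pixel predictors and used to classify vegetation types. The band values and class labels below are fabricated stand-ins for TM imagery and field plots.

      # Decision-tree classification of vegetation types from predictors.
      from sklearn.tree import DecisionTreeClassifier

      # Rows: per-pixel predictor values (e.g., TM bands, terrain variables)
      X_train = [[0.12, 0.30, 0.55], [0.10, 0.28, 0.60],
                 [0.45, 0.50, 0.20], [0.48, 0.52, 0.18]]
      y_train = ["ponderosa", "ponderosa", "meadow", "meadow"]

      clf = DecisionTreeClassifier(max_depth=3, random_state=0)
      clf.fit(X_train, y_train)
      print(clf.predict([[0.11, 0.29, 0.58]]))   # -> ['ponderosa']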

  8. Clustering Patterns of Engagement in Massive Open Online Courses (MOOCs): The Use of Learning Analytics to Reveal Student Categories

    ERIC Educational Resources Information Center

    Khalil, Mohammad; Ebner, Martin

    2017-01-01

    Massive Open Online Courses (MOOCs) are remote courses that excel in their students' heterogeneity and quantity. Because of this massive scale, the large datasets generated by MOOC platforms require advanced tools and techniques to reveal hidden patterns for purposes of enhancing learning and educational behaviors. This publication…
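
    A common realization of this kind of learning-analytics clustering is k-means over per-student engagement features. The sketch below uses invented feature values and is not the paper's actual pipeline.

      # Cluster students by engagement (videos watched, quiz attempts,
      # forum posts); standardize first so no feature dominates.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      engagement = np.array([[40, 12, 5], [38, 10, 7], [3, 1, 0],
                             [2, 0, 0], [20, 6, 1], [22, 5, 2]])
      X = StandardScaler().fit_transform(engagement)
      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
      print(labels)   # cluster id per student, e.g., active / sampler / dropout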

  9. Post-Fire Changes in Forest Biomass Retrieved by Airborne LiDAR in Amazonia

    Treesearch

    Luciane Sato; Vitor Gomes; Yosio Shimabukuro; Michael Keller; Egidio Arai; Maiza dos-Santos; Irving Brown; Luiz Aragão

    2016-01-01

    Fire is one of the main factors directly impacting Amazonian forest biomass and dynamics. Because of Amazonia’s large geographical extent, remote sensing techniques are required for comprehensively assessing forest fire impacts at the landscape level. In this context, Light Detection and Ranging (LiDAR) stands out as a technology capable of retrieving direct...

  10. Landscape-scale parameterization of a tree-level forest growth model: a k-nearest neighbor imputation approach incorporating LiDAR data

    Treesearch

    Michael J. Falkowski; Andrew T. Hudak; Nicholas L. Crookston; Paul E. Gessler; Edward H. Uebler; Alistair M. S. Smith

    2010-01-01

    Sustainable forest management requires timely, detailed forest inventory data across large areas, which is difficult to obtain via traditional forest inventory techniques. This study evaluated k-nearest neighbor imputation models incorporating LiDAR data to predict tree-level inventory data (individual tree height, diameter at breast height, and...
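
    The core of k-nearest-neighbor imputation can be sketched in a few lines: predict an unmeasured plot's attribute from its nearest neighbors in LiDAR-metric space. The metrics and measurements below are fabricated examples, not the study's data.

      # k-NN imputation of a stand attribute from LiDAR metrics.
      from sklearn.neighbors import KNeighborsRegressor

      # Reference plots: [mean canopy height, canopy cover] from LiDAR
      lidar_metrics = [[18.2, 0.75], [12.1, 0.55], [25.3, 0.90], [8.4, 0.30]]
      basal_area = [32.0, 21.5, 44.8, 12.3]       # field-measured response

      knn = KNeighborsRegressor(n_neighbors=2).fit(lidar_metrics, basal_area)
      print(knn.predict([[16.0, 0.70]]))          # imputed basal area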

  11. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    NASA Astrophysics Data System (ADS)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
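
    The scaling step lends itself to a compact illustration. Assuming hypothetical layer fractions and optical properties (not the paper's values), the sketch applies the time-in-layer-weighted Beer-Lambert factor to a stand-in zero-absorption reflectance curve.

      # Scale a zero-absorption time-resolved curve with a weighted
      # Beer-Lambert factor: R(t) = R0(t) * exp(-(sum_i f_i*mu_a_i)*v*t).
      import numpy as np

      c_tissue = 0.214                           # light speed in tissue, mm/ps
      t = np.linspace(10, 500, 50)               # ps
      R0 = t**-1.5 * np.exp(-30.0 / t)           # stand-in zero-absorption curve

      mu_a = np.array([0.002, 0.010])            # absorption per layer, 1/mm
      frac = np.array([0.6, 0.4])                # fraction of path time per layer
      R = R0 * np.exp(-(frac @ mu_a) * c_tissue * t)   # weighted Beer-Lambert
      print(R[:3])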

  12. Experimental measurement of structural power flow on an aircraft fuselage

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.

    1991-01-01

    An experimental technique is used to measure structural intensity through an aircraft fuselage with an excitation load applied near one of the wing attachment locations. The fuselage is a relatively large structure, requiring a large number of measurement locations to analyze the whole of the structure. For the measurement of structural intensity, multiple point measurements are necessary at every location of interest. A tradeoff is therefore required between the number of measurement transducers, the mounting of these transducers, and the accuracy of the measurements. Using four transducers mounted on a bakelite platform, structural intensity vectors are measured at locations distributed throughout the fuselage. To minimize the errors associated with using the four transducer technique, the measurement locations are selected to be away from bulkheads and stiffeners. Furthermore, to eliminate phase errors between the four transducer measurements, two sets of data are collected for each position, with the orientation of the platform with the four transducers rotated by 180 degrees and an average taken between the two sets of data. The results of these measurements together with a discussion of the suitability of the approach for measuring structural intensity on a real structure are presented.

  13. [Multiple colonic anastomoses in the surgical treatment of short bowel syndrome. A new technique].

    PubMed

    Robledo-Ogazón, Felipe; Becerril-Martínez, Guillermo; Hernández-Saldaña, Víctor; Zavala-Aznar, María Luisa; Bojalil-Durán, Luis

    2008-01-01

    Some surgical pathologies eventually require intestinal resection. This may lead to an extended procedure such as leaving 30 cm of proximal jejunum and left and sigmoid colon. One of the most important consequences of this type of resection is "intestinal failure" or short bowel syndrome. This complex syndrome leads to different metabolic and water and acid/base imbalances, as well as nutritional and immunological challenges along with the problem accompanying an abdomen subjected to many surgical procedures and high mortality. Many surgical techniques have been developed to improve quality of life of patients. We designed a non-transplant surgical approach and performed the procedure on two patients with postoperative short bowel syndrome with <40 cm of proximal jejunum and left colon. There are a variety of non-transplant surgical procedures that, due to their complex technique or high mortality rate, have not resolved this important problem. However, the technique we present in this work can be performed by a large number of surgeons. The procedure has a low morbimortality rate and offers the opportunity for better control of metabolic and acid/base balance, intestinal transit and proper nutrition. We consider that this technique offers a new alternative for the complex management required by patients with short bowel syndrome and facilitates their long-term nutritional control.

  14. Investigation on experimental techniques to detect, locate and quantify gear noise in helicopter transmissions

    NASA Technical Reports Server (NTRS)

    Flanagan, P. M.; Atherton, W. J.

    1985-01-01

    A robotic system to automate the detection, location, and quantification of gear noise using acoustic intensity measurement techniques has been successfully developed. Major system components fabricated under this grant include an instrumentation robot arm, a robot digital control unit, and system software. A commercial desktop computer, spectrum analyzer, and two-microphone probe complete the equipment required for the Robotic Acoustic Intensity Measurement System (RAIMS). Large-scale acoustic studies of gear noise in helicopter transmissions cannot be performed accurately and reliably using presently available instrumentation and techniques. Operator safety is a major concern in certain gear noise studies due to the operating environment. The man-hours needed to document a noise field in situ are another shortcoming of present techniques. RAIMS was designed to reduce the labor and hazard in collecting data and to improve the accuracy and repeatability of characterizing the acoustic field by automating the measurement process. Using RAIMS, a system operator can remotely control the instrumentation robot to scan surface areas and volumes, generating acoustic intensity information using the two-microphone technique. Acoustic intensity studies requiring hours of scan time can be performed automatically without operator assistance. During a scan sequence, the acoustic intensity probe is positioned by the robot and acoustic intensity data is collected, processed, and stored.
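
    The two-microphone technique estimates intensity from the imaginary part of the cross-spectrum between closely spaced microphones, I(w) ~ Im{G12(w)}/(rho*w*dr), with the sign depending on FFT and ordering conventions. The sketch below applies that classical estimator to synthetic signals; it is not the RAIMS software.

      # Two-microphone cross-spectral acoustic intensity estimate.
      import numpy as np
      from scipy.signal import csd

      fs, rho, dr = 8192.0, 1.21, 0.012   # sample rate, air density, mic spacing
      t = np.arange(0, 2.0, 1.0 / fs)
      p1 = np.sin(2 * np.pi * 400 * t)                              # mic 1
      p2 = (np.sin(2 * np.pi * 400 * t - 0.1)
            + 0.01 * np.random.randn(t.size))                       # mic 2

      f, G12 = csd(p1, p2, fs=fs, nperseg=1024)
      omega = 2 * np.pi * np.clip(f, 1e-6, None)    # avoid divide-by-zero at f=0
      I = np.imag(G12) / (rho * omega * dr)         # intensity spectrum estimate
      print(f[np.argmax(np.abs(I))])                # ~400 Hz peak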

  15. Automatic scoring of dicentric chromosomes as a tool in large scale radiation accidents.

    PubMed

    Romm, H; Ainsbury, E; Barnard, S; Barrios, L; Barquinero, J F; Beinke, C; Deperas, M; Gregoire, E; Koivistoinen, A; Lindholm, C; Moquet, J; Oestreicher, U; Puig, R; Rothkamm, K; Sommer, S; Thierens, H; Vandersickel, V; Vral, A; Wojcik, A

    2013-08-30

    Mass casualty scenarios of radiation exposure require high throughput biological dosimetry techniques for population triage in order to rapidly identify individuals who require clinical treatment. The manual dicentric assay is a highly suitable technique, but it is also very time consuming and requires well trained scorers. In the framework of the MULTIBIODOSE EU FP7 project, semi-automated dicentric scoring has been established in six European biodosimetry laboratories. Whole blood was irradiated with a Co-60 gamma source resulting in 8 different doses between 0 and 4.5Gy and then shipped to the six participating laboratories. To investigate two different scoring strategies, cell cultures were set up with short term (2-3h) or long term (24h) colcemid treatment. Three classifiers for automatic dicentric detection were applied, two of which were developed specifically for these two different culture techniques. The automation procedure included metaphase finding, capture of cells at high resolution and detection of dicentric candidates. The automatically detected dicentric candidates were then evaluated by a trained human scorer, which led to the term 'semi-automated' being applied to the analysis. The six participating laboratories established at least one semi-automated calibration curve each, using the appropriate classifier for their colcemid treatment time. There was no significant difference between the calibration curves established, regardless of the classifier used. The ratio of false positive to true positive dicentric candidates was dose dependent. The total staff effort required for analysing 150 metaphases using the semi-automated approach was 2 min as opposed to 60 min for manual scoring of 50 metaphases. Semi-automated dicentric scoring is a useful tool in a large scale radiation accident as it enables high throughput screening of samples for fast triage of potentially exposed individuals. Furthermore, the results from the participating laboratories were comparable which supports networking between laboratories for this assay. Copyright © 2013 Elsevier B.V. All rights reserved.
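
    Calibration curves for the dicentric assay are conventionally linear-quadratic in dose. As an illustration with invented yields (not the MULTIBIODOSE data), the sketch fits Y = c + alpha*D + beta*D^2 and inverts it for triage dose estimation.

      # Fit a linear-quadratic dose-response curve and invert it.
      import numpy as np
      from scipy.optimize import curve_fit

      def lq(D, c, alpha, beta):
          return c + alpha * D + beta * D**2

      dose = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.5])              # Gy
      yield_dic = np.array([0.001, 0.03, 0.09, 0.28, 0.55, 1.10])  # dic/cell

      (c, alpha, beta), _ = curve_fit(lq, dose, yield_dic,
                                      p0=[0.001, 0.02, 0.05])
      print(c, alpha, beta)

      # Triage use: estimate dose from an observed dicentric yield.
      Y = 0.4
      D_est = (-alpha + np.sqrt(alpha**2 + 4 * beta * (Y - c))) / (2 * beta)
      print(D_est)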

  16. Strategies to overcome size and mechanical disadvantages in manual therapy

    PubMed Central

    Hazle, Charles R.; Lee, Matthew

    2016-01-01

    The practice of manual therapy (MT) is often difficult when providing care for large patients and for practitioners small in stature or with other physical limitations. Many MT techniques can be modified using simple principles to require less exertion, permitting consistency with standards of practice even in the presence of physical challenges. Commonly used MT techniques are herein described and demonstrated with alternative preparatory and movement methods, which can also be adopted for use in other techniques. These alternative techniques and the procedures used to adapt them warrant discussion among practitioners and educators in order to implement care, consistent with the best treatment evidence for many common musculoskeletal (MSK) conditions. The inclusion in educational curricula and MT training programs is recommended to enrich skill development in physical therapists (PTs), spanning entry-level practitioners to those pursuing advanced manual skills. PMID:27559282

  17. Applicability and Limitations of Reliability Allocation Methods

    NASA Technical Reports Server (NTRS)

    Cruz, Jose A.

    2016-01-01

    The reliability allocation process may be described as the process of assigning reliability requirements to individual components within a system to attain the specified system reliability. For large systems, the allocation process is often performed at different stages of system design. The allocation process often begins at the conceptual stage. As the system design develops and more information about components and the operating environment becomes available, different allocation methods can be considered. Reliability allocation methods are usually divided into two categories: weighting factors and optimal reliability allocation. When properly applied, these methods can produce reasonable approximations. Reliability allocation techniques have limitations and implied assumptions that need to be understood by system engineers. Applying reliability allocation techniques without understanding their limitations and assumptions can produce unrealistic results. This report addresses weighting factors and optimal reliability allocation techniques, and identifies the applicability and limitations of each reliability allocation technique.
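
    A minimal example of the weighting-factor category is an ARINC-style allocation, which apportions a series-system failure-rate goal in proportion to predicted component failure rates. The component data below are hypothetical.

      # ARINC-style weighting-factor allocation for a series system.
      import math

      predicted = {"pump": 120e-6, "valve": 40e-6, "controller": 40e-6}  # 1/h
      system_goal = 150e-6                                               # 1/h

      total = sum(predicted.values())
      allocated = {name: (lam / total) * system_goal
                   for name, lam in predicted.items()}

      t = 1000.0  # mission time, hours
      for name, lam in allocated.items():
          print(name, lam, math.exp(-lam * t))  # allocated rate and reliability

      # Check: allocated rates sum to the system goal for a series system.
      assert abs(sum(allocated.values()) - system_goal) < 1e-12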

  18. A special planning technique for stream-aquifer systems

    USGS Publications Warehouse

    Jenkins, C.T.; Taylor, O. James

    1974-01-01

    The potential effects of water-management plans on stream-aquifer systems in several countries have been simulated using electric-analog or digital-computer models. Many of the electric-analog models require large amounts of hardware preparation for each problem to be solved and some become so bulky that they present serious space and access problems. Digital-computer models require no special hardware preparation but often they require so many repetitive solutions of equations that they result in calculations that are unduly unwieldy and expensive, even on the latest generation of computers. Further, the more detailed digital models require a vast amount of core storage, leaving insufficient storage for evaluation of the many possible schemes of water-management. A concept introduced in 1968 by the senior author of this report offers a solution to these problems. The concept is that the effects on streamflow of ground-water withdrawal or recharge (stress) at any point in such a system can be approximated using two classical equations and a value of time that reflects the integrated effect of the following: irregular impermeable boundaries; stream meanders; aquifer properties and their areal variations; distance of the point from the stream; and imperfect hydraulic connection between the stream and the aquifer. The value of time is called the stream depletion factor (sdf). Results of a relatively few tests on detailed models can be summarized on maps showing lines through points of equal sdf. Sensitivity analyses of models of two large stream-aquifer systems in the State of Colorado show that the sdf technique described in this report provides results within tolerable ranges of error. The sdf technique is extremely versatile, allowing water managers to choose the degree of detail that best suits their needs and available computational hardware. Simple arithmetic, using, for example, only a slide rule and charts or tables of dimensionless values, will be sufficient for many calculations. If a large digital computer is available, detailed description of the system and its stresses will require only a fraction of the core storage, leaving the greater part of the storage available for sophisticated analyses, such as optimization. Once these analyses have been made, the model then is ready to perform its principal task--prediction of streamflow and changes in ground-water storage. In the two systems described in this report, direct diversion from the streams is the principal source of irrigation water, but it is supplemented by numerous wells. The streamflow depends largely on snowmelt. Estimates of both the amount and timing of runoff from snowmelt during the irrigation season are available on a monthly basis during the spring and early summer. These estimates become increasingly accurate as the season progresses, hence frequent changes of stress on the predictive model are necessary. The sdf technique is especially well suited to this purpose, because it is very easy to make such changes, resulting in more up-to-date estimates of the availability of streamflow and ground-water storage. These estimates can be made for any time and any location in the system.
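
    The sdf arithmetic referred to above really is simple enough for hand calculation. Using Jenkins' classical relation q/Q = erfc(sqrt(sdf/(4t))) with sdf = a^2*S/T (distance a, storativity S, transmissivity T), the sketch below computes the fraction of pumping supplied by stream depletion; the aquifer values are invented examples, not Colorado data.

      # Stream depletion fraction vs. pumping time (after Jenkins, 1968).
      from math import erfc, sqrt

      a = 500.0      # distance from well to stream, m
      S = 0.20       # specific yield (dimensionless)
      T = 1500.0     # transmissivity, m^2/day
      sdf = a**2 * S / T          # ~33 days

      for t in (10.0, 33.0, 100.0, 365.0):        # days of pumping
          print(t, erfc(sqrt(sdf / (4.0 * t))))   # fraction of Q from the stream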

  19. Shape component analysis: structure-preserving dimension reduction on biological shape spaces.

    PubMed

    Lee, Hao-Chih; Liao, Tao; Zhang, Yongjie Jessica; Yang, Ge

    2016-03-01

    Quantitative shape analysis is required by a wide range of biological studies across diverse scales, ranging from molecules to cells and organisms. In particular, high-throughput and systems-level studies of biological structures and functions have started to produce large volumes of complex high-dimensional shape data. Analysis and understanding of high-dimensional biological shape data require dimension-reduction techniques. We have developed a technique for non-linear dimension reduction of 2D and 3D biological shape representations on their Riemannian spaces. A key feature of this technique is that it preserves distances between different shapes in an embedded low-dimensional shape space. We demonstrate an application of this technique by combining it with non-linear mean-shift clustering on the Riemannian spaces for unsupervised clustering of shapes of cellular organelles and proteins. Source code and data for reproducing results of this article are freely available at https://github.com/ccdlcmu/shape_component_analysis_Matlab. The implementation was made in MATLAB and supported on MS Windows, Linux and Mac OS. geyang@andrew.cmu.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
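
    The distance-preserving embedding at the heart of such methods can be illustrated with classical multidimensional scaling on a precomputed distance matrix. This stands in for, and is much simpler than, the article's Riemannian-space technique; the distances below are made up.

      # Distance-preserving 2-D embedding of pairwise shape distances.
      import numpy as np
      from sklearn.manifold import MDS

      D = np.array([[0.0, 1.0, 4.0],
                    [1.0, 0.0, 3.2],
                    [4.0, 3.2, 0.0]])          # hypothetical shape distances

      mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
      coords = mds.fit_transform(D)
      print(coords)                            # low-dimensional shape space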

  20. Clinical application of microsampling versus conventional sampling techniques in the quantitative bioanalysis of antibiotics: a systematic review.

    PubMed

    Guerra Valero, Yarmarly C; Wallis, Steven C; Lipman, Jeffrey; Stove, Christophe; Roberts, Jason A; Parker, Suzanne L

    2018-03-01

    Conventional sampling techniques for clinical pharmacokinetic studies often require the removal of large blood volumes from patients. This can result in a physiological or emotional burden, particularly for neonates or pediatric patients. Antibiotic pharmacokinetic studies are typically performed on healthy adults or general ward patients. These may not account for alterations to a patient's pathophysiology and can lead to suboptimal treatment. Microsampling offers an important opportunity for clinical pharmacokinetic studies in vulnerable patient populations, where smaller sample volumes can be collected. This systematic review provides a description of currently available microsampling techniques and an overview of studies reporting the quantitation and validation of antibiotics using microsampling. A comparison of microsampling to conventional sampling in clinical studies is included.

  1. The integration of system specifications and program coding

    NASA Technical Reports Server (NTRS)

    Luebke, W. R.

    1970-01-01

    Experience in maintaining up-to-date documentation for one module of the large-scale Medical Literature Analysis and Retrieval System 2 (MEDLARS 2) is described. Several innovative techniques were explored in the development of this system's data management environment, particularly those that use PL/I as an automatic documenter. The PL/I data description section can provide automatic documentation by means of a master description of data elements that has long and highly meaningful mnemonic names and a formalized technique for the production of descriptive commentary. The techniques discussed are practical methods that employ the computer during system development in a manner that assists system implementation, provides interim documentation for customer review, and satisfies some of the deliverable documentation requirements.

  2. a Spatiotemporal Aggregation Query Method Using Multi-Thread Parallel Technique Based on Regional Division

    NASA Astrophysics Data System (ADS)

    Liao, S.; Chen, L.; Li, J.; Xiong, W.; Wu, Q.

    2015-07-01

    Existing spatiotemporal databases support spatiotemporal aggregation queries over massive moving-object datasets. Due to the large amounts of data and single-thread processing methods, the query speed cannot meet application requirements. On the other hand, query efficiency is more sensitive to spatial variation than to temporal variation. In this paper, we propose a spatiotemporal aggregation query method using a multi-thread parallel technique based on regional division and implement it on the server. Concretely, we divide the spatiotemporal domain into several spatiotemporal cubes, compute the spatiotemporal aggregation on all cubes using multi-thread parallel processing, and then integrate the query results. Tests and analysis on real datasets show that this method improves query speed significantly.
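
    The divide-compute-merge pattern is easy to sketch. The example below partitions points into spatial cells and aggregates counts in parallel workers before merging; it uses processes rather than the paper's threads (Python threads do not parallelize CPU work), and the cell size and count aggregate are arbitrary demo choices.

      # Parallel spatial aggregation: partition, aggregate per worker, merge.
      from concurrent.futures import ProcessPoolExecutor
      from collections import Counter

      def cell_of(p, size=10.0):
          return (int(p[0] // size), int(p[1] // size))

      def aggregate(chunk):
          return Counter(cell_of(p) for p in chunk)   # per-cell point counts

      if __name__ == "__main__":
          points = [(x * 0.7, x * 1.3) for x in range(100_000)]
          chunks = [points[i::4] for i in range(4)]   # 4 roughly equal parts
          with ProcessPoolExecutor(max_workers=4) as ex:
              partials = ex.map(aggregate, chunks)
          total = Counter()
          for c in partials:
              total.update(c)                          # merge partial results
          print(len(total))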

  3. Conically scanned lidar telescope using holographic optical elements

    NASA Technical Reports Server (NTRS)

    Schwemmer, Geary K.; Wilkerson, Thomas D.

    1992-01-01

    Holographic optical elements (HOE) using volume phase holograms make possible a new class of lightweight scanning telescopes having advantages for lidar remote sensing instruments. So far, the only application of HOE's to lidar has been a non-scanning receiver for a laser range finder. We introduce a large aperture, narrow field of view (FOV) telescope used in a conical scanning configuration, having a much smaller rotating mass than in conventional designs. Typically, lidars employ a large aperture collector and require a narrow FOV to limit the amount of skylight background. Focal plane techniques are not good approaches to scanning because they require a large FOV within which to scan a smaller FOV mirror or detector array. Thus, scanning lidar systems have either used a large flat scanning mirror at which the receiver telescope is pointed, or the entire telescope is steered. We present a concept for a conically scanned lidar telescope in which the only moving part is the HOE which serves as the primary collecting optic. We also describe methods by which a multiplexed HOE can be used simultaneously as a dichroic beamsplitter.

  4. A rapid method for the sampling of atmospheric water vapour for isotopic analysis.

    PubMed

    Peters, Leon I; Yakir, Dan

    2010-01-01

    Analysis of the stable isotopic composition of atmospheric moisture is widely applied in the environmental sciences. Traditional methods for obtaining isotopic compositional data from ambient moisture have required complicated sampling procedures, expensive and sophisticated distillation lines, hazardous consumables, and lengthy treatments prior to analysis. Newer laser-based techniques are expensive and usually not suitable for large-scale field campaigns, especially in cases where access to mains power is not feasible or high spatial coverage is required. Here we outline the construction and usage of a novel vapour-sampling system based on a battery-operated Stirling cycle cooler, which is simple to operate, does not require any consumables, or post-collection distillation, and is light-weight and highly portable. We demonstrate the ability of this system to reproduce delta(18)O isotopic compositions of ambient water vapour, with samples taken simultaneously by a traditional cryogenic collection technique. Samples were collected over 1 h directly into autosampler vials and were analysed by mass spectrometry after pyrolysis of 1 microL aliquots to CO. This yielded an average error of < +/-0.5 per thousand, approximately equal to the signal-to-noise ratio of traditional approaches. This new system provides a rapid and reliable alternative to conventional cryogenic techniques, particularly in cases requiring high sample throughput or where access to distillation lines, slurry maintenance or mains power is not feasible. Copyright 2009 John Wiley & Sons, Ltd.

  5. Use of a cocktail of spin traps for fingerprinting large range of free radicals in biological systems

    PubMed Central

    Marchand, Valérie; Charlier, Nicolas; Verrax, Julien; Buc-Calderon, Pedro; Levêque, Philippe; Gallez, Bernard

    2017-01-01

    It is well established that the formation of radical species centered on various atoms is involved in the mechanism leading to the development of several diseases or to the appearance of deleterious effects of toxic molecules. The detection of free radicals is possible using Electron Paramagnetic Resonance (EPR) spectroscopy and the spin trapping technique. The classical EPR spin-trapping technique can be considered as a “hypothesis-driven” approach because it requires an a priori assumption regarding the nature of the free radical in order to select the most appropriate spin-trap. We here describe a “data-driven” approach using EPR and a cocktail of spin-traps. The rationale for using this cocktail was that it would cover a wide range of biologically relevant free radicals and have a large range of hydrophilicity and lipophilicity in order to trap free radicals produced in different cellular compartments. As a proof-of-concept, we validated the ability of the system to measure a large variety of free radicals (O-, N-, C-, or S- centered) in well characterized conditions, and we illustrated the ability of the technique to unambiguously detect free radical production in cells exposed to chemicals known to be radical-mediated toxic agents. PMID:28253308

  6. Use of a cocktail of spin traps for fingerprinting large range of free radicals in biological systems.

    PubMed

    Marchand, Valérie; Charlier, Nicolas; Verrax, Julien; Buc-Calderon, Pedro; Levêque, Philippe; Gallez, Bernard

    2017-01-01

    It is well established that the formation of radical species centered on various atoms is involved in the mechanism leading to the development of several diseases or to the appearance of deleterious effects of toxic molecules. The detection of free radicals is possible using Electron Paramagnetic Resonance (EPR) spectroscopy and the spin trapping technique. The classical EPR spin-trapping technique can be considered as a "hypothesis-driven" approach because it requires an a priori assumption regarding the nature of the free radical in order to select the most appropriate spin-trap. We here describe a "data-driven" approach using EPR and a cocktail of spin-traps. The rationale for using this cocktail was that it would cover a wide range of biologically relevant free radicals and have a large range of hydrophilicity and lipophilicity in order to trap free radicals produced in different cellular compartments. As a proof-of-concept, we validated the ability of the system to measure a large variety of free radicals (O-, N-, C-, or S- centered) in well characterized conditions, and we illustrated the ability of the technique to unambiguously detect free radical production in cells exposed to chemicals known to be radical-mediated toxic agents.

  7. A new way to protect privacy in large-scale genome-wide association studies.

    PubMed

    Kamm, Liina; Bogdanov, Dan; Laur, Sven; Vilo, Jaak

    2013-04-01

    Increased availability of various genotyping techniques has initiated a race for finding genetic markers that can be used in diagnostics and personalized medicine. Although many genetic risk factors are known, key causes of common diseases with complex heritage patterns are still unknown. Identification of such complex traits requires a targeted study over a large collection of data. Ideally, such studies bring together data from many biobanks. However, data aggregation on such a large scale raises many privacy issues. We show how to conduct such studies without violating privacy of individual donors and without leaking the data to third parties. The presented solution has provable security guarantees. Supplementary data are available at Bioinformatics online.
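
    One standard building block for this kind of privacy-preserving aggregation is additive secret sharing over a prime field: each biobank splits its local counts into random shares so that no single server learns anything alone. The toy sketch below is in that spirit only; it is not the paper's provably secure protocol.

      # Additive secret sharing: split counts, aggregate shares, reconstruct.
      import random

      P = 2_147_483_647          # a Mersenne prime as the field modulus

      def share(value, n=3):
          """Split an integer into n shares that sum to value mod P."""
          shares = [random.randrange(P) for _ in range(n - 1)]
          shares.append((value - sum(shares)) % P)
          return shares

      # Two biobanks share their local allele counts with three servers...
      bank_a = share(412)
      bank_b = share(377)
      # ...each server adds the shares it holds, learning nothing alone.
      server_sums = [(a + b) % P for a, b in zip(bank_a, bank_b)]
      print(sum(server_sums) % P)   # -> 789, the aggregate count only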

  8. Conventional and dense gas techniques for the production of liposomes: a review.

    PubMed

    Meure, Louise A; Foster, Neil R; Dehghani, Fariba

    2008-01-01

    The aim of this review paper is to compare the potential of various techniques developed for production of homogenous, stable liposomes. Traditional techniques, such as Bangham, detergent depletion, ether/ethanol injection, reverse-phase evaporation and emulsion methods, were compared with the recent advanced techniques developed for liposome formation. The major hurdles for scaling up the traditional methods are the consumption of large quantities of volatile organic solvent, the stability and homogeneity of the liposomal product, as well as the lengthy multiple steps involved. The new methods have been designed to alleviate the current issues for liposome formulation. Dense gas liposome techniques are still in their infancy, however they have remarkable advantages in reducing the use of organic solvents, providing fast, single-stage production and producing stable, uniform liposomes. Techniques such as the membrane contactor and heating methods are also promising as they eliminate the use of organic solvent, however high temperature is still required for processing.

  9. A suture-based liver retraction method for laparoscopic bariatric procedures: results from a large case series.

    PubMed

    de la Torre, Roger; Scott, J Stephen; Cole, Emily

    2015-01-01

    Laparoscopic bariatric surgery requires retraction of the left lobe of the liver to provide adequate operative view and working space. Conventional approaches utilize a mechanical retractor and require additional incision(s), and at times an assistant. This study evaluated the safety and efficacy of a suture-based method of liver retraction in a large series of patients undergoing laparoscopic bariatric surgery. This method eliminates the need for a subxiphoid incision for mechanical retraction of the liver. Two hospitals in the Midwest with a high volume of laparoscopic bariatric cases. Retrospective chart review identified all patients undergoing bariatric surgery for whom suture-based liver retraction was selected. The left lobe of the liver is lifted, and sutures are placed across the right crus of the diaphragm and were either anchored on the abdominal wall or intraperitoneally to provide static retraction of the left lobe of the liver. In all, 487 cases were identified. Patients had a high rate of morbid obesity (83% with body mass index >40 kg/m(2)) and diabetes (34.3%). The most common bariatric procedures were Roux-en-Y gastric banding (39%) and sleeve gastrectomy (24.6%). Overall, 6 injuries to the liver were noted, only 2 of which were related to the suture-based retraction technique. Both injuries involved minor bleeding and were successfully managed during the procedure. The mean number of incisions required was 4.6. Suture-based liver retraction was found to be safe and effective in this large case series of morbidly obese patients. The rate of complications involving the technique was extremely low (.4%). Copyright © 2015 American Society for Bariatric Surgery. Published by Elsevier Inc. All rights reserved.

  10. Characterization of the electromechanical properties of EAP materials

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph; Sherrita, Stewart; Bhattachary, Kaushik; Lih, Shyh-Shiuh

    2001-01-01

    Electroactive polymers (EAP) are an emerging class of actuation materials. Their large electrically induced strains (longitudinal or bending), low density, mechanical flexibility, and ease of processing offer advantages over traditional electroactive materials. However, before the capability of these materials can be exploited, their electrical and mechanical behavior must be properly quantified. Two general types of EAP can be identified. The first type is ionic EAP, which requires relatively low voltages (<10 V) to achieve large bending deflections. This class usually needs to be hydrated, and electrochemical reactions may occur. The second type is electronic EAP, which involves electrostrictive and/or Maxwell stresses. This type of material requires large electric fields (>100 MV/m) to achieve longitudinal deformations in the range of 4-360%. Some of the difficulties in characterizing EAP include: nonlinear properties, large compliance (large mismatch with metal electrodes), nonhomogeneity resulting from processing, etc. To support the need for reliable data, the authors are developing characterization techniques to quantify the electroactive responses and material properties of EAP materials. The emphasis of the current study is on addressing electromechanical issues related to the ion-exchange type EAP, also known as IPMC. The analysis, experiments, and test results are discussed in this paper.

  11. The MESUR Mission

    NASA Technical Reports Server (NTRS)

    Squyres, S. W.

    1993-01-01

    The MESUR mission will place a network of small, robust landers on the Martian surface, making a coordinated set of observations for at least one Martian year. MESUR presents some major challenges for development of instruments, instrument deployment systems, and onboard data-processing techniques. The instrument payload has not yet been selected, but the straw man payload is (1) a three-axis seismometer; (2) a meteorology package that senses pressure, temperature, wind speed and direction, humidity, and sky brightness; (3) an alpha-proton-X-ray spectrometer (APXS); (4) a thermal analysis/evolved gas analysis (TA/EGA) instrument; (5) a descent imager; (6) a panoramic surface imager; (7) an atmospheric structure instrument (ASI) that senses pressure, temperature, and acceleration during descent to the surface; and (8) radio science. Because of the large number of landers to be sent (about 16), all these instruments must be very lightweight. All but the descent imager and the ASI must survive landing loads that may approach 100 g. The meteorology package, seismometer, and surface imager must be able to survive on the surface for at least one Martian year. The seismometer requires deployment off the lander body. The panoramic imager and some components of the meteorology package require deployment above the lander body. The APXS must be placed directly against one or more rocks near the lander, prompting consideration of a micro rover for deployment of this instrument. The TA/EGA requires a system to acquire, contain, and heat a soil sample. Both the imagers and, especially, the seismometer will be capable of producing large volumes of data, and will require use of sophisticated data compression techniques.

  12. Visually guided tube thoracostomy insertion comparison to standard of care in a large animal model.

    PubMed

    Hernandez, Matthew C; Vogelsang, David; Anderson, Jeff R; Thiels, Cornelius A; Beilman, Gregory; Zielinski, Martin D; Aho, Johnathon M

    2017-04-01

    Tube thoracostomy (TT) is a lifesaving procedure for a variety of thoracic pathologies. The most commonly utilized method for placement involves open dissection and blind insertion. Image guided placement is commonly utilized but is limited by an inability to see distal placement location. Unfortunately, TT is not without complications. We aim to demonstrate the feasibility of a disposable device to allow for visually directed TT placement compared to the standard of care in a large animal model. Three swine were sequentially orotracheally intubated and anesthetized. TT was conducted utilizing a novel visualization device, tube thoracostomy visual trocar (TTVT) and standard of care (open technique). Position of the TT in the chest cavity were recorded using direct thoracoscopic inspection and radiographic imaging with the operator blinded to results. Complications were evaluated using a validated complication grading system. Standard descriptive statistical analyses were performed. Thirty TT were placed, 15 using TTVT technique, 15 using standard of care open technique. All of the TT placed using TTVT were without complication and in optimal position. Conversely, 27% of TT placed using standard of care open technique resulted in complications. Necropsy revealed no injury to intrathoracic organs. Visual directed TT placement using TTVT is feasible and non-inferior to the standard of care in a large animal model. This improvement in instrumentation has the potential to greatly improve the safety of TT. Further study in humans is required. Therapeutic Level II. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Automatic localization of backscattering events due to particulate in urban areas

    NASA Astrophysics Data System (ADS)

    Gaudio, P.; Gelfusa, M.; Malizia, Andrea; Parracino, Stefano; Richetta, M.; Murari, A.; Vega, J.

    2014-10-01

    Particulate matter (PM) emitted by vehicles in urban traffic can greatly affect ambient air quality and has direct implications for both human health and infrastructure integrity. The consequences for society are significant and can also affect national health. Limits and thresholds of pollutants emitted by vehicles are typically regulated by government agencies. In the last few years, the interest in PM emissions has grown substantially due to both air quality issues and global warming. Lidar-Dial techniques are widely recognized as a cost-effective alternative for monitoring large regions of the atmosphere. To maximize the effectiveness of the measurements and to guarantee reliable, automatic monitoring of large areas, new data analysis techniques are required. In this paper, an original tool, the Universal Multi-Event Locator (UMEL), is applied to the problem of automatically identifying the time location of peaks in Lidar measurements for the detection of particulate matter emitted by anthropogenic sources like vehicles. The method developed is based on Support Vector Regression and presents various advantages with respect to more traditional techniques. In particular, UMEL is based on the morphological properties of the signals, and therefore the method is insensitive to the details of the noise present in the detection system. The approach is also fully general, implemented purely in software, and can therefore be applied to a large variety of problems without any additional cost. The potential of the proposed technique is exemplified with the help of data acquired during an experimental campaign in the field in Rome.
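
    UMEL itself is not reproduced here, but its Support-Vector-Regression core can be approximated in a few lines: smooth the return with SVR, then locate peaks on the fitted curve. The signal and parameters below are synthetic placeholders, not the campaign's lidar data.

      # SVR smoothing followed by peak localization on a synthetic return.
      import numpy as np
      from sklearn.svm import SVR
      from scipy.signal import find_peaks

      t = np.linspace(0.0, 10.0, 400)
      signal = (np.exp(-(t - 3.0)**2 / 0.05)
                + 0.6 * np.exp(-(t - 7.0)**2 / 0.08)
                + 0.05 * np.random.default_rng(0).normal(size=t.size))

      svr = SVR(kernel="rbf", C=10.0, gamma=20.0, epsilon=0.01)
      fitted = svr.fit(t.reshape(-1, 1), signal).predict(t.reshape(-1, 1))

      peaks, _ = find_peaks(fitted, height=0.3)   # indices of backscatter events
      print(t[peaks])                             # ~[3.0, 7.0]

    Fitting a smooth regression first, rather than thresholding the raw trace, is what gives this approach its insensitivity to detector noise.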

  14. Data-driven Applications for the Sun-Earth System

    NASA Astrophysics Data System (ADS)

    Kondrashov, D. A.

    2016-12-01

    Advances in observational and data mining techniques allow extracting information from the large volume of Sun-Earth observational data that can be assimilated into first principles physical models. However, equations governing Sun-Earth phenomena are typically nonlinear, complex, and high-dimensional. The high computational demand of solving the full governing equations over a large range of scales precludes the use of a variety of useful assimilative tools that rely on applied mathematical and statistical techniques for quantifying uncertainty and predictability. Effective use of such tools requires the development of computationally efficient methods to facilitate fusion of data with models. This presentation will provide an overview of various existing as well as newly developed data-driven techniques adopted from atmospheric and oceanic sciences that proved to be useful for space physics applications, such as computationally efficient implementation of Kalman Filter in radiation belts modeling, solar wind gap-filling by Singular Spectrum Analysis, and low-rank procedure for assimilation of low-altitude ionospheric magnetic perturbations into the Lyon-Fedder-Mobarry (LFM) global magnetospheric model. Reduced-order non-Markovian inverse modeling and novel data-adaptive decompositions of Sun-Earth datasets will be also demonstrated.
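
    As one concrete example from this toolbox, the sketch below implements the reconstruction step of Singular Spectrum Analysis, the core of the gap-filling mentioned above (a real gap-filler iterates fill-reconstruct-refill until convergence). The series and window length are arbitrary demo choices.

      # SSA reconstruction: Hankel embedding, truncated SVD, diagonal averaging.
      import numpy as np

      def ssa_reconstruct(x, window, rank):
          n = len(x)
          k = n - window + 1
          # Trajectory (Hankel) matrix of lagged windows
          X = np.column_stack([x[i:i + window] for i in range(k)])
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # rank-k approximation
          # Diagonal averaging (Hankelization) back to a series
          out, cnt = np.zeros(n), np.zeros(n)
          for j in range(k):
              out[j:j + window] += Xr[:, j]
              cnt[j:j + window] += 1.0
          return out / cnt

      t = np.arange(200)
      x = (np.sin(2 * np.pi * t / 27.0)
           + 0.3 * np.random.default_rng(1).normal(size=200))
      print(ssa_reconstruct(x, window=40, rank=2)[:5])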

  15. Quantum Mechanics/Molecular Mechanics Modeling of Enzymatic Processes: Caveats and Breakthroughs.

    PubMed

    Quesne, Matthew G; Borowski, Tomasz; de Visser, Sam P

    2016-02-18

    Nature has developed large groups of enzymatic catalysts with the aim to transfer substrates into useful products, which enables biosystems to perform all their natural functions. As such, all biochemical processes in our body (we drink, we eat, we breathe, we sleep, etc.) are governed by enzymes. One of the problems associated with research on biocatalysts is that they react so fast that details of their reaction mechanisms cannot be obtained with experimental work. In recent years, major advances in computational hardware and software have been made and now large (bio)chemical systems can be studied using accurate computational techniques. One such technique is the quantum mechanics/molecular mechanics (QM/MM) technique, which has gained major momentum in recent years. Unfortunately, it is not a black-box method that is easily applied, but requires careful set-up procedures. In this work we give an overview on the technical difficulties and caveats of QM/MM and discuss work-protocols developed in our groups for running successful QM/MM calculations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Advanced Kalman Filter for Real-Time Responsiveness in Complex Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Welch, Gregory Francis; Zhang, Jinghe

    2014-06-10

    Complex engineering systems pose fundamental challenges in real-time operations and control because they are highly dynamic systems consisting of a large number of elements with severe nonlinearities and discontinuities. Today’s tools for real-time complex system operations are mostly based on steady state models, unable to capture the dynamic nature and too slow to prevent system failures. We developed advanced Kalman filtering techniques and the formulation of dynamic state estimation using Kalman filtering techniques to capture complex system dynamics in aiding real-time operations and control. In this work, we looked at complex system issues including severe nonlinearity of system equations, discontinuities caused by system controls and network switches, sparse measurements in space and time, and real-time requirements of power grid operations. We sought to bridge the disciplinary boundaries between Computer Science and Power Systems Engineering, by introducing methods that leverage both existing and new techniques. While our methods were developed in the context of electrical power systems, they should generalize to other large-scale scientific and engineering applications.

  17. Image Improvement Techniques

    NASA Astrophysics Data System (ADS)

    Shine, R. A.

    1997-05-01

    Over the last decade, a repertoire of techniques has been developed and/or refined to improve the quality of high spatial resolution solar movies taken from ground-based observatories. These include real-time image motion corrections, frame selection, phase diversity measurements of the wavefront, and extensive post-processing to partially remove atmospheric distortion. Their practical application has been made possible by the increasing availability and decreasing cost of large CCDs with fast digital readouts and of high-speed computer workstations with large memories. Most successful have been broad-band (0.3 to 10 nm) filtergram movies, which can use exposure times of 10 to 30 ms, short enough to ``freeze'' atmospheric motions. Even so, only a handful of movies with excellent image quality for more than an hour have been obtained to date. Narrowband filtergrams (about 0.01 nm), such as those required for constructing magnetograms and Dopplergrams, have been more challenging, although some single images approach the quality of the best continuum images. Some promising new techniques and instruments, together with persistence and good luck, should continue the progress made in the last several years.
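
    Frame selection is the simplest of these techniques to sketch: score every short-exposure frame with a sharpness metric and keep only the best few percent. The example below uses RMS granulation contrast as the metric, which is one common choice; the abstract does not name a specific metric, so treat this as an illustrative assumption.

```python
import numpy as np

def select_sharpest(frames, keep_fraction=0.1):
    """Rank short-exposure frames by RMS contrast and keep the best few.
    'frames' is an (n_frames, ny, nx) array of filtergrams."""
    means = frames.mean(axis=(1, 2))
    rms_contrast = frames.std(axis=(1, 2)) / means   # granulation contrast
    n_keep = max(1, int(keep_fraction * len(frames)))
    best = np.argsort(rms_contrast)[::-1][:n_keep]   # highest contrast first
    return frames[best], best
```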

  18. Design and test of a natural laminar flow/large Reynolds number airfoil with a high design cruise lift coefficient

    NASA Technical Reports Server (NTRS)

    Kolesar, C. E.

    1987-01-01

    Research activity on an airfoil designed for a large airplane capable of very long endurance times at a low Mach number of 0.22 is examined. Airplane mission objectives and design optimization resulted in requirements for a very high design lift coefficient and a large amount of laminar flow at high Reynolds number to increase the lift/drag ratio and reduce the loiter lift coefficient. Natural laminar flow was selected over distributed mechanical suction as the means of maintaining laminar flow. A design lift coefficient of 1.5 was identified as the highest which could be achieved with a large extent of laminar flow. A single-element airfoil was designed using an inverse boundary layer solution and inverse airfoil design computer codes to create an airfoil section that would achieve the performance goals. The design process and results, including airfoil shape, pressure distributions, and aerodynamic characteristics, are presented. A two-dimensional wind tunnel model was constructed and tested in the NASA Low Turbulence Pressure Tunnel, which enabled testing at the full-scale design Reynolds number. A comparison is made between theoretical and measured results to establish the accuracy and quality of the airfoil design technique.

  19. Watermarking-based protection of remote sensing images: requirements and possible solutions

    NASA Astrophysics Data System (ADS)

    Barni, Mauro; Bartolini, Franco; Cappellini, Vito; Magli, Enrico; Olmo, Gabriella

    2001-12-01

    Earth observation missions have recently attracted a growing interest from the scientific and industrial communities, mainly due to the large number of possible applications capable of exploiting remotely sensed data and images. Along with the increase in market potential, the need arises for protection of the image products from non-authorized use. Such a need is a crucial one, especially because the Internet and other public/private networks have become preferred means of data exchange. A crucial issue arising when dealing with digital image distribution is copyright protection. Such a problem has been largely addressed by resorting to watermarking technology. A question that obviously arises is whether the requirements imposed by remote sensing imagery are compatible with existing watermarking techniques. On the basis of these motivations, the contribution of this work is twofold: i) assessment of the requirements imposed by the characteristics of remotely sensed images on watermark-based copyright protection; ii) analysis of the state of the art, and performance evaluation of existing algorithms in terms of the requirements identified above.

  20. Asymmetric distances for binary embeddings.

    PubMed

    Gordo, Albert; Perronnin, Florent; Gong, Yunchao; Lazebnik, Svetlana

    2014-01-01

    In large-scale query-by-example retrieval, embedding image signatures in a binary space offers two benefits: data compression and search efficiency. While most embedding algorithms binarize both query and database signatures, it has been noted that this is not strictly a requirement. Indeed, asymmetric schemes that binarize the database signatures but not the query still enjoy the same two benefits but may provide superior accuracy. In this work, we propose two general asymmetric distances that are applicable to a wide variety of embedding techniques including locality sensitive hashing (LSH), locality sensitive binary codes (LSBC), spectral hashing (SH), PCA embedding (PCAE), PCAE with random rotations (PCAE-RR), and PCAE with iterative quantization (PCAE-ITQ). We experiment on four public benchmarks containing up to 1M images and show that the proposed asymmetric distances consistently lead to large improvements over the symmetric Hamming distance for all binary embedding techniques.
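
    A minimal illustration of the idea, using random-projection (LSH-style) binarization: the database is stored as binary codes, while the query keeps its real-valued projections, and each disagreeing bit is weighted by the query's projection magnitude. This weighting is one simple asymmetric scheme chosen for illustration; the paper's two proposed distances are more general, and all dimensions below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d, bits, n = 128, 64, 10000
W = rng.normal(size=(d, bits))            # LSH-style random projections

database = rng.normal(size=(n, d))
query = rng.normal(size=d)

db_codes = (database @ W > 0).astype(np.uint8)   # binarized database
q_proj = query @ W                               # query kept real-valued

# Symmetric baseline: binarize the query too and count disagreeing bits.
q_code = (q_proj > 0).astype(np.uint8)
hamming = (db_codes ^ q_code).sum(axis=1)

# Asymmetric distance: weight each disagreeing bit by the query's
# projection magnitude, so confident bits count more than marginal ones.
disagree = q_code[None, :] != db_codes
asym = (np.abs(q_proj)[None, :] * disagree).sum(axis=1)

# Ranking by 'asym' keeps the same storage and similar cost as Hamming
# search, but preserves the query-side magnitude information.
```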

  1. Segmentation of Mitochondria in Electron Microscopy Images Using Algebraic Curves.

    PubMed

    Seyedhosseini, Mojtaba; Ellisman, Mark H; Tasdizen, Tolga

    2013-01-01

    High-resolution microscopy techniques have been used to generate large volumes of data with enough details for understanding the complex structure of the nervous system. However, automatic techniques are required to segment cells and intracellular structures in these multi-terabyte datasets and make anatomical analysis possible on a large scale. We propose a fully automated method that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures such as mitochondria in electron microscopy (EM) images. The main idea is to use algebraic curves to extract shape features together with texture features from image patches. Then, these powerful features are used to learn a random forest classifier, which can predict mitochondria locations precisely. Finally, the algebraic curves together with regional information are used to segment the mitochondria at the predicted locations. We demonstrate that our method outperforms the state-of-the-art algorithms in segmentation of mitochondria in EM images.
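
    The classification stage of such a pipeline is straightforward to sketch with scikit-learn: extract a feature vector per image patch and train a random forest to predict mitochondria locations. The patch statistics below are simple stand-ins for the paper's algebraic-curve shape features and texture features, and all names are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def patch_features(image, centers, half=8):
    """Stand-in features: mean, std, and gradient energy of each
    (2*half+1)^2 patch centered at (row, col). Assumes centers lie at
    least 'half' pixels from the image border."""
    gy, gx = np.gradient(image.astype(float))
    feats = []
    for r, c in centers:
        sl = (slice(r - half, r + half + 1), slice(c - half, c + half + 1))
        p = image[sl]
        g = gx[sl] ** 2 + gy[sl] ** 2
        feats.append([p.mean(), p.std(), g.mean()])
    return np.array(feats)

# Hypothetical training data: features at labelled patch centers,
# y_train = 1 for mitochondrion, 0 for background.
# clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
# prob = clf.predict_proba(patch_features(test_image, test_centers))[:, 1]
```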

  2. EVA Design, Verification, and On-Orbit Operations Support Using Worksite Analysis

    NASA Technical Reports Server (NTRS)

    Hagale, Thomas J.; Price, Larry R.

    2000-01-01

    The International Space Station (ISS) is a very large and complex orbiting structure with thousands of Extravehicular Activity (EVA) worksites. These worksites are used to assemble and maintain the ISS. The challenge facing EVA designers was how to design, verify, and operationally support such a large number of worksites within cost and schedule. This has been solved through the practical use of computer-aided design (CAD) graphical techniques that have been developed and used with a high degree of success over the past decade. The EVA design process allows analysts to work concurrently with hardware designers so that EVA equipment can be incorporated and structures configured to allow for EVA access and manipulation. Compliance with EVA requirements is strictly enforced during the design process. These techniques and procedures, coupled with neutral buoyancy underwater testing, have proven most valuable in the development, verification, and on-orbit support of planned or contingency EVA worksites.

  3. High-Sensitivity X-ray Polarimetry with Amorphous Silicon Active-Matrix Pixel Proportional Counters

    NASA Technical Reports Server (NTRS)

    Black, J. K.; Deines-Jones, P.; Jahoda, K.; Ready, S. E.; Street, R. A.

    2003-01-01

    Photoelectric X-ray polarimeters based on pixel micropattern gas detectors (MPGDs) offer an order-of-magnitude improvement in sensitivity over more traditional techniques based on X-ray scattering. This new technique places some of the most interesting astronomical observations within reach of even a small, dedicated mission. The most sensitive instrument would be a photoelectric polarimeter at the focus of a very large mirror, such as the planned XEUS. Our efforts are focused on a smaller pathfinder mission, which would achieve its greatest sensitivity with large-area, low-background, collimated polarimeters. We have recently demonstrated an MPGD polarimeter using amorphous silicon thin-film transistor (TFT) readout suitable for the focal plane of an X-ray telescope. All the technologies used in the demonstration polarimeter are scalable to the areas required for a high-sensitivity collimated polarimeter. Keywords: X-ray polarimetry, particle tracking, proportional counter, GEM, pixel readout

  4. Vectorization with SIMD extensions speeds up reconstruction in electron tomography.

    PubMed

    Agulleiro, J I; Garzón, E M; García, I; Fernández, J J

    2010-06-01

    Electron tomography allows structural studies of cellular structures at molecular detail. Large 3D reconstructions are needed to meet the resolution requirements. The processing time to compute these large volumes may be considerable, and so high performance computing techniques have traditionally been used. This work presents a vector approach to tomographic reconstruction that relies on the exploitation of the SIMD extensions available in modern processors, in combination with other single-processor optimization techniques. This approach succeeds in producing full resolution tomograms with a substantial reduction in processing time, as evaluated with the most common reconstruction algorithms, namely WBP and SIRT. The main advantage stems from the fact that this approach runs on standard computers without the need for specialized hardware, which facilitates the development, use and management of programs. Future trends in processor design open excellent opportunities for vector processing with processors' SIMD extensions in the field of 3D electron microscopy.

  5. Lithium wall conditioning by high frequency pellet injection in RFX-mod

    NASA Astrophysics Data System (ADS)

    Innocente, P.; Mansfield, D. K.; Roquemore, A. L.; Agostini, M.; Barison, S.; Canton, A.; Carraro, L.; Cavazzana, R.; De Masi, G.; Fassina, A.; Fiameni, S.; Grando, L.; Rais, B.; Rossetto, F.; Scarin, P.

    2015-08-01

    In the RFX-mod reversed field pinch experiment, lithium wall conditioning has been tested with multiple aims: to improve density control, to reduce impurities, and to increase energy and particle confinement time. Large single lithium pellet injection, a lithium capillary-pore system, and lithium evaporation have been used for lithiumization. The last two methods, which presently provide the best results in tokamak devices, have limited applicability in the RFX-mod device due to the magnetic field characteristics and geometrical constraints. On the other hand, the first technique did not allow injection of large amounts of lithium. To improve the deposition, small lithium multi-pellet injection has recently been tested in RFX-mod. In this paper we compare lithium multi-pellet injection to the other techniques. Multi-pellet injection gave more uniform Li deposition than the evaporator, but provided similar effects on plasma parameters, showing that further optimization is required.

  6. Towards AI-powered personalization in MOOC learning

    NASA Astrophysics Data System (ADS)

    Yu, Han; Miao, Chunyan; Leung, Cyril; White, Timothy John

    2017-12-01

    Massive Open Online Courses (MOOCs) represent a form of large-scale learning that is changing the landscape of higher education. In this paper, we offer a perspective on how advances in artificial intelligence (AI) may enhance learning and research on MOOCs. We focus on emerging AI techniques including how knowledge representation tools can enable students to adjust the sequence of learning to fit their own needs; how optimization techniques can efficiently match community teaching assistants to MOOC mediation tasks to offer personal attention to learners; and how virtual learning companions with human traits such as curiosity and emotions can enhance learning experience on a large scale. These new capabilities will also bring opportunities for educational researchers to analyse students' learning skills and uncover points along learning paths where students with different backgrounds may require different help. Ethical considerations related to the application of AI in MOOC education research are also discussed.

  7. An Instrument to Measure Elemental Energy Spectra of Cosmic Ray Nuclei Up to 10^16 eV

    NASA Technical Reports Server (NTRS)

    Adams, J.; Bashindzhagyan, G.; Chilingarian, A.; Drury, L.; Egorov, N.; Golubkov,S.; Korotkova, N.; Panasyuk, M.; Podorozhnyi, D.; Procqureur, J.

    2000-01-01

    A longstanding goal of cosmic ray research is to measure the elemental energy spectra of cosmic rays up to and through the "knee" (approximately 3 x 10^15 eV). It is not currently feasible to achieve this goal with an ionization calorimeter because the mass required to be deployed in Earth orbit is very large (at least 50 tonnes). An alternative method will be presented. This is based on measuring the primary particle energy by determining the angular distribution of secondaries produced in a target layer using silicon microstrip detector technology. The proposed technique can be used over a wide range of energies (10^11 to 10^16 eV) and gives an energy resolution of 60% or better. Based on this technique, a design for a new lightweight instrument with a large aperture (KLEM) will be described.

  8. Reducing computational costs in large scale 3D EIT by using a sparse Jacobian matrix with block-wise CGLS reconstruction.

    PubMed

    Yang, C L; Wei, H Y; Adler, A; Soleimani, M

    2013-06-01

    Electrical impedance tomography (EIT) is a fast and cost-effective technique that provides a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time- and memory-efficient method for solving a large scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurement data can produce a large Jacobian matrix, which causes difficulties in both computer storage and the inversion process. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while retaining the image quality. Firstly, a sparse matrix reduction technique is proposed, using thresholding to set very small values of the Jacobian matrix to zero. By converting the Jacobian matrix into a sparse format, the zero elements are eliminated, which saves memory. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian with block-wise CG enables the large scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction on the reconstruction results.
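
    The two ingredients are easy to sketch: threshold the Jacobian into sparse (CSR) storage, then run conjugate gradient on the normal equations (CGLS) against the measurement residual. The sketch below is serial NumPy/SciPy for clarity; the paper's contribution is the block-wise parallel version, and the tolerance and iteration count here are illustrative.

```python
import numpy as np
from scipy.sparse import csr_matrix

def sparsify(J, rel_tol=1e-3):
    """Zero out Jacobian entries below rel_tol * max|J| and store as CSR."""
    Jc = J.copy()
    Jc[np.abs(Jc) < rel_tol * np.abs(Jc).max()] = 0.0
    return csr_matrix(Jc)

def cgls(A, b, n_iter=50):
    """Conjugate gradient on the normal equations: min ||A x - b||."""
    x = np.zeros(A.shape[1])
    r = b.copy()              # residual in measurement space
    s = A.T @ r               # gradient in image space
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Hypothetical usage: J is the dense Jacobian, dv the voltage residual.
# delta_sigma = cgls(sparsify(J), dv)
```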

  9. Knee arthrodesis with lengthening: experience of using Ilizarov techniques to salvage large asymmetric defects following infected peri-articular fractures.

    PubMed

    Barwick, Thomas W; Montgomery, Richard J

    2013-08-01

    We present four patients with large bone defects due to infected internal fixation of knee condylar fractures. All were treated by debridement of bone and soft tissue and stabilisation, with flap closure if required, followed by bone transport arthrodesis of the knee with simultaneous lengthening. Four patients (three male and one female), mean age 46.5 years (37-57 years), with posttraumatic osteomyelitis at the knee (three proximal tibia and one distal femur) were treated by debridement of infected tissue and removal of internal fixation. Substantial condylar bone defects resulted on the affected side of the knee joint (6-10 cm), with loss of the extensor mechanism in all tibial cases. Two patients required muscle flaps after debridement. All patients received intravenous antibiotics for at least 6 weeks. Bone transport with a circular frame was used to achieve an arthrodesis whilst simultaneously restoring a functional limb length. In three cases a 'peg in socket' docking technique was fashioned to assist stability and subsequent consolidation of the arthrodesis. Arthrodesis of the knee, free of recurrent infection, was successfully achieved in all cases. None has since required further surgery. Debridement to union took an average of 25 months (19-31 months). The median number of interventions undertaken was 9 (8-12). Two patients developed deep vein thrombosis (DVT), one complicated by pulmonary embolism, which delayed treatment. Two required surgical correction of pre-existent equinus contracture using frames. The median limb length discrepancy (LLD) at the end of treatment was 3 cm (3-4 cm). None has required subsequent amputation. Bone loss and infection both reduce the success rate of any arthrodesis. However, by optimising the host environment with eradication of infection by radical debridement, soft-tissue flaps when necessary, and bone transport techniques to close the defect, one can achieve arthrodesis and salvage a useful limb. The residual LLD resulted from not accounting for later impaction at the peg and socket sites, which increased the LLD beyond the desired amount. We therefore recommend continuing the lengthening for an additional 1-2 cm to allow for this. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Molecular andrology as related to sperm DNA fragmentation/sperm chromatin biotechnology.

    PubMed

    Shafik, A; Shafik, A A; Shafik, I; El Sibai, O

    2006-01-01

    Genetic male infertility occurs throughout the life cycle, from genetic traits carried by the sperm, to fertilization and post-fertilization genome alterations and subsequent developmental changes in the blastocyst and fetus, as well as errors in meiosis and abnormalities in spermatogenesis. Genes encoding proteins for normal development include SRY, SOX9, INSL3 and LGR8. Genetic abnormalities affect spermatogenesis, whereas polymorphisms affect receptor affinity and hormone bioactivity. Transgenic animal models, the human genome project, and other techniques have identified numerous genes related to male fertility. Several techniques have been developed to measure the amount of sperm DNA damage in an effort to identify more objective parameters for the evaluation of infertile men. The integrity of sperm DNA influences a couple's fertility and helps predict the chances of pregnancy and its successful outcome. The available tests of sperm DNA damage require additional large-scale clinical trials before their integration into routine clinical practice. The physiological/molecular integrity of sperm DNA is a novel parameter of semen quality and a potential fertility predictor. Although DNA integrity assessment appears to be a logical biomarker of sperm quality, it is not being assessed as a routine part of semen analysis by clinical andrologists. Extensive investigation has been conducted for the comparative evaluation of these techniques. However, some of these techniques require expensive instrumentation for optimal and unbiased analysis, are labor intensive, or require the use of enzymes whose activity and accessibility to DNA breaks may be irregular. Thus, these techniques are recommended for basic research rather than for routine andrology laboratories.

  11. Evaluation of normal findings using a detailed and focused technique for transcutaneous abdominal ultrasonography in the horse

    PubMed Central

    2014-01-01

    Background Ultrasonography is an important diagnostic tool in the investigation of abdominal disease in the horse. Several factors may affect the ability to image different structures within the abdomen. The aim of the study was to describe the repeatability of identification of abdominal structures in normal horses using a detailed ultrasonographic examination technique and using a focused, limited preparation technique. Methods A detailed abdominal ultrasound examination was performed in five normal horses, repeated on five occasions (total of 25 examinations). The abdomen was divided into ten different imaging sites, and structures identified in each site were recorded. Five imaging sites were then selected for a single focused ultrasound examination in 20 normal horses. Limited patient preparation was performed. Structures were recorded as ‘identified’ if ultrasonographic features could be distinguished. The location of organs and their frequency of identification were recorded. Data from both phases were analysed to determine repeatability of identification of structures in each examination (irrespective of imaging site), and for each imaging site. Results Caecum, colon, spleen, liver and right kidney were repeatably identified using the detailed technique, and had defined locations. Large colon and right kidney were identified in 100% of examinations with both techniques. Liver, spleen, caecum, duodenum and other small intestine were identified more frequently with the detailed examination. Small intestine was most frequently identified in the ventral abdomen, its identification varied markedly within and between horses, and required repeated examinations in some horses. Left kidney could not be identified in every horse using either technique. Sacculated colon was identified in all ventral sites, and was infrequently identified in dorsal sites. Conclusions Caecum, sacculated large intestine, spleen, liver and right kidney were consistently identified with both techniques. There were some normal variations which should be considered when interpreting ultrasonographic findings in clinical cases: left kidney was not always identified, sacculated colon was occasionally identified in dorsal flank sites. Multiple imaging sites and repeated examinations may be required to identify small intestine. A focused examination identified most key structures, but has some limitations compared to a detailed examination. PMID:25238559

  12. Arbuscular mycorrhizal fungi spore propagation using single spore as starter inoculum and a plant host.

    PubMed

    Selvakumar, G; Shagol, C C; Kang, Y; Chung, B N; Han, S G; Sa, T M

    2018-06-01

    The propagation of pure cultures of arbuscular mycorrhizal fungi (AMF) is an essential requirement for their large-scale agricultural application and commercialization as biofertilizers. The present study aimed to propagate AMF using the single-spore inoculation technique and compare their propagation ability with that of known reference spores. Arbuscular mycorrhizal fungal spores were collected from salt-affected Saemangeum reclaimed soil in South Korea. The technique involved inoculation of sorghum-sudangrass (Sorghum bicolor L.) seedlings with single, healthy spores on filter paper, followed by the transfer of successfully colonized seedlings to 1-kg capacity pots containing sterilized soil. After the first plant cycle, the contents were transferred to 2.5-kg capacity pots containing sterilized soil. Among the 150 inoculated seedlings, only 27 were colonized by AMF spores. After 240 days, five of the 27 inoculants had produced over 500 spores. The 18S rDNA sequencing of spores revealed that the spores produced through the single-spore inoculation method belonged to Gigaspora margarita, Claroideoglomus lamellosum and Funneliformis mosseae. Furthermore, the indigenous spore F. mosseae M-1 produced a higher spore count than the reference spores. The AMF spores produced using the single-spore inoculation technique may serve as potential bio-inoculants, with the advantage of being more readily adopted by farmers because no specialized skill is required for spore propagation. The results of the current study describe a feasible and cost-effective method to mass produce AMF spores for large-scale application. The AMF spores obtained from this method can effectively colonize plant roots and may be easily introduced to a new environment. © 2018 The Society for Applied Microbiology.

  13. GPU accelerated fuzzy connected image segmentation by using CUDA.

    PubMed

    Zhuge, Ying; Cao, Yong; Miller, Robert W

    2009-01-01

    Image segmentation techniques using fuzzy connectedness principles have shown their effectiveness in segmenting a variety of objects in several large applications in recent years. However, one problem of these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides high parallel computing power. In this paper, we present a parallel fuzzy connected image segmentation algorithm on Nvidia's Compute Unified Device Architecture (CUDA) platform for segmenting large medical image datasets. Our experiments based on three datasets of small, medium, and large size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 7.2x, 7.3x, and 14.4x, respectively, for the three datasets over the sequential CPU implementation of the fuzzy connected image segmentation algorithm.

  14. Fabrication

    NASA Technical Reports Server (NTRS)

    Angel, Roger; Helms, Richard; Bilbro, Jim; Brown, Norman; Eng, Sverre; Hinman, Steve; Hull-Allen, Greg; Jacobs, Stephen; Keim, Robert; Ulmer, Melville

    1992-01-01

    What aspects of optical fabrication technology need to be developed so as to facilitate existing planned missions, or enable new ones? Throughout the submillimeter to UV wavelengths, the common goal is to push technology to the limits to make the largest possible apertures that are diffraction limited. At any one wavelength, the accuracy of the surface must be better than lambda/30 (rms error). The wavelength range is huge, covering four orders of magnitude from 1 mm to 100 nm. At the longer wavelengths, diffraction limited surfaces can be shaped with relatively crude techniques. The challenge in their fabrication is to make as large as possible a reflector, given the weight and volume constraints of the launch vehicle. The limited cargo diameter of the shuttle has led in the past to emphasis on deployable or erectable concepts such as the Large Deployable Reflector (LDR), which was studied by NASA for a submillimeter astrophysics mission. Replication techniques that can be used to produce light, low-cost reflecting panels are of great interest for this class of mission. At shorter wavelengths, in the optical and ultraviolet, optical fabrication will tax to the limit the most refined polishing methods. Methods of mechanical and thermal stabilization of the substrate will be severely stressed. In the thermal infrared, the need for large aperture is tempered by the even stronger need to control the telescope's thermal emission by cooled or cryogenic operation. Thus, the SIRTF mirror at 1 meter is not large and does not require unusually high accuracy, but the fabrication process must produce a mirror that is the right shape at a temperature of 4 K. Future large cooled mirrors will present more severe problems, especially if they must also be accurate enough to work at optical wavelengths. At the very shortest wavelengths accessible to reflecting optics, in the x-ray domain, the very low count fluxes of high energy photons place a premium on the collecting area. It is not necessary to reach or even approach the diffraction limit, which would demand subnanometer fabrication and figure control. Replication techniques that produce large very lightweight surfaces are of interest for x-ray optics just as they are for the submillimeter region. Optical fabrication requirements are examined in more detail for missions in each of the three spectral regions of interest in astrophysics.

  15. Evaluation of LANDSAT multispectral scanner images for mapping altered rocks in the east Tintic Mountains, Utah

    NASA Technical Reports Server (NTRS)

    Rowan, L. C.; Abrams, M. J. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. Positive findings of earlier evaluations of the color-ratio compositing (CRC) technique for mapping limonitic altered rocks in south-central Nevada are confirmed, but important limitations in the approach used are pointed out. These limitations arise from environmental, geologic, and image processing factors. The greater vegetation density in the East Tintic Mountains required several modifications in procedures to improve the overall mapping accuracy of the CRC approach. Large format ratio images provide better internal registration of the diazo films and avoid the problems associated with the magnifications required in the original procedure. Use of the Linoscan 204 color recognition scanner permits accurate, consistent extraction of the green pixels representing limonitic bedrock, yielding maps that can be used for mapping at large scales as well as for small-scale reconnaissance.

  16. Simulation analysis of a novel high efficiency silicon solar cell

    NASA Technical Reports Server (NTRS)

    Mokashi, Anant R.; Daud, T.; Kachare, A. H.

    1985-01-01

    It is recognized that a crystalline silicon photovoltaic module efficiency of 15 percent or more is required for cost-effective photovoltaic energy utilization. This level of module efficiency requires large-area encapsulated production cell efficiencies in the range of 18 to 20 percent. Though the theoretical maximum silicon solar cell efficiency for an idealized case is estimated to be around 30 percent, the practical performance of cells to date is considerably below this limit. This is understood to be largely a consequence of minority carrier losses in the bulk as well as at all surfaces, including those under the metal contacts. In this paper a novel device design with special features to reduce bulk and surface recombination losses is evaluated using numerical analysis techniques. Details of the numerical model, cell design, and analysis results are presented.

  17. Linear Approximation SAR Azimuth Processing Study

    NASA Technical Reports Server (NTRS)

    Lindquist, R. B.; Masnaghetti, R. K.; Belland, E.; Hance, H. V.; Weis, W. G.

    1979-01-01

    A segmented linear approximation of the quadratic phase function that is used to focus the synthetic antenna of a SAR was studied. Ideal focusing, using a quadratically varying phase focusing function during the time radar target histories are gathered, requires a large number of complex multiplications. These can be largely eliminated by using linear approximation techniques. The result is a reduced processor size and chip count relative to ideally focused processing, and a correspondingly increased feasibility for spaceworthy implementation. A preliminary design and sizing for a spaceworthy linear approximation SAR azimuth processor meeting requirements similar to those of the SEASAT-A SAR was developed. The study resulted in a design with approximately 1500 ICs, 1.2 cubic feet of volume, and 350 watts of power for a single-look, 4000 range cell azimuth processor with 25 meters resolution.
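
    The trade at the heart of the study can be reproduced numerically: replace the quadratic focusing phase with chords over a handful of segments and inspect the peak phase error. Within each segment the phase then advances by a constant increment per sample, which replaces per-sample complex multiplies with adds. The sketch below uses an arbitrary phase rate and segment count purely for illustration.

```python
import numpy as np

n, segments = 1024, 8
t = np.linspace(-0.5, 0.5, n)
k = 2000.0                          # quadratic phase rate (illustrative)
quad_phase = np.pi * k * t ** 2     # ideal azimuth focusing phase

# Piecewise-linear approximation: chord of the quadratic on each segment.
approx = np.empty(n)
for seg in np.array_split(np.arange(n), segments):
    lo, hi = seg[0], seg[-1]
    slope = (quad_phase[hi] - quad_phase[lo]) / (t[hi] - t[lo])
    approx[seg] = quad_phase[lo] + slope * (t[seg] - t[lo])

max_err = np.abs(quad_phase - approx).max()   # radians of phase error
print(f"peak phase error with {segments} segments: {max_err:.3f} rad")
```

    Increasing the segment count drives the peak error down quadratically, so a modest number of segments already keeps the focusing loss small.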

  18. Rapidly converging multigrid reconstruction of cone-beam tomographic data

    NASA Astrophysics Data System (ADS)

    Myers, Glenn R.; Kingston, Andrew M.; Latham, Shane J.; Recur, Benoit; Li, Thomas; Turner, Michael L.; Beeching, Levi; Sheppard, Adrian P.

    2016-10-01

    In the context of large-angle cone-beam tomography (CBCT), we present a practical iterative reconstruction (IR) scheme designed for rapid convergence as required for large datasets. The robustness of the reconstruction is provided by the "space-filling" source trajectory along which the experimental data is collected. The speed of convergence is achieved by leveraging the highly isotropic nature of this trajectory to design an approximate deconvolution filter that serves as a pre-conditioner in a multi-grid scheme. We demonstrate this IR scheme for CBCT and compare convergence to that of more traditional techniques.

  19. Production of Bacteriophages by Listeria Cells Entrapped in Organic Polymers.

    PubMed

    Roy, Brigitte; Philippe, Cécile; Loessner, Martin J; Goulet, Jacques; Moineau, Sylvain

    2018-06-13

    Applications for bacteriophages as antimicrobial agents are increasing. The industrial use of these bacterial viruses requires the production of large amounts of suitable strictly lytic phages, particularly for food and agricultural applications. This work describes a new approach for phage production. Phages H387 (Siphoviridae) and A511 (Myoviridae) were propagated separately using Listeria ivanovii host cells immobilised in alginate beads. The same batch of alginate beads could be used for four successive and efficient phage productions. This technique enables the production of large volumes of high-titer phage lysates in continuous or semi-continuous (fed-batch) cultures.

  1. The Electrophysiological Biosensor for Batch-Measurement of Cell Signals

    NASA Astrophysics Data System (ADS)

    Suzuki, Kengo; Tanabe, Masato; Ezaki, Takahiro; Konishi, Satoshi; Oka, Hiroaki; Ozaki, Nobuhiko

    This paper presents the development of an electrophysiological biosensor. The developed sensor allows batch measurement by detecting signals from a large number of cells together. It employs the same measurement principle as the patch-clamp technique: a single cell is sucked and clamped in a micro hole with a detecting electrode. The detecting electrodes in the arrayed micro holes are connected together for batch measurement of signals from a large number of cells. Furthermore, an array of such sensors is designed to improve measurement throughput to satisfy the requirements of drug screening applications.

  2. Method for large and rapid terahertz imaging

    DOEpatents

    Williams, Gwyn P.; Neil, George R.

    2013-01-29

    A method of large-scale active THz imaging using a combination of a compact high power THz source (>1 watt), an optional optical system, and a camera for the detection of reflected or transmitted THz radiation, without the need for the burdensome power source or detector cooling systems required by similar prior-art devices. With such a system, one is able to image, for example, a whole person in seconds or less, whereas at present, using low power sources and scanning techniques, it takes several minutes or even hours to image even a 1 cm x 1 cm area of skin.

  3. Application of symbolic/numeric matrix solution techniques to the NASTRAN program

    NASA Technical Reports Server (NTRS)

    Buturla, E. M.; Burroughs, S. H.

    1977-01-01

    The matrix-solving algorithm of any finite element program is extremely important, since solution of the matrix equations requires a large amount of elapsed time due to null calculations and excessive input/output operations. An alternate method of solving the matrix equations is presented. A symbolic processing step followed by numeric solution yields the solution very rapidly and is especially useful for nonlinear problems.

  4. Study of thermal management for space platform applications: Unmanned modular thermal management and radiator technologies

    NASA Technical Reports Server (NTRS)

    Oren, J. A.

    1981-01-01

    Candidate techniques for thermal management of unmanned modules docked to a large 250 kW platform were evaluated. Both automatically deployed and space constructed radiator systems were studied to identify characteristics and potential problems. Radiator coating requirements and current state-of-the-art were identified. An assessment of the technology needs was made and advancements were recommended.

  5. Simplified Rotation In Acoustic Levitation

    NASA Technical Reports Server (NTRS)

    Barmatz, M. B.; Gaspar, M. S.; Trinh, E. H.

    1989-01-01

    New technique based on old discovery used to control orientation of object levitated acoustically in axisymmetric chamber. Method does not require expensive equipment like additional acoustic drivers of precisely adjustable amplitude, phase, and frequency. Reflecting object acts as second source of sound. If reflecting object large enough, close enough to levitated object, or focuses reflected sound sufficiently, Rayleigh torque exerted on levitated object by reflected sound controls orientation of object.

  6. Natural laminar flow airfoil analysis and trade studies

    NASA Technical Reports Server (NTRS)

    1979-01-01

    An analysis of an airfoil for a large commercial transport cruising at Mach 0.8 and the use of advanced computer techniques to perform the analysis are described. Incorporation of the airfoil into a natural laminar flow transport configuration is addressed and a comparison of fuel requirements and operating costs between the natural laminar flow transport and an equivalent turbulent flow transport is addressed.

  7. Efficient and Robust Data Collection Using Compact Micro Hardware, Distributed Bus Architectures and Optimizing Software

    NASA Technical Reports Server (NTRS)

    Chau, Savio; Vatan, Farrokh; Randolph, Vincent; Baroth, Edmund C.

    2006-01-01

    Future in-space propulsion systems for exploration programs will invariably require data collection from a large number of sensors. Consider the sensors needed for monitoring the state of health of several vehicle systems, including the collection of structural health data, over a large area. This would include the fuel tanks, habitat structure, and science containment of systems required for Lunar, Mars, or deep space exploration. Such a system would consist of several hundred or even thousands of sensors. Conventional avionics system design would require these sensors to be connected to a few Remote Health Units (RHUs), which are connected to robust micro flight computers through a serial bus. This results in a large mass of cabling and unacceptable weight. This paper first gives a survey of several techniques that may reduce the cabling mass for sensors. These techniques can be categorized into four classes: power line communication, serial sensor buses, compound serial buses, and wireless networks. The power line communication approach uses the power line to carry both power and data, so that the conventional data lines can be eliminated. The serial sensor bus approach eliminates most of the cabling by connecting all the sensors with a single (or redundant) serial bus. Many standard industrial control and sensor buses can support several hundred nodes but have not been space qualified. Conventional avionics serial buses such as the Mil-Std-1553B bus and IEEE 1394a are space qualified but can support only a limited number of nodes. The third approach is to combine avionics buses to increase their addressability. For wireless networks, reliability, EMI/EMC, and flight qualification issues have to be addressed; several wireless networks such as IEEE 802.11 and Ultra Wide Band are surveyed in this paper. The placement of sensors can also affect cable mass: excessive sensors increase the number of cables unnecessarily, while an insufficient number of sensors may not provide adequate coverage of the system. This paper also discusses an optimal technique to place and validate sensors.

  8. Lightweight high-performance 1-4 meter class spaceborne mirrors: emerging technology for demanding spaceborne requirements

    NASA Astrophysics Data System (ADS)

    Hull, Tony; Hartmann, Peter; Clarkson, Andrew R.; Barentine, John M.; Jedamzik, Ralf; Westerhoff, Thomas

    2010-07-01

    Pending critical spaceborne applications, including coronagraphic detection of exoplanets, require exceptionally smooth mirror surfaces, aggressive lightweighting, and low-risk, cost-effective optical manufacturing methods. Simultaneous developments at Schott for the production of aggressively lightweighted (>90%) Zerodur® mirror blanks, and at L-3 Brashear for producing ultra-smooth surfaces on Zerodur®, will be described. New L-3 techniques for large-mirror optical fabrication include Computer Controlled Optical Surfacing (CCOS), pioneered at L-3 Tinsley, and the world's largest MRF machine, in place at L-3 Brashear. We propose that exceptional mirrors for the most critical spaceborne applications can now be produced with the technologies described.

  9. Link Correlation Based Transmit Sector Antenna Selection for Alamouti Coded OFDM

    NASA Astrophysics Data System (ADS)

    Ahn, Chang-Jun

    In MIMO systems, the deployment of a multiple antenna technique can enhance the system performance. However, since the cost of RF transmitters is much higher than that of antennas, there is growing interest in techniques that use a larger number of antennas than the number of RF transmitters. These methods rely on selecting the optimal transmitter antennas and connecting them to the respective RF transmitters. In this case, feedback information (FBI) is required to select the optimal transmitter antenna elements. Since FBI is control overhead, the rate of the feedback is limited. This motivates the study of limited feedback techniques, where only partial or quantized information from the receiver is conveyed back to the transmitter. However, in MIMO/OFDM systems, it is difficult to develop an effective FBI quantization method for choosing the space-time, space-frequency, or space-time-frequency processing due to the numerous subchannels. Moreover, MIMO/OFDM systems require antenna separation of 5-10 wavelengths to keep the correlation coefficient below 0.7 to achieve a diversity gain. In this case, the base station requires a large space to set up multiple antennas. To reduce these problems, in this paper we propose link correlation based transmit sector antenna selection for Alamouti coded OFDM without FBI.

  10. The evolution of modern agriculture and its future with biotechnology.

    PubMed

    Harlander, Susan K

    2002-06-01

    Since the dawn of agriculture, humans have been manipulating crops to enhance their quality and yield. Via conventional breeding, seed producers have developed the modern corn hybrids and wheat commonly grown today. Newer techniques, such as radiation breeding, enhanced seed producers' ability to develop new traits in crops. Then, in the 1980s and 1990s, scientists began applying genetic engineering techniques to improve crop quality and yield. In contrast to earlier breeding methods, these techniques raised questions about their safety to consumers and the environment. This paper provides an overview of the kinds of genetically modified crops developed and marketed to date and the value they provide to farmers and consumers. The safety assessment process required for these crops is contrasted with the lack of a formal process required for traditionally bred crops. While European consumers have expressed concern about foods and animal feeds containing ingredients from genetically modified crops, Americans have largely been unconcerned about or unaware of the presence of genetically modified foods on the market. This difference in attitude is reflected in Europe's decision to label foods containing genetically modified ingredients, while no such labeling is required in the U.S. In the future, genetic modification will produce a variety of new products with enhanced nutritional or quality attributes.

  11. Thermal Imaging of Convecting Opaque Fluids using Ultrasound

    NASA Technical Reports Server (NTRS)

    Xu, Hongzhou; Fife, Sean; Andereck, C. David

    2002-01-01

    An ultrasound technique has been developed to non-intrusively image temperature fields in small-scale systems of opaque fluids undergoing convection. Fluids such as molten metals, semiconductors, and polymers are central to many industrial processes, and are often found in situations where natural convection occurs, or where thermal gradients are otherwise important. However, typical thermal and velocimetric diagnostic techniques rely upon transparency of the fluid and container, or require the addition of seed particles, or require mounting probes inside the fluid, all of which either fail altogether in opaque fluids, or necessitate significant invasion of the flow and/or modification of the walls of the container to allow access to the fluid. The idea behind our work is to use the temperature dependence of sound velocity, and the ease of propagation of ultrasound through fluids and solids, to probe the thermal fields of convecting opaque fluids non-intrusively and without the use of seed particles. The technique involves the timing of the return echoes from ultrasound pulses, a variation on an approach used previously in large-scale systems.
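
    The measurement principle reduces to a simple relation: the pulse-echo time of flight over a known path gives the mean sound speed, and a calibrated sound-speed-versus-temperature law inverts that to a path-averaged temperature. The sketch below assumes a locally linear law c(T) = c0 + a(T - T0); the coefficients roughly describe water near room temperature and are illustrative only, since the actual calibration depends on the fluid.

```python
def temperature_from_echo(t_echo, path_length, c0=1480.0, a=2.4, T0=20.0):
    """Path-averaged temperature (deg C) from a pulse-echo time of flight.
    Assumes a linear sound-speed law c(T) = c0 + a * (T - T0); the default
    coefficients approximate water and are illustrative, not a calibration
    for any particular opaque melt."""
    c = 2.0 * path_length / t_echo      # round trip: out and back
    return T0 + (c - c0) / a

# Example: 10 cm cell, echo returns after 134.8 microseconds.
print(temperature_from_echo(134.8e-6, 0.10))   # ~21.5 deg C
```

    Scanning the transducer position or using multiple paths then turns this point relation into a thermal image of the convecting fluid.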

  12. Water reuse systems: A review of the principal components

    USGS Publications Warehouse

    Lucchetti, G.; Gray, G.A.

    1988-01-01

    Principal components of water reuse systems include ammonia removal, disease control, temperature control, aeration, and particulate filtration. Effective ammonia removal techniques include air stripping, ion exchange, and biofiltration. Selection of a particular technique largely depends on site-specific requirements (e.g., space, existing water quality, and fish densities). Disease control, although often overlooked, is a major problem in reuse systems. Pathogens can be controlled most effectively with ultraviolet radiation, ozone, or chlorine. Simple and inexpensive methods are available to increase oxygen concentration and eliminate gas supersaturation; these include commercial aerators, air injectors, and packed columns. Temperature control is a major advantage of reuse systems, but the equipment required can be expensive, particularly if water temperature must be rigidly controlled and ambient air temperature fluctuates. Filtration can be readily accomplished with a hydrocyclone or sand filter that increases overall system efficiency. Based on criteria of adaptability, efficiency, and reasonable cost, we recommend components for a small water reuse system.

  13. System analysis in rotorcraft design: The past decade

    NASA Technical Reports Server (NTRS)

    Galloway, Thomas L.

    1988-01-01

    Rapid advances in the technology of electronic digital computers and the need for an integrated synthesis approach in developing future rotorcraft programs have led to increased emphasis on system analysis techniques in rotorcraft design. The task in systems analysis is to deal with complex, interdependent, and conflicting requirements in a structured manner so that rational and objective decisions can be made. Whether the results are wisdom or rubbish depends upon the validity and, sometimes more importantly, the consistency of the inputs, the correctness of the analysis, and a sensible choice of measures of effectiveness from which to draw conclusions. In rotorcraft design this means combining design requirements, technology assessment, sensitivity analysis, and review techniques currently in use by NASA and Army organizations in developing research programs and vehicle specifications for rotorcraft. These procedures span simple graphical approaches to comprehensive analysis on large mainframe computers. Examples of recent applications to military and civil missions are highlighted.

  14. Relative Attitude Determination of Earth Orbiting Formations Using GPS Receivers

    NASA Technical Reports Server (NTRS)

    Lightsey, E. Glenn

    2004-01-01

    Satellite formation missions require the precise determination of both the position and attitude of multiple vehicles to achieve the desired objectives. In order to support the mission requirements for these applications, it is necessary to develop techniques for representing and controlling the attitude of formations of vehicles. A generalized method for representing the attitude of a formation of vehicles has been developed. The representation may be applied to both absolute and relative formation attitude control problems. The technique is able to accommodate formations with an arbitrarily large number of vehicles. To demonstrate the formation attitude problem, the method is applied to the attitude determination of a simple leader-follower along-track orbit formation. A multiplicative extended Kalman filter is employed to estimate vehicle attitude. In a simulation study using GPS receivers as the attitude sensors, the relative attitude between vehicles in the formation is determined 3 times more accurately than the absolute attitude.
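
    The relative-attitude bookkeeping itself is compact: given each vehicle's inertial-to-body attitude quaternion, the leader-to-follower rotation is the follower quaternion composed with the conjugate of the leader quaternion. The sketch below uses a scalar-last Hamilton convention; it illustrates the kinematic relation only, not the paper's multiplicative EKF.

```python
import numpy as np

def quat_mult(q, p):
    """Hamilton product, scalar-last convention [x, y, z, w]."""
    qv, qw = q[:3], q[3]
    pv, pw = p[:3], p[3]
    v = qw * pv + pw * qv + np.cross(qv, pv)
    w = qw * pw - qv @ pv
    return np.concatenate([v, [w]])

def quat_conj(q):
    """Conjugate = inverse for a unit quaternion."""
    return np.concatenate([-q[:3], q[3:]])

def relative_attitude(q_leader, q_follower):
    """Rotation taking the leader body frame to the follower body frame,
    given each vehicle's inertial-to-body attitude quaternion."""
    return quat_mult(q_follower, quat_conj(q_leader))
```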

  15. Proposed techniques for launching instrumented balloons into tornadoes

    NASA Technical Reports Server (NTRS)

    Grant, F. C.

    1971-01-01

    A method is proposed to introduce instrumented balloons into tornadoes by means of the radial pressure gradient, which supplies a buoyancy force driving to the center. Presented are analytical expressions, verified by computer calculations, which show the possibility of introducing instrumented balloons into tornadoes at or below the cloud base. The times required to reach the center are small enough that a large fraction of tornadoes are suitable for the technique. An experimental procedure is outlined in which a research airplane puts an instrumented, self-inflating balloon on the track ahead of the tornado. The uninflated balloon waits until the tornado closes to, typically, 750 meters; then it quickly inflates and spirals up and into the core, taking roughly 3 minutes. Since the drive to the center is automatically produced by the radial pressure gradient, a proper launch radius is the only guidance requirement.

  16. Propellant management for low thrust chemical propulsion systems

    NASA Technical Reports Server (NTRS)

    Hamlyn, K. M.; Dergance, R. H.; Aydelott, J. C.

    1981-01-01

    Low-thrust chemical propulsion systems (LTPS) will be required for orbital transfer of large space systems (LSS). The work reported in this paper was conducted to determine the propellant requirements, preferred propellant management technique, and propulsion system sizes for the LTPS. Propellants were liquid oxygen (LO2) combined with liquid hydrogen (LH2), liquid methane or kerosene. Thrust levels of 100, 500, and 1000 lbf were combined with 1, 4, and 8 perigee burns for transfer from low earth orbit to geosynchronous earth orbit. This matrix of systems was evaluated with a multilayer insulation (MLI) or a spray-on-foam insulation. Vehicle sizing results indicate that a toroidal tank configuration is needed for the LO2/LH2 system. Multiple perigee burns and MLI allow far superior LSS payload capability. Propellant settling, combined with a single screen device, was found to be the lightest and least complex propellant management technique.

  17. Near-field measurement facility plans at Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Sharp, R. G.

    1983-01-01

    The direction of future antenna technology will be toward antennas which are large, both physically and electrically, operate at frequencies up to 60 GHz, and are non-reciprocal and complex, implementing multiple-beam and scanning-beam concepts and monolithic semiconductor devices and techniques. The acquisition of accurate antenna performance measurements is a critical part of the advanced antenna research program and represents a substantial antenna measurement technology challenge, considering the special characteristics of future spacecraft communications antennas. Comparison of various antenna testing techniques and their relative advantages and disadvantages shows that the near-field approach is necessary to meet immediate and long-term testing requirements. The LeRC facilities, the 22 ft x 22 ft horizontal antenna boresight planar scanner and the 60 ft x 60 ft vertical antenna boresight planar scanner (with 60 GHz frequency and D/lambda = 3000 electrical size capabilities), will meet future program testing requirements.

  18. Self-Referencing Hartmann Test for Large-Aperture Telescopes

    NASA Technical Reports Server (NTRS)

    Korechoff, Robert P.; Oseas, Jeffrey M.

    2010-01-01

    A method is proposed for end-to-end, full-aperture testing of large-aperture telescopes using an innovative variation of a Hartmann mask. This technique is practical for telescopes with primary mirrors tens of meters in diameter and of any design. Furthermore, it is applicable to the entire optical band (near IR, visible, ultraviolet), relatively insensitive to environmental perturbations, and is suitable for ambient laboratory as well as thermal-vacuum environments. The only restriction is that the telescope optical axis must be parallel to the local gravity vector during testing. The standard Hartmann test utilizes an array of pencil beams that are cut out of a well-corrected wavefront using a mask. The pencil beam array is expanded to fill the full aperture of the telescope. The detector plane of the telescope is translated back and forth along the optical axis in the vicinity of the nominal focal plane, and the centroid of each pencil beam image is recorded. Standard analytical techniques are then used to reconstruct the telescope wavefront from the centroid data. The expansion of the array of pencil beams is usually accomplished by double passing the beams through the telescope under test. However, this requires a well-corrected autocollimation flat, the diameter of which is approximately equal to that of the telescope aperture. Thus, the standard Hartmann method does not scale well because of the difficulty and expense of building and mounting a well-corrected, large-aperture flat. The innovation in the testing method proposed here is to replace the large-aperture, well-corrected, monolithic autocollimation flat with an array of small-aperture mirrors. In addition to eliminating the need for a large optic, the surface figure requirement for the small mirrors is relaxed compared to that required of the large autocollimation flat. The key point that allows this method to work is that the small mirrors need to operate as a monolithic flat only with regard to tip/tilt and not piston, because in collimated space piston has no effect on the image centroids. The problem of aligning the small mirrors in tip/tilt requires a two-part solution. First, each mirror is suspended from a two-axis gimbal. The orientation of the gimbal is maintained by gravity. Second, the mirror is aligned such that the mirror normal is parallel to the gravity vector. This is accomplished interferometrically in a test fixture. Of course, the test fixture itself needs to be calibrated with respect to gravity.
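
    Numerically, the reconstruction step works on the centroid data: the displacement of each pencil-beam image between detector positions gives a local wavefront slope, and a least-squares fit recovers low-order aberration coefficients. The sketch below fits tip, tilt, defocus, and astigmatism to slope measurements; the polynomial basis and the assumption that slopes have already been derived from the centroid shifts are illustrative simplifications of the standard analysis the text refers to.

```python
import numpy as np

def fit_wavefront(x, y, sx, sy):
    """Least-squares fit of low-order aberration coefficients to measured
    wavefront slopes (sx, sy) at pencil-beam positions (x, y), assuming
    W(x, y) = a1*x + a2*y + a3*(x^2 + y^2) + a4*(x^2 - y^2) + a5*x*y."""
    zero = np.zeros_like(x)
    one = np.ones_like(x)
    # Rows modelling dW/dx, then rows modelling dW/dy.
    Ax = np.column_stack([one, zero, 2 * x,  2 * x, y])
    Ay = np.column_stack([zero, one, 2 * y, -2 * y, x])
    A = np.vstack([Ax, Ay])
    b = np.concatenate([sx, sy])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs   # [tip, tilt, defocus, astig 0deg, astig 45deg]
```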

  19. Large active mirror in aluminium

    NASA Astrophysics Data System (ADS)

    Leblanc, Jean-M.; Rozelot, Jean-Pierre

    1991-11-01

    The Large Active Mirrors in Aluminum (LAMA) project is intended as a metallic alternative to conventional glass mirrors. This alternative is to bring about definite improvements in terms of lower cost, shorter manufacturing time, and reduced brittleness. Combined in a system approach that integrates design, development, and manufacturing of both the aluminum meniscus and its active support, the LAMA project is a technologically consistent product for astronomical and laser telescopes. Large mirrors, up to 8 m in diameter, can be delivered. Recent progress in active optics makes possible control, as well as real-time adjustment, of a metallic mirror's deformations, especially those induced by temperature variations and/or aging. It also enables correction of any low-frequency surface waves that escaped polishing. In addition, the manufacturing process used to produce the aluminum segments, together with the electron welding technique, ensures the material's homogeneity. The quality of the surface condition will result from optimized implementation of the specific aluminum machining and polishing techniques. This paper reviews existing aluminum realizations in comparison with glass mirrors, and gives the main results obtained during a feasibility demonstration phase based on 8 m mirror requirements.

  20. A prototype automatic phase compensation module

    NASA Technical Reports Server (NTRS)

    Terry, John D.

    1992-01-01

    The growing demand for high-gain and accurate satellite communication systems will necessitate the utilization of large reflector systems. One area of concern in reflector-based satellite communication is large-scale surface deformation due to thermal effects. These distortions, when present, can degrade the performance of the reflector system appreciably. This performance degradation is manifested by a decrease in peak gain, an increase in sidelobe level, and pointing errors. It is essential to compensate for these distortion effects and to maintain the required system performance in the operating space environment. For this reason the development of a technique to offset the degradation effects is highly desirable. Currently, most research is directed at developing better materials for the reflector; these materials have a lower coefficient of linear expansion, thereby reducing the surface errors. Alternatively, one can minimize the distortion effects of these large-scale errors by adaptive phased array compensation. Adaptive phased array techniques have been studied extensively at NASA and elsewhere. Presented in this paper is a prototype automatic phase compensation module, designed and built at NASA Lewis Research Center, which is the first stage of development for an adaptive array compensation module.
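
    The compensation principle itself is compact: each array element applies the conjugate of the phase error introduced by the distorted reflector. A minimal sketch follows, assuming a normal-incidence path error of twice the surface error and an illustrative 30 GHz carrier (neither figure is taken from the paper):

        import numpy as np

        WAVELENGTH = 0.01  # metres (30 GHz); illustrative value only

        def compensation_phases(surface_error_m):
            # Conjugate-phase weights for an adaptive feed array, assuming the
            # reflection doubles the surface error into path error.
            path_error = 2.0 * surface_error_m
            phase_error = 2.0 * np.pi * path_error / WAVELENGTH
            return np.exp(-1j * phase_error)   # conjugate cancels the error

        # Example: ~0.5 mm rms thermal distortion seen by a 16-element feed map.
        errors = 0.0005 * np.random.default_rng(0).standard_normal(16)
        weights = compensation_phases(errors)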

  1. Distributed Synchronization Technique for OFDMA-Based Wireless Mesh Networks Using a Bio-Inspired Algorithm

    PubMed Central

    Kim, Mi Jeong; Maeng, Sung Joon; Cho, Yong Soo

    2015-01-01

    In this paper, a distributed synchronization technique based on a bio-inspired algorithm is proposed for an orthogonal frequency division multiple access (OFDMA)-based wireless mesh network (WMN) with a time difference of arrival. The proposed time- and frequency-synchronization technique uses only the signals received from the neighbor nodes, by considering the effect of the propagation delay between the nodes. It achieves a fast synchronization with a relatively low computational complexity because it is operated in a distributed manner, not requiring any feedback channel for the compensation of the propagation delays. In addition, a self-organization scheme that can be effectively used to construct 1-hop neighbor nodes is proposed for an OFDMA-based WMN with a large number of nodes. The performance of the proposed technique is evaluated with regard to the convergence property and synchronization success probability using a computer simulation. PMID:26225974
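
    The core update of such a bio-inspired scheme can be pictured as a coupled-oscillator iteration in which each node averages the delay-compensated timing offsets it hears from its neighbors. The sketch below is a minimal illustration of that idea, not the authors' algorithm; the topology, gain, and delay values are invented for the example:

        import numpy as np

        def sync_step(offsets, neighbors, delay, delay_est, gain=0.3):
            # One round of distributed sync: node i hears each neighbor's pulse
            # shifted by the propagation delay, corrects it with its own delay
            # estimate, and nudges its offset toward the neighbor average.
            new = offsets.copy()
            for i, nbrs in enumerate(neighbors):
                observed = [offsets[j] + delay[i][j] for j in nbrs]
                corrected = [o - delay_est[i][j] for o, j in zip(observed, nbrs)]
                new[i] = offsets[i] + gain * (np.mean(corrected) - offsets[i])
            return new

        # Ring of 4 nodes; with accurate delay estimates the offsets converge to
        # a common value, while uncompensated delays would bias the consensus.
        offsets = np.array([0.0, 3.0, -2.0, 5.0])
        neighbors = [[1, 3], [0, 2], [1, 3], [0, 2]]
        delay = [[0, 1.0, 0, 0.5], [1.0, 0, 0.7, 0],
                 [0, 0.7, 0, 0.9], [0.5, 0, 0.9, 0]]
        for _ in range(50):
            offsets = sync_step(offsets, neighbors, delay, delay)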

  2. Distributed Synchronization Technique for OFDMA-Based Wireless Mesh Networks Using a Bio-Inspired Algorithm.

    PubMed

    Kim, Mi Jeong; Maeng, Sung Joon; Cho, Yong Soo

    2015-07-28

    In this paper, a distributed synchronization technique based on a bio-inspired algorithm is proposed for an orthogonal frequency division multiple access (OFDMA)-based wireless mesh network (WMN) with a time difference of arrival. The proposed time- and frequency-synchronization technique uses only the signals received from the neighbor nodes, by considering the effect of the propagation delay between the nodes. It achieves a fast synchronization with a relatively low computational complexity because it is operated in a distributed manner, not requiring any feedback channel for the compensation of the propagation delays. In addition, a self-organization scheme that can be effectively used to construct 1-hop neighbor nodes is proposed for an OFDMA-based WMN with a large number of nodes. The performance of the proposed technique is evaluated with regard to the convergence property and synchronization success probability using a computer simulation.

  3. Amelioration de la precision d'un bras robotise pour une application d'ebavurage

    NASA Astrophysics Data System (ADS)

    Mailhot, David

    Process automation is an increasingly popular solution for tasks that are complex, tedious, or even dangerous for humans. Flexibility, low cost and compactness make industrial robots very attractive for automation. Even though much development has been done to enhance robot performance, robots still cannot meet some industries' requirements. For instance, the aerospace industry requires very tight tolerances on a large variety of parts, which is not what robots were originally designed for. When it comes to robotic deburring, robot imprecision is a major problem that needs to be addressed before the process can be implemented in production. This master's thesis explores different calibration techniques for a robot's dimensions that could overcome this problem and make the robotic deburring application possible. Some calibration techniques that are easy to implement in a production environment are simulated and compared. A calibration technique for the tool's dimensions is simulated and implemented to evaluate its potential. The most efficient technique is used within the application. Finally, the production environment and requirements are explained. The remaining imprecision is compensated by the use of a force/torque sensor integrated with the robot's controller and by the use of a camera. Many tests were made to define the best parameters for deburring a specific feature on a chosen part. Concluding tests are shown and demonstrate the potential of robotic deburring. Keywords: robotic calibration, robotic arm, robotic precision, robotic deburring

  4. Percutaneous Management of Accidentally Retained Foreign Bodies During Image-Guided Non-vascular Procedures: Novel Technique Using a Large-Bore Biopsy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cazzato, Roberto Luigi, E-mail: gigicazzato@hotmail.it; Garnon, Julien, E-mail: juleiengarnon@gmail.com; Ramamurthy, Nitin, E-mail: nitin-ramamurthy@hotmail.com

    Objective: To describe a novel percutaneous image-guided technique using a large-bore biopsy system to retrieve foreign bodies (FBs) accidentally retained during non-vascular interventional procedures. Materials and Methods: Between May 2013 and October 2015, five patients underwent percutaneous retrieval of five iatrogenic FBs, including a biopsy needle tip in the femoral head following osteoblastoma biopsy and radiofrequency ablation (RFA); a co-axial needle shaft within a giant desmoid tumour following cryoablation; and three post-vertebroplasty cement tails within paraspinal muscles. All FBs were retrieved immediately following the original procedures under local or general anaesthesia, using combined computed tomography (CT) and fluoroscopic guidance. The basic technique involved positioning a 6G trocar sleeve around the FB long axis and co-axially advancing an 8G biopsy needle to retrieve the FB within the biopsy core. Retrospective chart review facilitated analysis of procedures, FBs, technical success, and complications. Results: Mean FB size was 23 mm (range 8–74 mm). Four FBs were located within 10 mm of non-vascular significant anatomic structures. The basic technique was successful in 3 cases; 2 cases required technical modifications, including using a stiff guide-wire to facilitate retrieval in the case of the post-cryoablation FB, and using the central mandrin of the 6G trocar to push a cement tract back into an augmented vertebra when initial retrieval failed. Overall technical success (FB retrieval or removal to a non-hazardous location) was 100 %, with no complications. Conclusion: Percutaneous image-guided retrieval of iatrogenic FBs using a large-bore biopsy system is a feasible, safe, effective, and versatile technique, with potential advantages over existing methods.

  5. Inverse halftoning via robust nonlinear filtering

    NASA Astrophysics Data System (ADS)

    Shen, Mei-Yin; Kuo, C.-C. Jay

    1999-10-01

    A new blind inverse halftoning algorithm based on a nonlinear filtering technique of low computational complexity and low memory requirement is proposed in this research. It is called blind since we do not require knowledge of the halftone kernel. The proposed scheme performs nonlinear filtering in conjunction with edge enhancement to improve the quality of an inverse halftoned image. Distinct features of the proposed approach include efficient smoothing of halftone patterns in large homogeneous areas, an additional edge enhancement capability to recover edge quality, and excellent PSNR performance with only local integer operations and a small memory buffer.
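
    As a rough illustration of the two stages described above (not the authors' filter, which relies on integer-only local operations), a nonlinear median-smoothing pass followed by unsharp masking captures the flavor of such an algorithm:

        import numpy as np
        from scipy.ndimage import median_filter, gaussian_filter

        def inverse_halftone(halftone, size=5, edge_gain=0.6):
            # Nonlinear (median) smoothing suppresses the dot pattern in
            # homogeneous areas without blurring edges as badly as a box filter.
            gray = median_filter(halftone.astype(float), size=size)
            # Unsharp masking then restores edge contrast lost to smoothing.
            blurred = gaussian_filter(gray, sigma=2.0)
            return np.clip(gray + edge_gain * (gray - blurred), 0.0, 1.0)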

  6. Universal computer test stand (recommended computer test requirements). [for space shuttle computer evaluation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Techniques are considered that would be used to characterize aerospace computers, with the space shuttle application as the end usage. The system-level digital problems which have been encountered and documented are surveyed. From the large cross section of tests, an optimum set is recommended that has a high probability of discovering documented system-level digital problems within laboratory environments. A baseline hardware and software system, required as a laboratory tool to test aerospace computers, is defined. Hardware and software baselines and the additions necessary to interface the UTE to aerospace computers for test purposes are outlined.

  7. Quantitative PCR high-resolution melting (qPCR-HRM) curve analysis, a new approach to simultaneously screen point mutations and large rearrangements: application to MLH1 germline mutations in Lynch syndrome.

    PubMed

    Rouleau, Etienne; Lefol, Cédrick; Bourdon, Violaine; Coulet, Florence; Noguchi, Tetsuro; Soubrier, Florent; Bièche, Ivan; Olschwang, Sylviane; Sobol, Hagay; Lidereau, Rosette

    2009-06-01

    Several techniques have been developed to screen mismatch repair (MMR) genes for deleterious mutations. Until now, two different techniques were required to screen for both point mutations and large rearrangements. For the first time, we propose a new approach, called "quantitative PCR (qPCR) high-resolution melting (HRM) curve analysis (qPCR-HRM)," which combines qPCR and HRM to obtain a rapid and cost-effective method suitable for testing a large series of samples. We designed PCR amplicons to scan the MLH1 gene using qPCR HRM. Seventy-six patients were fully scanned in replicate, including 14 wild-type patients and 62 patients with known mutations (57 point mutations and five rearrangements). To validate the detected mutations, we used sequencing and/or hybridization on a dedicated MLH1 array-comparative genomic hybridization (array-CGH). All point mutations and rearrangements detected by denaturing high-performance liquid chromatography (dHPLC)+multiplex ligation-dependent probe amplification (MLPA) were successfully detected by qPCR HRM. Three large rearrangements were characterized with the dedicated MLH1 array-CGH. One variant was detected with qPCR HRM in a wild-type patient and was located within the reverse primer. One variant was not detected with qPCR HRM or with dHPLC due to its proximity to a T-stretch. With qPCR HRM, prescreening for point mutations and large rearrangements are performed in one tube and in one step with a single machine, without the need for any automated sequencer in the prescreening process. In replicate, its reagent cost, sensitivity, and specificity are comparable to those of dHPLC+MLPA techniques. However, qPCR HRM outperformed the other techniques in terms of its rapidity and amount of data provided.

  8. Liposome Technology for Industrial Purposes

    PubMed Central

    Wagner, Andreas; Vorauer-Uhl, Karola

    2011-01-01

    Liposomes, spherical vesicles consisting of one or more phospholipid bilayers, were first described in the mid-1960s by Bangham and coworkers. Since then, liposomes have made their way to the market. Today, numerous lab-scale but only a few large-scale techniques are available. However, many of these methods have serious limitations in terms of the entrapment of sensitive molecules due to their exposure to mechanical and/or chemical stress. This paper summarizes exclusively scalable techniques and focuses on their strengths and limitations with respect to industrial applicability. Regulatory requirements concerning liposomal drug formulations, based on FDA and EMEA documents, are also considered. PMID:21490754

  9. Large scale GW calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Govoni, Marco; Galli, Giulia

    We present GW calculations of molecules, ordered and disordered solids and interfaces, which employ an efficient contour deformation technique for frequency integration and do not require the explicit evaluation of virtual electronic states nor the inversion of dielectric matrices. We also present a parallel implementation of the algorithm, which takes advantage of separable expressions of both the single particle Green’s function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. The newly developed technique was applied to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons.

  10. Large Scale GW Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Govoni, Marco; Galli, Giulia

    We present GW calculations of molecules, ordered and disordered solids and interfaces, which employ an efficient contour deformation technique for frequency integration and do not require the explicit evaluation of virtual electronic states nor the inversion of dielectric matrices. We also present a parallel implementation of the algorithm which takes advantage of separable expressions of both the single particle Green's function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. We applied the newly developed technique to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons.

  11. Large scale GW calculations

    DOE PAGES

    Govoni, Marco; Galli, Giulia

    2015-01-12

    We present GW calculations of molecules, ordered and disordered solids and interfaces, which employ an efficient contour deformation technique for frequency integration and do not require the explicit evaluation of virtual electronic states nor the inversion of dielectric matrices. We also present a parallel implementation of the algorithm, which takes advantage of separable expressions of both the single particle Green’s function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. The newly developed technique was applied to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons.

  12. Least-squares sequential parameter and state estimation for large space structures

    NASA Technical Reports Server (NTRS)

    Thau, F. E.; Eliazov, T.; Montgomery, R. C.

    1982-01-01

    This paper presents the formulation of simultaneous state and parameter estimation problems for flexible structures in terms of least-squares minimization problems. The approach combines an on-line order determination algorithm with least-squares algorithms for finding estimates of modal approximation functions, modal amplitudes, and modal parameters. It combines previous results on separable nonlinear least squares estimation with a regression analysis formulation of the state estimation problem. The technique makes use of sequential Householder transformations, which allow for sequential accumulation of the matrices required during the identification process. The technique is used to identify the modal parameters of a flexible beam.
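
    The sequential-accumulation mechanism can be sketched in a few lines: each new measurement row is folded into a triangular factor by a QR step (numpy's QR uses Householder reflections internally). The sketch illustrates the mechanism only; the modal-estimation details of the paper are not reproduced, and all names and sizes are illustrative:

        import numpy as np

        def slsq_update(R, qtb, a_row, b_val):
            # Fold one new measurement (a_row, b_val) into the triangular
            # factor R and the transformed right-hand side qtb.
            A = np.vstack([R, a_row[None, :]])
            rhs = np.append(qtb, b_val)
            Q, R_new = np.linalg.qr(A)   # Householder QR under the hood
            return R_new, Q.T @ rhs

        # Accumulate 20 noisy measurements of a 3-parameter model, then solve.
        rng = np.random.default_rng(1)
        A = rng.standard_normal((20, 3))
        x_true = np.array([1.0, -2.0, 0.5])
        b = A @ x_true + 0.01 * rng.standard_normal(20)
        R, qtb = np.zeros((3, 3)), np.zeros(3)
        for a_row, b_val in zip(A, b):
            R, qtb = slsq_update(R, qtb, a_row, b_val)
        x_hat = np.linalg.solve(R, qtb)   # close to x_true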

  13. A technique for designing active control systems for astronomical telescope mirrors

    NASA Technical Reports Server (NTRS)

    Howell, W. E.; Creedon, J. F.

    1973-01-01

    The problem of designing a control system to achieve and maintain the required surface accuracy of the primary mirror of a large space telescope was considered. Control over the mirror surface is obtained through the application of a corrective force distribution by actuators located on the rear surface of the mirror. The design procedure is an extension of a modal control technique developed for distributed parameter plants with known eigenfunctions to include plants whose eigenfunctions must be approximated by numerical techniques. Instructions are given for constructing the mathematical model of the system, and a design procedure is developed for use with typical numerical data in selecting the number and location of the actuators. Examples of actuator patterns and their effect on various errors are given.

  14. Opportunities for Live Cell FT-Infrared Imaging: Macromolecule Identification with 2D and 3D Localization

    PubMed Central

    Mattson, Eric C.; Aboualizadeh, Ebrahim; Barabas, Marie E.; Stucky, Cheryl L.; Hirschmugl, Carol J.

    2013-01-01

    Infrared (IR) spectromicroscopy, or chemical imaging, is an evolving technique that is poised to make significant contributions in the fields of biology and medicine. Recent developments in sources, detectors, measurement techniques and specimen holders have now made diffraction-limited Fourier transform infrared (FTIR) imaging of cellular chemistry in living cells a reality. The availability of bright, broadband IR sources and large-area, pixelated detectors facilitates live cell imaging, which requires rapid measurements using non-destructive probes. In this work, we review advances in the field of FTIR spectromicroscopy that have contributed to live-cell two- and three-dimensional IR imaging, and discuss several key examples that highlight the utility of this technique for studying the structure and chemistry of living cells. PMID:24256815

  15. Program Correctness, Verification and Testing for Exascale (Corvette)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Koushik; Iancu, Costin; Demmel, James W

    The goal of this project is to provide tools to assess the correctness of parallel programs written using hybrid parallelism. There is a dire lack of both theoretical and engineering know-how in the area of finding bugs in hybrid or large scale parallel programs, which our research aims to change. In the project we have demonstrated novel approaches in several areas: 1. Low overhead automated and precise detection of concurrency bugs at scale. 2. Using low overhead bug detection tools to guide speculative program transformations for performance. 3. Techniques to reduce the concurrency required to reproduce a bug using partial program restart/replay. 4. Techniques to provide reproducible execution of floating point programs. 5. Techniques for tuning the floating point precision used in codes.

  16. Location estimation in wireless sensor networks using spring-relaxation technique.

    PubMed

    Zhang, Qing; Foh, Chuan Heng; Seet, Boon-Chong; Fong, A C M

    2010-01-01

    Accurate and low-cost autonomous self-localization is a critical requirement of various applications of a large-scale distributed wireless sensor network (WSN). Due to the massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS) are not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, lightweight distributed algorithms based on the spring-relaxation technique for location computation, and a cooperative approach to achieve a given location estimation accuracy with a low number of nodes with known locations. We provide analysis to show the suitability of the spring-relaxation technique for WSN localization with the cooperative approach, and perform simulation experiments to illustrate its accuracy in localization.
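
    A minimal sketch of the spring-relaxation update (variable names and step size are assumptions): every pair of nodes with a ranging measurement is treated as a spring whose rest length is the RSSI-derived distance, and free nodes move along the net spring force while anchor nodes stay fixed:

        import numpy as np

        def relax_step(pos, dist_meas, anchors, step=0.1):
            # pos: (n, 2) position estimates; dist_meas: {(i, j): d_ij};
            # anchors: indices of nodes whose positions are known.
            force = np.zeros_like(pos)
            for (i, j), d_ij in dist_meas.items():
                delta = pos[j] - pos[i]
                d = np.linalg.norm(delta) + 1e-12
                f = (d - d_ij) * delta / d   # Hooke's law along the edge
                force[i] += f
                force[j] -= f
            force[list(anchors)] = 0.0       # anchor nodes do not move
            return pos + step * force

        # Iterating relax_step until the forces vanish yields coordinates
        # consistent with the measured distances and the anchor positions.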

  17. Biomedical microfluidic devices by using low-cost fabrication techniques: A review.

    PubMed

    Faustino, Vera; Catarino, Susana O; Lima, Rui; Minas, Graça

    2016-07-26

    One of the most popular methods to fabricate biomedical microfluidic devices is the soft-lithography technique. However, the fabrication of the moulds to produce microfluidic devices, such as SU-8 moulds, usually requires a cleanroom environment that can be quite costly. Therefore, many efforts have been made to develop low-cost alternatives for the fabrication of microstructures, avoiding the use of cleanroom facilities. Recently, low-cost techniques that do not require cleanroom facilities and achieve aspect ratios of more than 20 for fabricating those SU-8 moulds have been gaining popularity among the biomedical research community. In those techniques, ultraviolet (UV) exposure equipment, commonly used in the printed circuit board (PCB) industry, replaces the more expensive and less available mask aligner that has been used in the last 15 years for SU-8 patterning. Alternatively, non-lithographic low-cost techniques, due to their suitability for large-scale production, have increased the interest of the industrial and research communities in developing simple, rapid and low-cost microfluidic structures. These alternative techniques include print-and-peel (PAP) methods, laserjet, solid ink, cutting plotters and micromilling, which use equipment available in almost all laboratories and offices. An example is the xurography technique, which uses a cutting plotter machine and adhesive vinyl films to generate the master moulds to fabricate microfluidic channels. In this review, we present a selection of the most recent lithographic and non-lithographic low-cost techniques to fabricate microfluidic structures, focused on the features and limitations of each technique. Only microfabrication methods that do not require the use of cleanrooms are considered. Additionally, potential applications of these microfluidic devices in biomedical engineering are presented with some illustrative examples.

  18. Torsion testing for general constitutive relations: Gilles Canova's Master's Thesis

    NASA Astrophysics Data System (ADS)

    Kocks, U. F.; Stout, M. G.

    1999-09-01

    Torsion testing is useful but cumbersome: useful as a technique to determine large-strain plastic behaviour in a plane-strain mode; cumbersome because the testing of a solid rod provides reliable data in only the simplest of circumstances, for example, when there is no strain hardening or rate sensitivity. The testing of short thin-walled tubes is widely regarded as the best current technique to determine general constitutive behaviour; a drawback is the requirement for large specimen blanks (say, 3 cm cube) and the complex machining procedure. Gilles Canova proposed an alternative - the testing of a series of solid rods of differing diameters, proving the principle by using 10 such rods for one test. We have undertaken a similar series of tests on just four rods plus one short tube for comparison. We found the results from the two types of torsion test in excellent agreement; however, it was not a critical test, inasmuch as the rate sensitivity of the pure copper used was too small. Had it been greater (as, for example, in aluminium and at higher temperature), the evaluation of perhaps six rods would have provided the constitutive response with regard to both hardening and rate sensitivity at the same time - which would require two short-tube tests (plus duplications). The main drawback of the multiple-rod test is that it requires considerable numerical effort in the evaluation. A closer integration with modelling of the constitutive behaviour would be helpful.

  19. Design of a practical model-observer-based image quality assessment method for CT imaging systems

    NASA Astrophysics Data System (ADS)

    Tseng, Hsin-Wu; Fan, Jiahua; Cao, Guangzhi; Kupinski, Matthew A.; Sainath, Paavana

    2014-03-01

    The channelized Hotelling observer (CHO) is a powerful method for quantitative image quality evaluation of CT systems and their image reconstruction algorithms. It has recently been used to validate the dose reduction capability of iterative image-reconstruction algorithms implemented on CT imaging systems. The use of the CHO for routine and frequent system evaluations is desirable, both for quality assurance and for further system optimization. The use of channels substantially reduces the amount of data required to achieve accurate estimates of observer performance; however, the number of scans required is still large even with the use of channels. This work explores different data reduction schemes and designs a new approach that requires only a few CT scans of a phantom. The leave-one-out likelihood (LOOL) method developed by Hoffbeck and Landgrebe is studied as an efficient method of estimating the covariance matrices needed to compute CHO performance. Three kinds of approaches are included in the study: a conventional CHO estimation technique with a large sample size, the conventional technique with fewer samples, and the new LOOL-based approach with fewer samples. The mean value and standard deviation of the area under the ROC curve (AUC) are estimated by a shuffle (resampling) method. Both simulation and real-data results indicate that an 80% data reduction can be achieved without loss of accuracy. This data reduction makes the proposed approach a practical tool for routine CT system assessment.
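
    For orientation, the CHO itself takes only a few lines once images have been reduced to channel outputs. The sketch below computes the conventional (large-sample) Hotelling template and detectability; the LOOL covariance refinement studied in the paper is omitted, and the array names are illustrative:

        import numpy as np

        def cho_detectability(ch_sig, ch_bkg):
            # ch_sig, ch_bkg: (n_scans, n_channels) channel outputs for
            # signal-present and signal-absent images.
            dmean = ch_sig.mean(axis=0) - ch_bkg.mean(axis=0)
            S = 0.5 * (np.cov(ch_sig, rowvar=False) + np.cov(ch_bkg, rowvar=False))
            w = np.linalg.solve(S, dmean)   # Hotelling template
            t_s, t_b = ch_sig @ w, ch_bkg @ w
            return (t_s.mean() - t_b.mean()) / np.sqrt(0.5 * (t_s.var() + t_b.var()))

        # Under Gaussian assumptions, AUC = Phi(d / sqrt(2)) for detectability d.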

  20. A fast time-difference inverse solver for 3D EIT with application to lung imaging.

    PubMed

    Javaherian, Ashkan; Soleimani, Manuchehr; Moeller, Knut

    2016-08-01

    A class of sparse optimization techniques that require only matrix-vector products, rather than explicit access to the forward matrix and its transpose, has received much attention in the past decade for dealing with large-scale inverse problems. This study tailors the application of the so-called Gradient Projection for Sparse Reconstruction (GPSR) algorithm to large-scale time-difference three-dimensional electrical impedance tomography (3D EIT). 3D EIT typically suffers from the need for a large number of voxels to cover the whole domain, so its application to real-time imaging, for example monitoring of lung function, remains scarce, since the large number of degrees of freedom of the problem greatly increases storage space and reconstruction time. This study shows the great potential of GPSR for large-size time-difference 3D EIT. Further studies are needed to improve its accuracy for imaging small-size anomalies.
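
    A minimal sketch of the gradient-projection idea behind GPSR, based on the standard split x = u - v with u, v >= 0 and projected gradient steps; the forward operator is shown as an explicit matrix for brevity, although in EIT only products with A and its transpose would be formed:

        import numpy as np

        def gpsr(y, A, tau, n_iter=200):
            # Solve min 0.5 * ||y - A x||^2 + tau * ||x||_1 by projecting
            # gradient steps of the split problem onto the nonnegative orthant.
            n = A.shape[1]
            alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # safe fixed step size
            u, v = np.zeros(n), np.zeros(n)
            for _ in range(n_iter):
                g = A.T @ (A @ (u - v) - y)           # data-term gradient
                u = np.maximum(0.0, u - alpha * (g + tau))
                v = np.maximum(0.0, v - alpha * (-g + tau))
            return u - v   # sparse time-difference conductivity update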

  1. Evaluating Application Resilience with XRay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Sui; Bronevetsky, Greg; Li, Bin

    2015-05-07

    The rising count and shrinking feature size of transistors within modern computers are making them increasingly vulnerable to various types of soft faults. This problem is especially acute in high-performance computing (HPC) systems used for scientific computing, because these systems include many thousands of compute cores and nodes, all of which may be utilized in a single large-scale run. The increasing vulnerability of HPC applications to errors induced by soft faults is motivating extensive work on techniques to make these applications more resilient to such faults, ranging from generic techniques such as replication or checkpoint/restart to algorithm-specific error detection and tolerance techniques. Effective use of such techniques requires a detailed understanding of how a given application is affected by soft faults, to ensure that (i) efforts to improve application resilience are spent in the code regions most vulnerable to faults and (ii) the appropriate resilience technique is applied to each code region. This paper presents XRay, a tool to view application vulnerability to soft errors, and illustrates how XRay can be used in the context of a representative application. In addition to providing actionable insights into application behavior, XRay automatically selects the number of fault injection experiments required to provide an informative view of application behavior, ensuring that the information is statistically well-grounded without performing unnecessary experiments.

  2. Pigeon chest: comparative analysis of surgical techniques in minimal access repair of pectus carinatum (MARPC).

    PubMed

    Muntean, Ancuta; Stoica, Ionica; Saxena, Amulya K

    2018-02-01

    After minimally invasive repair of pectus excavatum (MIRPE), similar procedures for pectus carinatum were developed. This study aimed to analyse the various published techniques of minimal access repair for pectus carinatum (MARPC) and compare their outcomes. Literature was reviewed on PubMed with the terms "pectus carinatum", "minimal access repair", "thoracoscopy" and "children". Twelve MARPC techniques, reported across 13 articles and covering 140 patients with a mean age of 15.46 years, met the inclusion criteria. Correction was successful in n = 125 (about 89%) of the cumulative reports, with seven articles reporting 100% success. The complication rate was 39.28%. Since the pectus bar is placed over the sternum and has a large contact area, skin irritation was the most frequent morbidity (n = 20, 14.28%); within the complication group (n = 55), wire breakage (n = 21, 38.18%) and bar displacement (n = 10, 18.18%) were the most frequent complications. Twenty-two (15.71%) patients required a second procedure. Recurrences have been reported for four of the twelve techniques. There were no lethal outcomes. MARPC techniques are not standardized, as MIRPE is, so comparative analysis is difficult, as the only common denominator is minimal access. Surgical morbidity in MARPC is high, affecting more than two-thirds of patients, with about 15% requiring surgery for complication management.

  3. Percutaneous fetoscopic closure of large open spina bifida using a bilaminar skin substitute.

    PubMed

    Lapa Pedreira, Denise A; Acacio, Gregório L; Gonçalves, Rodrigo T; Sá, Renato Augusto M; Brandt, Reynaldo A; Chmait, Ramen; Kontopoulos, Eftichia; Quintero, Ruben A

    2018-01-04

    We have previously described our percutaneous fetoscopic technique for the treatment of open spina bifida (OSB). However, approximately 20-30% of OSB defects are too large to allow primary skin closure. We hereby describe a modification of our standard technique using a bilaminar skin substitute to allow closure of such large spinal defects. The aim of this study was to report our clinical experience with the use of a bilaminar skin substitute and a percutaneous fetoscopic technique for the prenatal closure of large spina bifida defects. Surgeries were performed between 24.0 and 28.9 gestational weeks under general anesthesia, using an entirely percutaneous fetoscopic approach with partial CO2 insufflation of the uterine cavity, as previously described. If there was enough skin to be sutured in the midline, only a biocellulose patch was placed over the placode. In cases where skin approximation was not possible, a bilaminar skin substitute (two layers: one silicone and one dermal matrix) was placed over the biocellulose. The surgical site was assessed at birth, and long-term follow-up was performed. Forty-seven consecutive fetuses underwent percutaneous fetoscopic OSB repair. Preterm premature rupture of membranes (PPROM) occurred in 38 (84%), and the mean gestational age at delivery was 32.8 ± 2.5 weeks. A bilaminar skin substitute was required in 13 (29%), of which 5 were associated with myeloschisis. In all cases the skin substitute was found at the surgical site at birth. In 3 (15%) of these cases, additional postnatal repair was needed. In the other 10 cases, the silicone layer detached spontaneously from the dermal matrix (on average 25 days after birth), and the lesion healed by secondary intention. Operating time was significantly longer in cases requiring the bilaminar skin substitute (an additional 42 minutes). The subgroup with the bilaminar skin substitute had a PPROM rate and delivery gestational age similar to those of the one-patch group. Complete reversal of hindbrain herniation occurred in 68% of the one-patch group and in 33% (p < 0.05) of the two-patch group. In 4 cases there was no reversal, and 3 of them were myeloschisis cases. Large OSB defects may be successfully treated in utero using a bilaminar skin substitute over a biocellulose patch through an entirely percutaneous approach. Although the operating time is longer, surgical outcomes are similar to those of cases closed primarily. Myeloschisis seems to have a worse prognosis than myelomeningocele. PPROM and preterm birth continue to be a challenge. Further experience is needed to assess the risks and benefits of this technique for the management of large OSB defects.

  4. Status of Thermal NDT of Space Shuttle Materials at NASA

    NASA Technical Reports Server (NTRS)

    Cramer, K. Elliott; Winfree, William P.; Hodges, Kenneth; Koshti, Ajay; Ryan, Daniel; Reinhardt, Walter W.

    2006-01-01

    Since the Space Shuttle Columbia accident, NASA has focused on improving advanced nondestructive evaluation (NDE) techniques for the Reinforced Carbon-Carbon (RCC) panels that comprise the orbiter's wing leading edge and nose cap. Various nondestructive inspection techniques have been used in the examination of the RCC, but thermography has emerged as an effective inspection alternative to more traditional methods. Thermography is a non-contact inspection method, as compared to ultrasonic techniques, which typically require the use of a coupling medium between the transducer and material. Like radiographic techniques, thermography can inspect large areas, but has the advantages of minimal safety concerns and the ability for single-sided measurements. Details of the analysis technique that has been developed to allow in situ inspection of a majority of shuttle RCC components are discussed. Additionally, validation testing, performed to quantify the performance of the system, is discussed. Finally, the results of applying this technology to the Space Shuttle Discovery after its return from the STS-114 mission in July 2005 are discussed.

  5. Status of Thermal NDT of Space Shuttle Materials at NASA

    NASA Technical Reports Server (NTRS)

    Cramer, K. Elliott; Winfree, William P.; Hodges, Kenneth; Koshti, Ajay; Ryan, Daniel; Reinhardt, Walter W.

    2007-01-01

    Since the Space Shuttle Columbia accident, NASA has focused on improving advanced NDE techniques for the Reinforced Carbon-Carbon (RCC) panels that comprise the orbiter's wing leading edge and nose cap. Various nondestructive inspection techniques have been used in the examination of the RCC, but thermography has emerged as an effective inspection alternative to more traditional methods. Thermography is a non-contact inspection method, as compared to ultrasonic techniques, which typically require the use of a coupling medium between the transducer and material. Like radiographic techniques, thermography can inspect large areas, but has the advantages of minimal safety concerns and the ability for single-sided measurements. Details of the analysis technique that has been developed to allow in situ inspection of a majority of shuttle RCC components are discussed. Additionally, validation testing, performed to quantify the performance of the system, is discussed. Finally, the results of applying this technology to the Space Shuttle Discovery after its return from the STS-114 mission in July 2005 are discussed.

  6. Status of Thermal NDT of Space Shuttle Materials at NASA

    NASA Technical Reports Server (NTRS)

    Cramer, K. Elliott; Winfree, William P.; Hodges, Kenneth; Koshti, Ajay; Ryan, Daniel; Reinhardt, Walter W.

    2006-01-01

    Since the Space Shuttle Columbia accident, NASA has focused on improving advanced NDE techniques for the Reinforced Carbon-Carbon (RCC) panels that comprise the orbiter's wing leading edge and nose cap. Various nondestructive inspection techniques have been used in the examination of the RCC, but thermography has emerged as an effective inspection alternative to more traditional methods. Thermography is a non-contact inspection method, as compared to ultrasonic techniques, which typically require the use of a coupling medium between the transducer and material. Like radiographic techniques, thermography can inspect large areas, but has the advantages of minimal safety concerns and the ability for single-sided measurements. Details of the analysis technique that has been developed to allow in situ inspection of a majority of shuttle RCC components are discussed. Additionally, validation testing, performed to quantify the performance of the system, is discussed. Finally, the results of applying this technology to the Space Shuttle Discovery after its return from the STS-114 mission in July 2005 are discussed.

  7. Techniques for hot structures testing

    NASA Technical Reports Server (NTRS)

    Deangelis, V. Michael; Fields, Roger A.

    1990-01-01

    Hot structures testing has been going on since the early 1960s, beginning with the Mach 6 X-15 airplane. Early hot structures test programs at NASA-Ames-Dryden focused on the operational testing required to support the X-15 flight test program, and early hot structures research projects focused on developing lab test techniques to simulate flight thermal profiles. More recent efforts involved numerous large and small hot structures test programs that served to develop test methods and measurement techniques and to provide data that promoted the correlation of test data with results from analytical codes. In November 1988 a workshop was sponsored that focused on the correlation of hot structures test data with analysis. Limited material is drawn from the workshop, and more formal documentation is provided for topics that focus on the hot structures test techniques used at NASA-Ames-Dryden. Topics covered include data acquisition and test control, the quartz lamp heater systems, current strain and temperature sensors, and the hot structures test techniques used to simulate the flight thermal environment in the lab.

  8. Lip reposition surgery: A new call in periodontics

    PubMed Central

    Sheth, Tejal; Shah, Shilpi; Shah, Mihir; Shah, Ekta

    2013-01-01

    “Gummy smile” is a major concern for a large number of patients visiting the dentist. Esthetics has now become an integral part of the periodontal treatment plan. This article presents a case of a gummy smile in which esthetic correction was achieved through a periodontal plastic surgical procedure wherein a 10-12 mm partial-thickness flap was dissected apical to the mucogingival junction, followed by approximation of the flaps. This novel technique gave excellent post-operative results with enormous patient satisfaction. This chair-side surgical procedure, one of a kind in its outstanding results, is very rarely performed by periodontists. Thus, more clinical work and literature review of this surgical technique are required. To make it a routine surgical procedure, this technique can be incorporated as a part of periodontal plastic surgery in the text. Hence, we put forward our experience of a case with a critical analysis of the surgical technique, including its limitations. PMID:24124310

  9. Directed Incremental Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Person, Suzette; Yang, Guowei; Rungta, Neha; Khurshid, Sarfraz

    2011-01-01

    The last few years have seen a resurgence of interest in the use of symbolic execution -- a program analysis technique developed more than three decades ago to analyze program execution paths. Scaling symbolic execution and other path-sensitive analysis techniques to large systems remains challenging despite recent algorithmic and technological advances. An alternative to solving the problem of scalability is to reduce the scope of the analysis. One approach that is widely studied in the context of regression analysis is to analyze the differences between two related program versions. While such an approach is intuitive in theory, finding efficient and precise ways to identify program differences, and characterize their effects on how the program executes has proved challenging in practice. In this paper, we present Directed Incremental Symbolic Execution (DiSE), a novel technique for detecting and characterizing the effects of program changes. The novelty of DiSE is to combine the efficiencies of static analysis techniques to compute program difference information with the precision of symbolic execution to explore program execution paths and generate path conditions affected by the differences. DiSE is a complementary technique to other reduction or bounding techniques developed to improve symbolic execution. Furthermore, DiSE does not require analysis results to be carried forward as the software evolves -- only the source code for two related program versions is required. A case-study of our implementation of DiSE illustrates its effectiveness at detecting and characterizing the effects of program changes.

  10. A highly efficient bead extraction technique with low bead number for digital microfluidic immunoassay

    PubMed Central

    Tsai, Po-Yen; Lee, I-Chin; Hsu, Hsin-Yun; Huang, Hong-Yuan; Fan, Shih-Kang; Liu, Cheng-Hsien

    2016-01-01

    Here, we describe a technique to manipulate a low number of beads to achieve high washing efficiency with zero bead loss in the washing process of a digital microfluidic (DMF) immunoassay. Previously, two magnetic bead extraction methods were reported for the DMF platform: (1) a single-side electrowetting method and (2) a double-side electrowetting method. The first approach provides high washing efficiency but requires a large number of beads. The second approach reduces the required number of beads but is inefficient when multiple washes are required. More importantly, bead loss during the washing process was unavoidable in both methods. Here, an improved double-side electrowetting method is proposed for bead extraction by utilizing a series of unequal electrodes. It is shown that, with a proper electrode size ratio, only one wash step is required to achieve a 98% washing rate without any bead loss for bead numbers below 100 in a droplet. This allows using only about 25 magnetic beads in a DMF immunoassay, effectively increasing the number of captured analytes on each bead. In our human soluble tumor necrosis factor receptor I (sTNF-RI) model immunoassay, the experimental results show that, compared to our previous results without the proposed bead extraction technique, the immunoassay with a low bead number significantly enhances the fluorescence signal, providing a better limit of detection (3.14 pg/ml) with smaller reagent volumes (200 nl) and shorter analysis time (<1 h). This improved bead extraction technique can be used not only in DMF immunoassays but also in any other bead-based DMF system for different applications. PMID:26858807

  11. Combining eastern and western practices for safe and effective endoscopic resection of large complex colorectal lesions.

    PubMed

    Emmanuel, Andrew; Gulati, Shraddha; Burt, Margaret; Hayee, Bu'Hussain; Haji, Amyn

    2018-05-01

    Endoscopic resection of large colorectal polyps is well established. However, significant differences in technique exist between eastern and western interventional endoscopists. We report the results of endoscopic resection of large complex colorectal lesions from a specialist unit that combines eastern and western techniques for assessment and resection. Endoscopic resections of colorectal lesions of at least 2 cm were included. Lesions were assessed using magnification chromoendoscopy, supplemented by colonoscopic ultrasound in selected cases. A lesion-specific approach to resection with endoscopic mucosal resection or endoscopic submucosal dissection (ESD) was used. Surveillance endoscopy was performed at 3 (SC1) and 12 (SC2) months. Four hundred and sixty-six large (≥20 mm) colorectal lesions (mean size 54.8 mm) were resected. Three hundred and fifty-six were resected using endoscopic mucosal resection and 110 by ESD or hybrid ESD. Fifty-one percent of lesions had been subjected to previous failed attempts at resection or heavy manipulation (≥6 biopsies). Nevertheless, endoscopic resection was deemed successful after an initial attempt in 98%. Recurrence occurred in 15% and could be treated with endoscopic resection in most cases. Only two patients required surgery for perforation. Nine patients had postprocedure bleeding; only two required endoscopic clips. Ninety-six percent of patients without invasive cancer were free from recurrence and had avoided surgery at last follow-up. Combining eastern and western practices for assessment and resection results in safe and effective organ-conserving treatment of complex colorectal lesions. Accurate assessment before and after resection using magnification chromoendoscopy and a lesion-specific approach to resection, incorporating ESD where appropriate, are important factors in achieving these results.

  12. Developing Magnetorheological Finishing (MRF) Technology for the Manufacture of Large-Aperture Optics in Megajoule Class Laser Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menapace, J A

    2010-10-27

    Over the last eight years we have been developing advanced MRF tools and techniques to manufacture meter-scale optics for use in Megajoule class laser systems. These systems call for optics having unique characteristics that can complicate their fabrication using conventional polishing methods. First, exposure to the high-power nanosecond and sub-nanosecond pulsed laser environment in the infrared (>27 J/cm² at 1053 nm), visible (>18 J/cm² at 527 nm), and ultraviolet (>10 J/cm² at 351 nm) demands ultra-precise control of optical figure and finish to avoid intensity modulation and scatter that can result in damage to the optics chain or system hardware. Second, the optics must be super-polished and virtually free of surface and subsurface flaws that can limit optic lifetime through laser-induced damage initiation and growth at the flaw sites, particularly at 351 nm. Lastly, ultra-precise optics for beam conditioning are required to control laser beam quality. These optics contain customized surface topographical structures that cannot be made using traditional fabrication processes. In this review, we present the development and implementation of large-aperture MRF tools and techniques specifically designed to meet the demanding optical performance challenges of large-aperture high-power laser systems. In particular, we discuss the advances made by using MRF technology to expose and remove surface and subsurface flaws in optics during final polishing to yield optics with improved laser damage resistance, the novel application of MRF deterministic polishing to imprint complex topographical information and wavefront correction patterns onto optical surfaces, and our efforts to advance the technology to manufacture large-aperture damage-resistant optics.

  13. Retroperitoneal laparoscopic nephron sparing surgery for large renal angiomyolipoma: Our technique and experience. A case series of 41 patients.

    PubMed

    Liu, Xin; Ma, Xin; Liu, Qiming; Huang, Qingbo; Li, Xintao; Wang, Baojun; Li, Hongzhao; Zhang, Xu

    2018-06-01

    To introduce a 'kidney priority' strategy for treating large renal angiomyolipoma (RAML) with retroperitoneal laparoscopic nephron sparing surgery (RLNSS). From 2010 to 2017, 41 patients with large RAML underwent RLNSS. Distinguished from standard practice, the kidney was preferentially mobilized and separated from the RAML. Subsequently, it was reconstructed. Finally, the RAML was resected from the perinephric fat. The perioperative variables, surgical technique and complications were reviewed. Patients were followed up with ultrasonography and computed tomography. RLNSS was successfully performed in 35 patients, with four conversions to open surgery and two conversions to nephrectomy. Eight patients required an intraoperative blood transfusion. Seven patients experienced postoperative complications, including one wound infection, one urinary tract infection, one pneumonia, one urinary fistula and three hemorrhages. The median operation time was 167 min (range, 95-285 min), the median warm ischemia time was 21 min (range, 0-40 min), and the median estimated blood loss was 200 ml (range, 30-2500 ml). The median postoperative stay was 6.5 days (range, 3-11 days). Angiomyolipoma was confirmed pathologically in all patients. Median serum creatinine increased after surgery, from 0.7 mg/dl (range, 0.4-1.1 mg/dl) preoperatively to 0.8 mg/dl (range, 0.5-1.4 mg/dl) postoperatively (P = 0.016). No patient required dialysis, and no recurrence was observed after a median follow-up of 35 months (range, 3-85 months). RLNSS is a safe, feasible, effective and minimally invasive procedure to manage large RAML in selected patients.

  14. The Application of Infrared Thermographic Inspection Techniques to the Space Shuttle Thermal Protection System

    NASA Technical Reports Server (NTRS)

    Cramer, K. E.; Winfree, W. P.

    2005-01-01

    The Nondestructive Evaluation Sciences Branch at NASA's Langley Research Center has been actively involved in the development of thermographic inspection techniques for more than 15 years. Since the Space Shuttle Columbia accident, NASA has focused on the improvement of advanced NDE techniques for the Reinforced Carbon-Carbon (RCC) panels that comprise the orbiter's wing leading edge. Various nondestructive inspection techniques have been used in the examination of the RCC, but thermography has emerged as an effective inspection alternative to more traditional methods. Thermography is a non-contact inspection method, as compared to ultrasonic techniques, which typically require the use of a coupling medium between the transducer and material. Like radiographic techniques, thermography can be used to inspect large areas, but has the advantages of minimal safety concerns and the ability for single-sided measurements. Principal Component Analysis (PCA) has been shown to be effective for reducing thermographic NDE data. In a typical implementation of PCA, the eigenvectors are generated from the data set being analyzed. Although it is a powerful tool for enhancing the visibility of defects in thermal data, PCA can be computationally intense and time consuming when applied to the large data sets typical in thermography. Additionally, PCA can experience problems when very large defects are present (defects that dominate the field-of-view), since the calculation of the eigenvectors is then governed by the presence of the defect, not the "good" material. To increase the processing speed and to minimize the negative effects of large defects, an alternative method of PCA is being pursued in which a fixed set of eigenvectors, generated from an analytic model of the thermal response of the material under examination, is used to process the thermal data from the RCC materials. Details of a one-dimensional analytic model and a two-dimensional finite-element model are presented. An overview of the PCA process, as well as a quantitative signal-to-noise comparison of the results of performing both embodiments of PCA on thermographic data from various RCC specimens, is shown. Finally, a number of different applications of this technology to various RCC components are presented.
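
    A minimal sketch of the fixed-eigenvector variant (names and shapes are assumptions): the eigenvectors are computed once from modeled per-pixel thermal decay curves, and every measured sequence is then merely projected onto them, avoiding a per-dataset eigendecomposition and the influence of dominant defects:

        import numpy as np

        def fixed_pca(thermal_cube, model_curves, n_comp=4):
            # thermal_cube: (n_frames, h, w) measured sequence;
            # model_curves: (n_curves, n_frames) analytic thermal responses.
            M = model_curves - model_curves.mean(axis=0)
            _, _, Vt = np.linalg.svd(M, full_matrices=False)  # fixed eigenvectors
            f, h, w = thermal_cube.shape
            X = thermal_cube.reshape(f, -1).T                 # pixels x frames
            scores = (X - X.mean(axis=0)) @ Vt[:n_comp].T     # project, do not refit
            return scores.T.reshape(n_comp, h, w)             # component images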

  15. Application of Multiregressive Linear Models, Dynamic Kriging Models and Neural Network Models to Predictive Maintenance of Hydroelectric Power Systems

    NASA Astrophysics Data System (ADS)

    Lucifredi, A.; Mazzieri, C.; Rossi, M.

    2000-05-01

    Since the operational conditions of a hydroelectric unit can vary within a wide range, the monitoring system must be able to distinguish between variations of the monitored variable caused by changes in operating conditions and those due to the onset and progression of failures and misoperation. The paper aims to identify the best technique to adopt for the monitoring system. Three different methods have been implemented and compared. Two of them use statistical techniques: the first, linear multiple regression, expresses the monitored variable as a linear function of the process parameters (independent variables), while the second, the dynamic kriging technique, is a modified multiple linear regression representing the monitored variable as a linear combination of the process variables in such a way as to minimize the variance of the estimation error. The third is based on neural networks. Tests have shown that the monitoring system based on the kriging technique is not affected by some problems common to the other two models, e.g., the requirement of a large amount of data for tuning (both for training the neural network and for defining the optimum plane for the multiple regression), not only in the start-up phase but also after a trivial maintenance operation involving the substitution of machinery components that directly affect the observed variable, and the need for different models to describe satisfactorily the different operating ranges of the plant. The monitoring system based on the kriging statistical technique overcomes these difficulties: it does not require a large amount of data for tuning and is immediately operational (given two points, a third can be estimated immediately); in addition, the model follows the system without adapting itself to it. The results of the experimentation performed indicate that a model based on a neural network or on linear multiple regression is not optimal, and that a different approach is necessary to reduce the amount of work during the learning phase, using, when available, all the information stored during the initial phase of the plant to build the reference baseline and elaborating the raw information where appropriate. A mixed approach using the kriging statistical technique and neural network techniques could optimise the results.
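
    To make the contrast concrete, a kriging-style predictor can be written in a few lines: given past observations of the monitored variable at known operating points, the estimate at a new point is the covariance-weighted combination that minimizes the estimation variance. This is a generic Gaussian-kernel sketch, not the authors' model; the kernel and noise parameters are invented:

        import numpy as np

        def kriging_predict(X, y, X_new, length=1.0, noise=1e-6):
            # X: (n, d) past operating points; y: (n,) monitored variable;
            # X_new: (m, d) operating points at which to predict.
            def k(a, b):
                d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
                return np.exp(-0.5 * d2 / length ** 2)
            K = k(X, X) + noise * np.eye(len(X))
            w = np.linalg.solve(K, k(X, X_new))   # kriging weights
            return w.T @ y                        # variance-minimizing estimate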

  16. Coarse-coded higher-order neural networks for PSRI object recognition. [position, scale, and rotation invariant

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Reid, Max B.

    1993-01-01

    A higher-order neural network (HONN) can be designed to be invariant to changes in scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Consequently, fewer training passes and a smaller training set are required to learn to distinguish between objects. The size of the input field is limited, however, because of the memory required for the large number of interconnections in a fully connected HONN. By coarse coding the input image, the input field size can be increased to allow the larger input scenes required for practical object recognition problems. We describe a coarse coding technique and present simulation results illustrating its usefulness and its limitations. Our simulations show that a third-order neural network can be trained to distinguish between two objects in a 4096 x 4096 pixel input field independent of transformations in translation, in-plane rotation, and scale in fewer than ten passes through the training set. Furthermore, we empirically determine the limits of the coarse coding technique in the object recognition domain.
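
    One way to picture coarse coding (an illustrative scheme, not necessarily the authors' exact encoding): the large input image is replaced by a few low-resolution fields whose sampling grids are offset from one another, so their overlap preserves finer position information than any single field alone:

        import numpy as np

        def coarse_code(img, n_fields=2, offset=1):
            # Replace a large 2-D image with n_fields block-averaged views,
            # each shifted by a different sub-block offset.
            block = n_fields * offset
            fields = []
            for f in range(n_fields):
                s = np.roll(img, f * offset, axis=(0, 1))
                h = (s.shape[0] // block) * block
                w = (s.shape[1] // block) * block
                v = s[:h, :w].reshape(h // block, block, w // block, block)
                fields.append(v.mean(axis=(1, 3)))
            return fields   # total inputs shrink by roughly block**2 / n_fields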

  17. Discrete square root smoothing.

    NASA Technical Reports Server (NTRS)

    Kaminski, P. G.; Bryson, A. E., Jr.

    1972-01-01

    The basic techniques applied in the square root least squares and square root filtering solutions are applied to the smoothing problem. Both conventional and square root solutions are obtained by computing the filtered solutions, then modifying the results to include the effect of all measurements. A comparison of computation requirements indicates that the square root information smoother (SRIS) is more efficient than conventional solutions in a large class of fixed interval smoothing problems.

  18. Spot-shadowing optimization to mitigate damage growth in a high-energy-laser amplifier chain.

    PubMed

    Bahk, Seung-Whan; Zuegel, Jonathan D; Fienup, James R; Widmayer, C Clay; Heebner, John

    2008-12-10

    A spot-shadowing technique to mitigate damage growth in a high-energy laser is studied. Its goal is to minimize the energy loss and undesirable hot spots in intermediate planes of the laser. A nonlinear optimization algorithm solves for the complex fields required to mitigate damage growth in the National Ignition Facility amplifier chain. The method is generally applicable to any large fusion laser.

  19. Interposition vein graft for giant coronary aneurysm repair

    NASA Technical Reports Server (NTRS)

    Firstenberg, M. S.; Azoury, F.; Lytle, B. W.; Thomas, J. D.

    2000-01-01

    Coronary aneurysms in adults are rare. Surgical treatment is often concomitant to treating obstructing coronary lesions. However, the ideal treatment strategy is poorly defined. We present a case of successful treatment of a large coronary artery aneurysm with a reverse saphenous interposition vein graft. This modality offers important benefits over other current surgical and percutaneous techniques and should be considered as an option for patients requiring treatment for coronary aneurysms.

  20. Simplified microprocessor design for VLSI control applications

    NASA Technical Reports Server (NTRS)

    Cameron, K.

    1991-01-01

    A design technique for microprocessors combining the simplicity of reduced instruction set computers (RISCs) with the richer instruction sets of complex instruction set computers (CISCs) is presented. These processors utilize the pipelined instruction decode and datapaths common to RISCs. Instruction-invariant data processing sequences which transparently support complex addressing modes permit the formulation of simple control circuitry. Compact implementations are possible since neither complicated controllers nor large register sets are required.

  1. Data Selection for Within-Class Covariance Estimation

    DTIC Science & Technology

    2016-09-08

    NIST evaluations to train the within- class and across-class covariance matrices required by these techniques, little attention has been paid to the...multiple utterances from a large population of speakers. Fortunately, participants in NIST evaluations have access to a repository of legacy data from...utterances chosen from previous NIST evaluations. Training data for the UBM and T-matrix was obtained from the NIST Switchboard 2 phases 2-5 and

  2. Unfurlable satellite antennas - A review

    NASA Technical Reports Server (NTRS)

    Roederer, Antoine G.; Rahmat-Samii, Yahia

    1989-01-01

    A review of unfurlable satellite antennas is presented. Typical application requirements for future space missions are first outlined. Then, U.S. and European mesh and inflatable antenna concepts are described. Precision deployables using rigid panels or petals are not included in the survey. RF modeling and performance analysis of gored or faceted mesh reflector antennas are then reviewed. Finally, both on-ground and in-orbit RF test techniques for large unfurlable antennas are discussed.

  3. Process-Based Expansion and Neural Differentiation of Human Pluripotent Stem Cells for Transplantation and Disease Modeling

    PubMed Central

    Stover, Alexander E.; Brick, David J.; Nethercott, Hubert E.; Banuelos, Maria G.; Sun, Lei; O’Dowd, Diane K.; Schwartz, Philip H.

    2014-01-01

    Robust strategies for developing patient-specific, human, induced pluripotent stem cell (iPSC)-based therapies of the brain require an ability to derive large numbers of highly defined neural cells. Recent progress in iPSC culture techniques includes partial-to-complete elimination of feeder layers, use of defined media, and single-cell passaging. However, these techniques still require embryoid body formation or coculture for differentiation into neural stem cells (NSCs). In addition, none of the published methodologies has employed all of the advances in a single culture system. Here we describe a reliable method for long-term, single-cell passaging of PSCs using a feeder-free, defined culture system that produces confluent, adherent PSCs that can be differentiated into NSCs. To provide a basis for robust quality control, we have devised a system of cellular nomenclature that describes an accurate genotype and phenotype of the cells at specific stages in the process. We demonstrate that this protocol allows for the efficient, large-scale, cGMP-compliant production of transplantable NSCs from all lines tested. We also show that NSCs generated from iPSCs produced with the process described are capable of forming both glia defined by their expression of S100β and neurons that fire repetitive action potentials. PMID:23893392

  4. Novel diamond cells for neutron diffraction using multi-carat CVD anvils.

    PubMed

    Boehler, R; Molaison, J J; Haberl, B

    2017-08-01

    Traditionally, neutron diffraction at high pressure has been severely limited in pressure because low neutron flux required large sample volumes and therefore large volume presses. At the high-flux Spallation Neutron Source at the Oak Ridge National Laboratory, we have developed new, large-volume diamond anvil cells for neutron diffraction. The main features of these cells are multi-carat, single crystal chemical vapor deposition diamonds, very large diffraction apertures, and gas membranes to accommodate pressure stability, especially upon cooling. A new cell has been tested for diffraction up to 40 GPa with an unprecedented sample volume of ∼0.15 mm³. High quality spectra were obtained in 1 h for crystalline Ni and in ∼8 h for disordered glassy carbon. These new techniques will open the way for routine megabar neutron diffraction experiments.

  5. Mining large heterogeneous data sets in drug discovery.

    PubMed

    Wild, David J

    2009-10-01

    Increasingly, effective drug discovery involves the searching and data mining of large volumes of information from many sources covering the domains of chemistry, biology and pharmacology, amongst others. This has led to a proliferation of databases and data sources relevant to drug discovery. This paper provides a review of the publicly available large-scale databases relevant to drug discovery, describes the kinds of data mining approaches that can be applied to them, and discusses recent work in integrative data mining that looks for associations that span multiple sources, including the use of Semantic Web techniques. The future of mining large data sets for drug discovery requires intelligent, semantic aggregation of information from all of the data sources described in this review, along with the application of advanced methods such as intelligent agents and inference engines in client applications.

  6. Advanced Automation for Ion Trap Mass Spectrometry-New Opportunities for Real-Time Autonomous Analysis

    NASA Technical Reports Server (NTRS)

    Palmer, Peter T.; Wong, C. M.; Salmonson, J. D.; Yost, R. A.; Griffin, T. P.; Yates, N. A.; Lawless, James G. (Technical Monitor)

    1994-01-01

    The utility of MS/MS for both target compound analysis and the structure elucidation of unknowns has been described in a number of references. A broader acceptance of this technique has not yet been realized, as it requires large, complex, and costly instrumentation which has not been competitive with more conventional techniques. Recent advancements in ion trap mass spectrometry promise to change this situation. Although the ion trap's small size, sensitivity, and ability to perform multiple stages of mass spectrometry have made it eminently suitable for on-line, real-time monitoring applications, advanced automation techniques are required to make these capabilities more accessible to non-experts. Towards this end we have developed custom software for the design and implementation of MS/MS experiments. This software allows the user to take full advantage of the ion trap's versatility with respect to ionization techniques, scan modes, and ion accumulation/ejection methods. Additionally, expert system software has been developed for autonomous target compound analysis. This software has been linked to ion trap control software and a commercial data system to bring all of the steps in the analysis cycle under control of the expert system. These software development efforts and their utilization for a number of trace analysis applications will be described.

  7. A new communications technique for the nonvocal person, using the Apple II Computer.

    PubMed

    Seamone, W

    1982-01-01

    The purpose of this paper is to describe a technique for nonvocal personal communication for the severely handicapped person, using the Apple II computer system and standard commercially available software diskettes (VisiCalc). The user's input, in a pseudo-Morse code, is generated via minute chin motions or limited finger motions applied to a suitably configured two-switch device, and entered via the JHU/APL Morse code interface card. The commands and features of the program's row-column matrix, originally intended and widely used for financial management, are used here to call up and modify a large array of stored sentences which can be useful in personal communication. It is not known at this time whether the system is in fact cost-effective for the sole purpose of nonvocal communication, since system tradeoff studies have not been made relative to other techniques. However, in some instances an Apple computer may already be available for other purposes at the institution or in the home, and the system described could simply be another use of that personal computer. In any case, the system clearly does not meet the requirement of portability. No special components (except for the JHU/APL Morse interface card) and no special programming experience are required to duplicate the communications technique described.
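
    A toy sketch of the pseudo-Morse decoding step (the code table below is a hypothetical subset for illustration; the JHU/APL card's actual encoding is not given in this record):

```python
# Hypothetical code table; 'd' = dot switch, 'D' = dash switch, ' ' = letter gap
MORSE = {".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
         "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J"}

def decode(events):
    # Accumulate dots/dashes until a gap, then look the letter up
    out, current = [], ""
    for e in events + " ":
        if e == " ":
            if current:
                out.append(MORSE.get(current, "?"))
            current = ""
        else:
            current += "." if e == "d" else "-"
    return "".join(out)

print(decode("dD Ddd dd"))   # -> "ADI"
```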

  8. The analytical representation of viscoelastic material properties using optimization techniques

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1993-01-01

    This report presents a technique to model viscoelastic material properties with a function of the form of the Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be determined analytically through optimization techniques. This technique is employed in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was utilized to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. This technique has demonstrated the capability to use small or large data sets and to use data sets that have uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
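
    A minimal sketch of this regression-through-optimization idea, using SciPy in place of the commercial optimizer named above: all Prony constants, including the exponential time constants, are free variables of a single nonlinear least-squares fit (the two-term series and synthetic data below are our assumptions).

```python
import numpy as np
from scipy.optimize import least_squares

def prony(t, params):
    # Two-term Prony series: E(t) = e_inf + e1*exp(-t/tau1) + e2*exp(-t/tau2)
    e_inf, e1, tau1, e2, tau2 = params
    return e_inf + e1 * np.exp(-t / tau1) + e2 * np.exp(-t / tau2)

# Synthetic relaxation data standing in for material test measurements
t = np.logspace(-2, 3, 40)
data = prony(t, [1.0, 4.0, 0.5, 2.0, 50.0]) * (1 + 0.01 * np.random.randn(t.size))

# Optimize ALL constants at once, instead of fixing the tau_i and
# solving only a linear least-squares problem for the coefficients
fit = least_squares(lambda p: prony(t, p) - data,
                    x0=[0.5, 1.0, 0.1, 1.0, 10.0],
                    bounds=(1e-6, np.inf))
print(fit.x)
```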

  9. Evaluation of ultra-low background materials for uranium and thorium using ICP-MS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoppe, E. W.; Overman, N. R.; LaFerriere, B. D.

    2013-08-08

    An increasing number of physics experiments require low background materials for their construction. The presence of uranium and thorium and their progeny in these materials presents a variety of unwanted background sources for these experiments. The sensitivity of the experiments continues to drive the necessary levels of detection ever lower as well. This requirement for greater sensitivity has rendered direct radioassay impractical in many cases, requiring large quantities of material, frequently many kilograms, and prolonged counting times, often months. Other assay techniques have been employed, such as neutron activation analysis, but this requires access to expensive facilities and instrumentation and can be further complicated and delayed by the formation of unwanted radionuclides. Inductively coupled plasma mass spectrometry (ICP-MS) is a useful tool, and recent advancements have increased its sensitivity, particularly in the elemental high mass range of U and Th. Unlike direct radioassay, ICP-MS is a destructive technique, since it requires the sample to be in liquid form, which is aspirated into a high-temperature plasma. But it benefits in that it usually requires a very small sample, typically about a gram. This paper discusses how a variety of low background materials such as copper, polymers, and fused silica are made amenable to ICP-MS assay and how the arduous task of maintaining low backgrounds of U and Th is achieved.

  10. Evaluation of Ultra-Low Background Materials for Uranium and Thorium Using ICP-MS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoppe, Eric W.; Overman, Nicole R.; LaFerriere, Brian D.

    2013-08-08

    An increasing number of physics experiments require low background materials for their construction. The presence of uranium and thorium and their progeny in these materials presents a variety of unwanted background sources for these experiments. The sensitivity of the experiments continues to drive the necessary levels of detection ever lower as well. This requirement for greater sensitivity has rendered direct radioassay impractical in many cases, requiring large quantities of material, frequently many kilograms, and prolonged counting times, often months. Other assay techniques have been employed, such as neutron activation analysis, but this requires access to expensive facilities and instrumentation and can be further complicated and delayed by the formation of unwanted radionuclides. Inductively coupled plasma mass spectrometry (ICP-MS) is a useful tool, and recent advancements have increased its sensitivity, particularly in the elemental high mass range of U and Th. Unlike direct radioassay, ICP-MS is a destructive technique, since it requires the sample to be in liquid form, which is aspirated into a high-temperature plasma. But it benefits in that it usually requires a very small sample, typically about a gram. Here we discuss how a variety of low background materials such as copper, polymers, and fused silica are made amenable to ICP-MS assay and how the arduous task of maintaining low backgrounds of U and Th is achieved.

  11. Results of microsurgical treatment of large and giant ICA aneurysms using the retrograde suction decompression (RSD) technique: series of 92 patients.

    PubMed

    Eliava, Shalva S; Filatov, Yuri M; Yakovlev, Sergei B; Shekhtman, Oleg D; Kheireddin, Ali S; Sazonov, Ilya A; Sazonova, Olga B; Okishev, Dmitry N

    2010-06-01

    Microsurgical treatment of large and giant paraclinoid internal carotid artery (ICA) aneurysms often requires the use of the retrograde suction decompression (RSD) technique to facilitate clipping. Surgical results, functional outcomes at discharge, and technique limitations based on a single-institution series are presented. Between 1996 and 2009, eighty-three consecutive patients (19 to 68 years, mean 45.5 ± 9.9 years), predominantly women (69 women and 14 men), with large (23 patients, 27.7%) or giant (60 patients, 72.3%) paraclinoid aneurysms were surgically treated with the RSD technique, performed via the neck route (62 patients, 74.4%) or, later in the series, by endovascular means (21 patients, 25.3%). Patients were admitted after hemorrhage (48 patients, 57.9%), a pseudotumor course (28 patients, 33.7%), mixed symptoms (5 patients, 6%), or asymptomatic (2 patients, 2.4%). In most RSD surgeries (90.4%) aneurysms were successfully excluded: the neck was clipped in 57 patients (68.7%) or clipping with ICA reconstruction was achieved in 18 patients (21.7%). In six patients aneurysms were wrapped with glue (7.2%), trapped in one patient (1.2%), and in one patient ICA balloon deconstruction was performed (1.2%). Good or excellent results (Glasgow Outcome Scale scores 4-5) at discharge were achieved in 69 patients (83.1%), 11 patients (13.3%) remained severely disabled (Glasgow Outcome Scale 3), and 3 patients died (3.6%). Surgical clipping with the RSD method remains a treatment of choice with acceptable outcomes for patients not amenable to endovascular treatment. Copyright © 2010 Elsevier Inc. All rights reserved.

  12. Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?

    NASA Technical Reports Server (NTRS)

    Lum, Karen; Hihn, Jairus; Menzies, Tim

    2006-01-01

    While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models, because of the large variance problem inherent in cost data and because they include far more effort multipliers than the data support. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem, a leading cause of cost model brittleness and instability.
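
    One concrete form such a rejection rule could take is sketched below: backward elimination of effort multipliers under cross-validated error. This is illustrative only (synthetic data, scikit-learn), not the specific procedure used in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def cv_error(X, y, cols):
    # Cross-validated mean squared error of a linear effort model
    return -cross_val_score(LinearRegression(), X[:, cols], y,
                            scoring="neg_mean_squared_error", cv=5).mean()

rng = np.random.default_rng(2)
n, k = 60, 10                       # few records, many candidate multipliers
X = rng.normal(size=(n, k))
effort = 2.0 * X[:, 0] - X[:, 1] + rng.normal(0, 0.5, n)   # only 2 matter

keep = list(range(k))
best = cv_error(X, effort, keep)
improved = True
while improved and len(keep) > 1:
    improved = False
    for v in list(keep):            # drop any variable that lowers CV error
        trial = [c for c in keep if c != v]
        err = cv_error(X, effort, trial)
        if err < best:
            best, keep, improved = err, trial, True
print(keep, best)                   # typically keeps only columns 0 and 1
```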

  13. A quantitative image cytometry technique for time series or population analyses of signaling networks.

    PubMed

    Ozaki, Yu-ichi; Uda, Shinsuke; Saito, Takeshi H; Chung, Jaehoon; Kubota, Hiroyuki; Kuroda, Shinya

    2010-04-01

    Modeling of cellular functions on the basis of experimental observation is increasingly common in the field of cellular signaling. However, such modeling requires a large amount of quantitative data of signaling events with high spatio-temporal resolution. A novel technique which allows us to obtain such data is needed for systems biology of cellular signaling. We developed a fully automatable assay technique, termed quantitative image cytometry (QIC), which integrates a quantitative immunostaining technique and a high precision image-processing algorithm for cell identification. With the aid of an automated sample preparation system, this device can quantify protein expression, phosphorylation and localization with subcellular resolution at one-minute intervals. The signaling activities quantified by the assay system showed good correlation with, as well as comparable reproducibility to, western blot analysis. Taking advantage of the high spatio-temporal resolution, we investigated the signaling dynamics of the ERK pathway in PC12 cells. The QIC technique appears as a highly quantitative and versatile technique, which can be a convenient replacement for the most conventional techniques including western blot, flow cytometry and live cell imaging. Thus, the QIC technique can be a powerful tool for investigating the systems biology of cellular signaling.

  14. Novel Multiplexing Technique for Detector and Mixer Arrays

    NASA Technical Reports Server (NTRS)

    Karasik, Boris S.; McGrath, William R.

    2001-01-01

    Future submillimeter and far-infrared space telescopes will require large-format (many 1000's of elements) imaging detector arrays to perform state-of-the-art astronomical observations. A crucial issue related to a focal plane array is a readout scheme which is compatible with large numbers of cryogenically-cooled (typically < 1 K) detector elements. When the number of elements becomes of the order of thousands, the physical layout for individual readout amplifiers becomes nearly impossible to realize for practical systems. Another important concern is the large number of wires leading to a 0.1-0.3 K platform. In the case of superconducting transition edge sensors (TES), a scheme for time-division multiplexing of SQUID read-out amplifiers has recently been demonstrated. In this scheme the number of SQUIDs is equal to the number (N) of detectors, but only one SQUID is turned on at a time. The SQUIDs are connected in series in each column of the array, so the number of wires leading to the amplifiers can be reduced, but it is still of the order of N. Another approach uses a frequency-domain multiplexing scheme for the bolometer array. The bolometers are biased with ac currents whose frequencies are individual to each element and are much higher than the bolometer bandwidth. The output signals are connected in series in a summing loop which is coupled to a single SQUID amplifier. The total number of channels depends on the ratio between the SQUID bandwidth and the bolometer bandwidth and can be at least 100 according to the authors. An important concern about this technique is the contribution of out-of-band Johnson noise, which grows by a factor of √N for each frequency channel. We propose a novel solution for large-format arrays based on the Hadamard transform coding technique, which requires only one amplifier to read out an entire array of potentially many 1000's of elements and uses approximately 10 wires between the cold stage and the room-temperature electronics. This can significantly reduce the complexity of the readout circuits.
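
    As a toy illustration of the proposed readout, the sketch below encodes N detector outputs with Hadamard (±1) weight patterns onto a single summed channel and decodes them with the transpose. It assumes static, noiseless signals, whereas the real system would apply the weight patterns sequentially in hardware; the point is that one amplifier measures all N weighted sums.

```python
import numpy as np
from scipy.linalg import hadamard

N = 8
H = hadamard(N)                 # rows = successive +/-1 weight patterns
signals = np.random.rand(N)     # instantaneous detector outputs
readout = H @ signals           # N sequential measurements of ONE amplifier
decoded = H.T @ readout / N     # orthogonality recovers each detector
print(np.allclose(decoded, signals))
```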

  15. Integration of Mirror Design with Suspension System using NASA's New Mirror Modeling Software

    NASA Technical Reports Server (NTRS)

    Arnold, William; Bevan, Ryan M.; Stahl, Philip

    2013-01-01

    Advances in mirror fabrication are making very large space-based telescopes possible. In many applications, only monolithic mirrors meet the performance requirements. The existing and near-term planned heavy launch vehicles place a premium on the lowest possible mass, and available and planned payload shroud sizes limit near-term designs to 4-meter-class mirrors. Practical 8-meter and larger designs could encourage planners to include larger shrouds if it can be proven that such mirrors can be manufactured. These two factors, lower mass and larger mirrors, present the classic optimization problem. There is a practical upper limit to how large a mirror can be supported by a purely kinematic mount system and be launched. This paper shows how the suspension system and mirror blank need to be designed simultaneously. We also explore the concept of auxiliary support systems, which act only during launch and disengage on orbit; we define the required characteristics of these systems and show how they can substantially reduce the mirror mass. The AMTD project is developing and maturing the processes for future replacements for HUBBLE, creating the design tools and validating the methods and techniques necessary to manufacture, test and launch extremely large optical missions. This paper uses the AMTD 4 meter "design point" as an illustration of the typical use of the modeler in generating the multiple models of mirror and suspension systems used during the conceptual design phase of most projects. The influence of hexapod geometry, mirror depth, cell size and construction techniques (Exelsis Deep Core Low Temperature Fusion (c) versus Corning Frit Bonded (c) versus Schott Pocket Milled Zerodur (c) in this particular study) is evaluated. Due to space and time considerations, only snippets of the study can be presented in this paper. Such designs are made increasingly practical by advances in manufacturing techniques for lightweight mirrors, such as EXELSIS deep core low temperature fusion, Corning's continued improvements in the frit bonding process, the ability to cast large complex designs, and water-jet and conventional diamond machining.

  16. Treatment of Wide-Neck Bifurcation Aneurysm Using "WEB Device Waffle Cone Technique".

    PubMed

    Mihalea, Cristian; Caroff, Jildaz; Rouchaud, Aymeric; Pescariu, Sorin; Moret, Jacques; Spelle, Laurent

    2018-05-01

    The endovascular treatment of wide-neck bifurcation aneurysms can be challenging and often requires the use of adjunctive techniques and devices. We report our first experience of using a waffle-cone technique adapted to the Woven Endoluminal Bridge (WEB) device in a large-neck basilar tip aneurysm, suitable in cases where the use of Y stenting or other techniques is limited due to anatomic restrictions. The procedure was complete, and angiographic occlusion of the aneurysm was achieved 24 hours post treatment, as confirmed by digital subtraction angiography. No complications occurred. The case reported here was not suitable for Y stenting or deployment of the WEB device alone, due to the small caliber of both posterior cerebral arteries and their origin at the neck level. The main advantage of this technique is that both devices have a controlled detachment system and are fully independent. To our knowledge, this technique has not been reported previously and this modality of treatment has never been described in the literature. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Evaluation of Clipping Based Iterative PAPR Reduction Techniques for FBMC Systems

    PubMed Central

    Kollár, Zsolt

    2014-01-01

    This paper investigates filter bank multicarrier (FBMC), a multicarrier modulation technique exhibiting an extremely low adjacent channel leakage ratio (ACLR) compared to the conventional orthogonal frequency division multiplexing (OFDM) technique. The low ACLR of the transmitted FBMC signal makes it especially favorable in cognitive radio applications, where strict requirements are posed on out-of-band radiation. A large dynamic range resulting in a high peak-to-average power ratio (PAPR) is characteristic of all sorts of multicarrier signals. The advantageous spectral properties of the high-PAPR FBMC signal are significantly degraded if nonlinearities are present in the transceiver chain. Spectral regrowth may appear, causing harmful interference in the neighboring frequency bands. This paper presents novel clipping-based PAPR reduction techniques, evaluated and compared by simulations and measurements, with an emphasis on spectral aspects. The paper gives an overall comparison of PAPR reduction techniques, focusing on reducing the dynamic range of FBMC signals without increasing out-of-band radiation. An overview is presented of transmitter-oriented techniques employing baseband clipping, which can maintain the system performance with a desired bit error rate (BER). PMID:24558338
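
    A generic clipping-and-filtering loop of the kind evaluated in the paper, sketched on an OFDM-like multicarrier signal for simplicity (the clipping threshold, iteration count, and subcarrier layout are assumed values; a true FBMC chain would clip the synthesized filter-bank output instead):

```python
import numpy as np

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def iterative_clip(x, clip_db=6.0, iters=4, n_active=64):
    # Clip the envelope, then suppress the out-of-band regrowth by
    # zeroing all bins outside the active subcarriers, and repeat
    n = x.size
    for _ in range(iters):
        a = np.sqrt(10 ** (clip_db / 10) * np.mean(np.abs(x) ** 2))
        mag = np.abs(x)
        x = np.where(mag > a, a * x / np.maximum(mag, 1e-12), x)
        X = np.fft.fft(x)
        X[n_active // 2: n - n_active // 2] = 0
        x = np.fft.ifft(X)
    return x

# 64 random QPSK subcarriers on a 512-point grid
sym = np.exp(1j * np.pi / 2 * np.random.randint(0, 4, 64))
sig = np.fft.ifft(np.concatenate([sym[:32], np.zeros(448), sym[32:]]))
print(papr_db(sig), papr_db(iterative_clip(sig)))   # PAPR drops by a few dB
```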

  18. Development of transition edge sensors with rf-SQUID based multiplexing system for the HOLMES experiment

    NASA Astrophysics Data System (ADS)

    Puiu, A.; Becker, D.; Bennett, D.; Faverzani, M.; Ferri, E.; Fowler, J.; Gard, J.; Hays-Wehle, J.; Hilton, G.; Giachero, A.; Maino, M.; Mates, J.; Nucciotti, A.; Schmidt, D.; Swetz, D.; Ullom, J.; Vale, L.

    2017-09-01

    Measuring the neutrino mass is one of the most compelling issues in particle physics. HOLMES is an experiment funded by the European Research Council for a direct measurement of the neutrino mass. HOLMES will perform a precise measurement of the end point of the electron capture decay spectrum of 163Ho in order to extract information on the neutrino mass with a sensitivity as low as 1 eV. HOLMES, in its final configuration, will deploy a 1000-pixel array of low temperature microcalorimeters: each calorimeter consists of an absorber, where the Ho atoms will be implanted, coupled to a transition edge sensor thermometer. The detectors will be kept at a working temperature of ˜70 mK using a dilution refrigerator. In order to gather the required 3 × 10¹³ events in a three-year-long data taking with a pile-up fraction as low as 10⁻⁴, the detectors must fulfill rather demanding speed and resolution requirements, i.e. 10 µs rise time and 4 eV energy resolution. Ensuring such performance with an efficient readout technique for a very large detector array kept at low temperature inside a cryostat is no trivial matter: at the moment, the most appealing readout technique applicable to large arrays of transition edge sensors is rf-SQUID multiplexing. It is based on the use of rf-SQUIDs as input devices with flux ramp modulation for linearisation purposes; the rf-SQUID is then coupled to a superconducting λ/4 resonator in the GHz range, and the modulated signal is finally read out using the homodyne technique.
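
    The flux-ramp linearisation can be illustrated numerically: within one ramp period the SQUID response is approximately sinusoidal, and the detector signal appears as a phase shift of that carrier, recoverable by quadrature demodulation. The sample rate, ramp rate and modulation depth below are assumed round numbers, not HOLMES parameters.

```python
import numpy as np

fs, f_ramp, n_phi0 = 1e6, 10e3, 4   # sample rate, ramp repetition, Phi0/ramp
n = int(fs / f_ramp)                # samples per ramp period
f_c = n_phi0 * f_ramp               # effective carrier frequency
t = np.arange(n) / fs

signal_phase = 0.7                  # stand-in for the detector signal
y = np.sin(2 * np.pi * f_c * t + signal_phase)   # idealized SQUID response

# Quadrature demodulation over one ramp recovers the phase
i = y @ np.cos(2 * np.pi * f_c * t)
q = y @ np.sin(2 * np.pi * f_c * t)
print(np.arctan2(i, q))             # ~0.7
```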

  19. Active edge control in the precessions polishing process for manufacturing large mirror segments

    NASA Astrophysics Data System (ADS)

    Li, Hongyu; Zhang, Wei; Walker, David; Yu, Guoyu

    2014-09-01

    The segmentation of the primary mirror is the only promising solution for building the next generation of ground telescopes. However, manufacturing segmented mirrors presents its own challenges. The edge mis-figure impacts directly on the telescope's scientific output, and the 'edge effect' dominates the achievable polishing precision. Edge control is therefore regarded as one of the most difficult technical issues in segment production and needs to be addressed urgently. This paper reports an active edge control technique for mirror segment fabrication using the Precessions polishing technique. The strategy requires that a large spot be selected on the bulk area for fast polishing, while a small spot is used for edge figuring. This can be performed by tool lift and by optimizing the dwell time to compensate for non-uniform material removal at the edge zone, which requires accurate and stable edge tool influence functions. To obtain the full tool influence function at the edge, we demonstrated in previous work a novel hybrid-measurement method which uses both simultaneous phase interferometry and profilometry. In this paper, the edge effect under bonnet-tool polishing is investigated. The pressure distribution is analyzed by means of finite element analysis (FEA), and the shape of the edge tool influence functions is predicted according to Preston's equation. With this help, the multiple process parameters in the edge zone are optimized. This is demonstrated on a 200 mm cross-corners hexagonal part with a result of PV less than 200 nm over the entire surface.
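
    A minimal sketch of the Preston-equation reasoning, with an assumed Preston coefficient and a toy pressure roll-off standing in for the FEA result: because the contact pressure collapses toward the part edge, the dwell time needed for a fixed removal depth diverges there, which is exactly the non-uniformity the dwell optimization must compensate.

```python
import numpy as np

# Preston's equation: removal rate dz/dt = k * p * v
k = 1e-7                                  # Preston coefficient, mm^2/N (assumed)
v = 300.0                                 # tool surface speed, mm/s (assumed)
x = np.linspace(0, 10, 101)               # distance from part edge, mm
p = 0.05 * (1 - np.exp(-x / 1.5))         # toy pressure roll-off at the edge, MPa

tif = k * p * v                           # removal rate, mm/s
dwell = 200e-6 / np.maximum(tif, 1e-12)   # dwell for 200 nm removal depth
print(dwell[[1, 50, 100]])                # dwell grows sharply near the edge
```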

  20. Complexity Optimization and High-Throughput Low-Latency Hardware Implementation of a Multi-Electrode Spike-Sorting Algorithm

    PubMed Central

    Dragas, Jelena; Jäckel, David; Hierlemann, Andreas; Franke, Felix

    2017-01-01

    Reliable real-time low-latency spike sorting with large data throughput is essential for studies of neural network dynamics and for brain-machine interfaces (BMIs), in which the stimulation of neural networks is based on the networks' most recent activity. However, the majority of existing multi-electrode spike-sorting algorithms are unsuited for processing high quantities of simultaneously recorded data. Recording from large neuronal networks using large high-density electrode sets (thousands of electrodes) imposes high demands on the data-processing hardware regarding computational complexity and data transmission bandwidth; this, in turn, entails demanding requirements in terms of chip area, memory resources and processing latency. This paper presents computational complexity optimization techniques, which facilitate the use of spike-sorting algorithms in large multi-electrode-based recording systems. The techniques are then applied to a previously published algorithm, on its own, unsuited for large electrode set recordings. Further, a real-time low-latency high-performance VLSI hardware architecture of the modified algorithm is presented, featuring a folded structure capable of processing the activity of hundreds of neurons simultaneously. The hardware is reconfigurable “on-the-fly” and adaptable to the nonstationarities of neuronal recordings. By transmitting exclusively spike time stamps and/or spike waveforms, its real-time processing offers the possibility of data bandwidth and data storage reduction. PMID:25415989

  1. Complexity optimization and high-throughput low-latency hardware implementation of a multi-electrode spike-sorting algorithm.

    PubMed

    Dragas, Jelena; Jackel, David; Hierlemann, Andreas; Franke, Felix

    2015-03-01

    Reliable real-time low-latency spike sorting with large data throughput is essential for studies of neural network dynamics and for brain-machine interfaces (BMIs), in which the stimulation of neural networks is based on the networks' most recent activity. However, the majority of existing multi-electrode spike-sorting algorithms are unsuited for processing high quantities of simultaneously recorded data. Recording from large neuronal networks using large high-density electrode sets (thousands of electrodes) imposes high demands on the data-processing hardware regarding computational complexity and data transmission bandwidth; this, in turn, entails demanding requirements in terms of chip area, memory resources and processing latency. This paper presents computational complexity optimization techniques, which facilitate the use of spike-sorting algorithms in large multi-electrode-based recording systems. The techniques are then applied to a previously published algorithm, on its own, unsuited for large electrode set recordings. Further, a real-time low-latency high-performance VLSI hardware architecture of the modified algorithm is presented, featuring a folded structure capable of processing the activity of hundreds of neurons simultaneously. The hardware is reconfigurable “on-the-fly” and adaptable to the nonstationarities of neuronal recordings. By transmitting exclusively spike time stamps and/or spike waveforms, its real-time processing offers the possibility of data bandwidth and data storage reduction.

  2. Aluminum Mirror Coatings for UVOIR Telescope Optics Including the Far UV

    NASA Technical Reports Server (NTRS)

    Balasubramanian, Kunjithapatha; Hennessy, John; Raouf, Nasrat; Nikzad, Shouleh; Ayala, Michael; Shaklan, Stuart; Scowen, Paul; Del Hoyo, Javier; Quijada, Manuel

    2015-01-01

    NASA's Cosmic Origins (COR) Program identified the development of high-reflectivity mirror coatings for large astronomical telescopes, particularly for the far ultraviolet (FUV) part of the spectrum, as a key technology requiring significant materials research and process development. In this paper we describe the challenges and accomplishments in producing stable, high-reflectance aluminum mirror coatings with conventional evaporation and advanced Atomic Layer Deposition (ALD) techniques. We present the current status of process development, with reflectance of approx. 55 to 80% in the FUV achieved with little or no degradation over a year. Keywords: Large telescope optics, Aluminum mirror, far UV astrophysics, ALD, coating technology development.

  3. NASA Out-of-Autoclave Process Technology Development

    NASA Technical Reports Server (NTRS)

    Johnston, Norman, J.; Clinton, R. G., Jr.; McMahon, William M.

    2000-01-01

    Polymer matrix composites (PMCs) will play a significant role in the construction of large reusable launch vehicles (RLVs), mankind's future major access to low earth orbit and the international space station. PMCs are lightweight and offer attractive economies of scale and automated fabrication methodology. Fabrication of large RLV structures will require non-autoclave methods which have yet to be matured, including (1) thermoplastic forming: heated-head robotic tape placement, sheet extrusion, pultrusion, molding and forming; (2) electron beam curing: bulk and ply-by-ply automated placement; and (3) RTM and VARTM. Research sponsored by NASA in industrial and NASA laboratories on automated placement techniques involving the first two categories will be presented.

  4. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
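
    A minimal sketch of why the adjoint is cheap, on a linear model problem (our toy, not a CFD code): the state u solves A u = B p and the output is J = c.T @ u, so one extra solve with the transposed operator yields dJ/dp for all parameters at once.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 1000                    # 50 states, 1000 design parameters
A = np.eye(n) * 4 + rng.standard_normal((n, n)) * 0.1
B = rng.standard_normal((n, m))
c = rng.standard_normal(n)
p = rng.standard_normal(m)

u = np.linalg.solve(A, B @ p)      # one state solve
lam = np.linalg.solve(A.T, c)      # ONE adjoint solve, independent of m
dJdp = B.T @ lam                   # sensitivities w.r.t. all 1000 parameters

# Spot-check against a forward finite difference in one parameter
eps, j = 1e-6, 7
p2 = p.copy(); p2[j] += eps
fd = (c @ np.linalg.solve(A, B @ p2) - c @ u) / eps
print(np.isclose(dJdp[j], fd))
```

    A forward-difference approach would instead need one additional state solve per parameter, i.e. 1000 solves here; that scaling is the point made above.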

  5. Protein Folding Using a Vortex Fluidic Device.

    PubMed

    Britton, Joshua; Smith, Joshua N; Raston, Colin L; Weiss, Gregory A

    2017-01-01

    Essentially all biochemistry and most molecular biology experiments require recombinant proteins. However, large, hydrophobic proteins typically aggregate into insoluble and misfolded species, and are directed into inclusion bodies. Current techniques to fold proteins recovered from inclusion bodies rely on denaturation followed by dialysis or rapid dilution. Such approaches can be time consuming, wasteful, and inefficient. Here, we describe rapid protein folding using a vortex fluidic device (VFD). This process uses mechanical energy introduced into thin films to rapidly and efficiently fold proteins. With the VFD in continuous flow mode, large volumes of protein solution can be processed per day with 100-fold reductions in both folding times and buffer volumes.

  6. Aerodynamic characteristics at high angles of attack

    NASA Technical Reports Server (NTRS)

    Chambers, J. R.

    1977-01-01

    An overview is presented of the aerodynamic inputs required for analysis of flight dynamics in the high-angle-of-attack regime wherein large-disturbance, nonlinear effects predominate. An outline of the presentation is presented. The discussion includes: (1) some important fundamental phenomena which determine to a large extent the aerodynamic characteristics of airplanes at high angles of attack; (2) static and dynamic aerodynamic characteristics near the stall; (3) aerodynamics of the spin; (4) test techniques used in stall/spin studies; (5) applications of aerodynamic data to problems in flight dynamics in the stall/spin area; and (6) the outlook for future research in the area.

  7. Efficient development and processing of thermal math models of very large space truss structures

    NASA Technical Reports Server (NTRS)

    Warren, Andrew H.; Arelt, Joseph E.; Lalicata, Anthony L.

    1993-01-01

    As the spacecraft moves along its orbit, the truss members are subjected to direct and reflected solar, albedo and planetary infrared (IR) heating, as well as IR heating and shadowing from other spacecraft components. This is a transient process with continuously changing heating loads and shadowing effects. The resulting nonuniform temperature distribution may cause nonuniform thermal expansion, deflection and stress in the truss elements, truss warping and thermal distortions. There are three challenges in the thermal-structural analysis of large truss structures. The first is the development of the thermal and structural math models, the second is model processing, and the third is the data transfer between the models. All three tasks require considerable time and computer resources because of the very large number of components involved. To address these challenges, a series of techniques for automated thermal math modeling and efficient processing of very large space truss structures was developed. In the process, the finite element and finite difference methods are interfaced. A very substantial reduction in the quantity of computations was achieved while assuring the desired accuracy of the results. The techniques are illustrated on the thermal analysis of a segment of the Space Station main truss.

  8. Pharmaceutical Raw Material Identification Using Miniature Near-Infrared (MicroNIR) Spectroscopy and Supervised Pattern Recognition Using Support Vector Machine

    PubMed Central

    Hsiung, Chang; Pederson, Christopher G.; Zou, Peng; Smith, Valton; von Gunten, Marc; O’Brien, Nada A.

    2016-01-01

    Near-infrared spectroscopy as a rapid and non-destructive analytical technique offers great advantages for pharmaceutical raw material identification (RMID) to fulfill the quality and safety requirements in pharmaceutical industry. In this study, we demonstrated the use of portable miniature near-infrared (MicroNIR) spectrometers for NIR-based pharmaceutical RMID and solved two challenges in this area, model transferability and large-scale classification, with the aid of support vector machine (SVM) modeling. We used a set of 19 pharmaceutical compounds including various active pharmaceutical ingredients (APIs) and excipients and six MicroNIR spectrometers to test model transferability. For the test of large-scale classification, we used another set of 253 pharmaceutical compounds comprised of both chemically and physically different APIs and excipients. We compared SVM with conventional chemometric modeling techniques, including soft independent modeling of class analogy, partial least squares discriminant analysis, linear discriminant analysis, and quadratic discriminant analysis. Support vector machine modeling using a linear kernel, especially when combined with a hierarchical scheme, exhibited excellent performance in both model transferability and large-scale classification. Hence, ultra-compact, portable and robust MicroNIR spectrometers coupled with SVM modeling can make on-site and in situ pharmaceutical RMID for large-volume applications highly achievable. PMID:27029624
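
    A sketch of the flat linear-SVM step on synthetic spectra; the data, band positions and parameters below are invented stand-ins for MicroNIR measurements, and the paper's full method additionally uses a hierarchical scheme.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
channels = 125                                  # MicroNIR-like channel count
X, y = [], []
for label, center in enumerate([30, 60, 90]):   # 3 toy "compounds"
    for _ in range(40):
        spec = rng.normal(0, 0.01, channels)    # baseline noise
        spec += np.exp(-0.5 * ((np.arange(channels) - center) / 8) ** 2)
        X.append(spec); y.append(label)
X, y = np.array(X), np.array(y)

# Linear-kernel SVM on standardized spectra
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
clf.fit(X[::2], y[::2])                   # train on half the samples
print(clf.score(X[1::2], y[1::2]))        # accuracy on the held-out half
```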

  9. Captured metagenomics: large-scale targeting of genes based on ‘sequence capture’ reveals functional diversity in soils

    PubMed Central

    Manoharan, Lokeshwaran; Kushwaha, Sandeep K.; Hedlund, Katarina; Ahrén, Dag

    2015-01-01

    Microbial enzyme diversity is a key to understanding many ecosystem processes. Whole metagenome sequencing (WMG) obtains information on functional genes, but it is costly and inefficient due to the large amount of sequencing required. In this study, we have applied a captured metagenomics technique for functional genes in soil microorganisms, as an alternative to WMG. Large-scale targeting of functional genes, coding for enzymes related to organic matter degradation, was applied to two agricultural soil communities through captured metagenomics. Captured metagenomics uses custom-designed, hybridization-based oligonucleotide probes that enrich functional genes of interest in metagenomic libraries, where only probe-bound DNA fragments are sequenced. The captured metagenomes were highly enriched with targeted genes while maintaining their target diversity, and their taxonomic distribution correlated well with traditional ribosomal sequencing. The captured metagenomes were highly enriched with genes related to organic matter degradation, at least five times more than similar, publicly available soil WMG projects. This target enrichment technique also preserves the functional representation of the soils, thereby facilitating comparative metagenomics projects. Here, we present the first study that applies the captured metagenomics approach at large scale, and this novel method allows deep investigations of central ecosystem processes by studying functional gene abundances. PMID:26490729

  10. System Learning via Exploratory Data Analysis: Seeing Both the Forest and the Trees

    NASA Astrophysics Data System (ADS)

    Habash Krause, L.

    2014-12-01

    As the amount of observational Earth and Space Science data grows, so does the need for learning and employing data analysis techniques that can extract meaningful information from those data. Space-based and ground-based data sources from all over the world are used to inform Earth and Space environment models. However, with such a large amount of data comes a need to organize those data in a way such that trends within the data are easily discernible. This can be tricky due to the interaction between physical processes that leads to partial correlation of variables or multiple interacting sources of causality. With the suite of Exploratory Data Analysis (EDA) data mining codes available at MSFC, we have the capability to analyze large, complex data sets and quantitatively separate fundamentally independent effects from consequential or derived effects. We have used these techniques to examine the accuracy of ionospheric climate models with respect to trends in ionospheric parameters and space weather effects. In particular, these codes have been used to 1) provide summary "at-a-glance" surveys of large data sets through categorization and/or evolution over time to identify trends, distribution shapes, and outliers, 2) discern the underlying "latent" variables which share common sources of causality, and 3) establish a new set of basis vectors by computing Empirical Orthogonal Functions (EOFs), which represent the maximum amount of variance for each principal component. Some of these techniques are easily implemented in the classroom using standard MATLAB functions, some of the more advanced applications require the statistical toolbox, and applications to unique situations require more sophisticated levels of programming. This paper presents an overview of the range of tools available and how they might be used for a variety of time-series Earth and Space Science data sets. Examples of feature recognition from both 1D and 2D (e.g. imagery) time-series data sets will be presented.
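
    For instance, the EOF computation in item 3 reduces to a singular value decomposition of the anomaly matrix. The sketch below is a generic Python implementation on toy data, not the MSFC codes themselves.

```python
import numpy as np

def eofs(data, k=3):
    # EOFs of a (time x space) matrix: remove the time mean, then the SVD
    # gives spatial patterns ordered by the variance each one explains
    anom = data - data.mean(axis=0)
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    variance_frac = s**2 / np.sum(s**2)
    return vt[:k], u[:, :k] * s[:k], variance_frac[:k]

# Toy field: two oscillating spatial patterns plus noise
t = np.linspace(0, 20, 200)[:, None]
x = np.linspace(0, 1, 50)[None, :]
field = np.sin(t) * np.sin(np.pi * x) + 0.3 * np.cos(3 * t) * np.cos(2 * np.pi * x)
field += 0.05 * np.random.randn(200, 50)

patterns, pcs, frac = eofs(field)
print(frac)          # leading EOFs capture most of the variance
```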

  11. Large-scale tomographic particle image velocimetry using helium-filled soap bubbles

    NASA Astrophysics Data System (ADS)

    Kühn, Matthias; Ehrenfried, Klaus; Bosbach, Johannes; Wagner, Claus

    2011-04-01

    To measure large-scale flow structures in air, a tomographic particle image velocimetry (tomographic PIV) system for measurement volumes of the order of one cubic metre is developed, which employs helium-filled soap bubbles (HFSBs) as tracer particles. The technique has several specific characteristics compared to most conventional tomographic PIV systems, which are usually applied to small measurement volumes. One of them is the spot lights on the HFSB tracers, which shift slightly when the direction of observation is altered. Further issues are the large particle-to-voxel ratio and the short focal length of the camera lenses used, which result in a noticeable variation of the magnification factor in the volume depth direction. Taking the specific characteristics of the HFSBs into account, the feasibility of our large-scale tomographic PIV system is demonstrated by showing that the calibration errors can be reduced to 0.1 pixels, as required. Further, an accurate and fast implementation of the multiplicative algebraic reconstruction technique, which calculates the weighting coefficients when needed instead of storing them, is discussed. The tomographic PIV system is applied to measure forced convection in a convection cell at a Reynolds number of 530 based on the inlet channel height and the mean inlet velocity. The sizes of the measurement volume and the interrogation volumes amount to 750 mm × 450 mm × 165 mm and 48 mm × 48 mm × 24 mm, respectively. Validation of the tomographic PIV technique employing HFSBs is further provided by comparing profiles of the mean velocity and of the root mean square velocity fluctuations to respective planar PIV data.
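
    A toy sketch of the MART update on a tiny system, computing each ray's weights on demand rather than storing them, as described above (geometry and relaxation factor invented for illustration):

```python
import numpy as np

def mart(weight_row, n_rays, n_vox, p, n_iter=50, mu=0.5):
    # v_j <- v_j * (p_i / (W v)_i) ** (mu * w_ij), looping over rays;
    # weight_row(i) returns row i of W when needed, so W is never stored
    v = np.ones(n_vox)
    for _ in range(n_iter):
        for i in range(n_rays):
            w = weight_row(i)
            proj = w @ v
            if proj > 0:
                v *= (p[i] / proj) ** (mu * w)
    return v

# 3 rays through 4 voxels; rows are generated on the fly
W = np.array([[1., 1., 0., 0.],
              [0., 1., 1., 0.],
              [0., 0., 1., 1.]])
true = np.array([0.1, 0.9, 0.2, 0.7])
p = W @ true
print(mart(lambda i: W[i], 3, 4, p))   # consistent with the projections
```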

  12. Fast Algorithms for Designing Unimodular Waveform(s) With Good Correlation Properties

    NASA Astrophysics Data System (ADS)

    Li, Yongzhe; Vorobyov, Sergiy A.

    2018-03-01

    In this paper, we develop new fast and efficient algorithms for designing single/multiple unimodular waveforms/codes with good auto- and cross-correlation or weighted correlation properties, which are highly desired in radar and communication systems. The waveform design is based on the minimization of the integrated sidelobe level (ISL) and weighted ISL (WISL) of waveforms. As the corresponding optimization problems can quickly grow to large scale with increasing code length and number of waveforms, the main issue becomes the development of fast large-scale optimization techniques. A further difficulty is that the corresponding optimization problems are non-convex while the required accuracy is high. Therefore, we formulate the ISL and WISL minimization problems as non-convex quartic optimization problems in the frequency domain, and then simplify them into quadratic problems by means of the majorization-minimization technique, one of the basic techniques for addressing large-scale and/or non-convex optimization problems. While designing our fast algorithms, we identify and exploit inherent algebraic structures in the objective functions to rewrite them in quartic forms and, in the case of WISL minimization, to derive an additional alternative quartic form which allows the quartic-quadratic transformation to be applied. Our algorithms are applicable to large-scale unimodular waveform design problems, as they are proved to have a lower or comparable computational burden (analyzed theoretically) and faster convergence speed (confirmed by comprehensive simulations) than the state-of-the-art algorithms. In addition, the waveforms designed by our algorithms demonstrate better correlation properties than their counterparts.
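
    For reference, the ISL objective itself is cheap to evaluate in the frequency domain; the sketch below computes it for a random-phase code and a chirp-like P4 code (our choice of example codes, not the paper's algorithms):

```python
import numpy as np

def isl(code):
    # Sum of squared aperiodic autocorrelation sidelobes, via a
    # zero-padded FFT: r = IFFT(|FFT(code, 2N)|^2) gives lags 0..N-1
    n = code.size
    f = np.fft.fft(code, 2 * n)
    r = np.fft.ifft(np.abs(f) ** 2)
    return 2 * np.sum(np.abs(r[1:n]) ** 2)   # factor 2 counts negative lags

n = 64
k = np.arange(n)
rand_code = np.exp(2j * np.pi * np.random.rand(n))   # random unimodular code
p4 = np.exp(1j * np.pi * k * (k - n) / n)            # P4 polyphase code
print(isl(rand_code), isl(p4))                       # P4 is far lower
```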

  13. Managing Large Scale Project Analysis Teams through a Web Accessible Database

    NASA Technical Reports Server (NTRS)

    O'Neil, Daniel A.

    2008-01-01

    Large scale space programs analyze thousands of requirements while mitigating safety, performance, schedule, and cost risks. These efforts involve a variety of roles with interdependent use cases and goals. For example, study managers and facilitators identify ground-rules and assumptions for a collection of studies required for a program or project milestone. Task leaders derive product requirements from the ground rules and assumptions and describe activities to produce needed analytical products. Disciplined specialists produce the specified products and load results into a file management system. Organizational and project managers provide the personnel and funds to conduct the tasks. Each role has responsibilities to establish information linkages and provide status reports to management. Projects conduct design and analysis cycles to refine designs to meet the requirements and implement risk mitigation plans. At the program level, integrated design and analysis cycles studies are conducted to eliminate every 'to-be-determined' and develop plans to mitigate every risk. At the agency level, strategic studies analyze different approaches to exploration architectures and campaigns. This paper describes a web-accessible database developed by NASA to coordinate and manage tasks at three organizational levels. Other topics in this paper cover integration technologies and techniques for process modeling and enterprise architectures.

  14. Design Optimization of a Variable-Speed Power Turbine

    NASA Technical Reports Server (NTRS)

    Hendricks, Eric S.; Jones, Scott M.; Gray, Justin S.

    2014-01-01

    NASA's Rotary Wing Project is investigating technologies that will enable the development of revolutionary civil tilt rotor aircraft. Previous studies have shown that for large tilt rotor aircraft to be viable, the rotor speeds need to be slowed significantly during the cruise portion of the flight. This requirement to slow the rotors during cruise presents an interesting challenge to the propulsion system designer as efficient engine performance must be achieved at two drastically different operating conditions. One potential solution to this challenge is to use a transmission with multiple gear ratios and shift to the appropriate ratio during flight. This solution will require a large transmission that is likely to be maintenance intensive and will require a complex shifting procedure to maintain power to the rotors at all times. An alternative solution is to use a fixed gear ratio transmission and require the power turbine to operate efficiently over the entire speed range. This concept is referred to as a variable-speed power-turbine (VSPT) and is the focus of the current study. This paper explores the design of a variable speed power turbine for civil tilt rotor applications using design optimization techniques applied to NASA's new meanline tool, the Object-Oriented Turbomachinery Analysis Code (OTAC).

  15. Laser-induced dissociation processes of protonated glucose: dehydration reactions vs cross-ring dissociation

    NASA Astrophysics Data System (ADS)

    Dyakov, Y. A.; Kazaryan, M. A.; Golubkov, M. G.; Gubanova, D. P.; Bulychev, N. A.; Kazaryan, S. M.

    2018-04-01

    Studying the processes that occur in biological systems under irradiation is critically important for understanding how these systems work. One of the main problems stimulating interest in the processes of photo-induced excitation and ionization of biomolecules is the necessity of identifying them by various mass spectrometry (MS) methods. While simple analysis of small molecules became a standard MS technique long ago, recognition of large molecules, especially carbohydrates, is still a difficult problem and requires sophisticated techniques and complicated computer analysis. Due to the large variety of substances in the samples, as well as the complexity of the processes occurring after excitation/ionization of the molecules, the recognition efficiency of MS techniques for carbohydrates is still not high enough. Additional theoretical and experimental analysis of ionization and dissociation processes in various kinds of polysaccharides, beginning from the simplest ones, is necessary. In this work, we extend previous theoretical and experimental studies of saccharides and concentrate on protonated glucose. We pay particular attention to the cross-ring dissociation and water-loss reactions because of their importance for distinguishing the various isomers of carbohydrate molecules (for example, α- and β-glucose).

  16. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)

    NASA Astrophysics Data System (ADS)

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

    Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than minimizing the residual between model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
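
    For contrast with the VFOM's virtual objective, the sketch below shows the conventional pairwise-hyperboloid formulation it builds on: each sensor pair constrains the source to a hyperboloid of constant range difference, independent of the unknown origin time (geometry and noise level invented for illustration).

```python
import numpy as np
from scipy.optimize import minimize

sensors = np.array([[0., 0., 0.], [100., 0., 0.], [0., 100., 0.],
                    [0., 0., 100.], [100., 100., 50.]])
v = 5000.0                                  # wave speed, m/s (assumed)
src = np.array([40., 60., 30.])             # true source, used for synthesis
t = np.linalg.norm(sensors - src, axis=1) / v
t += np.random.normal(0, 1e-4, t.size)      # picking errors on the arrivals

def hyperboloid_misfit(x):
    # For each sensor pair (i, j), compare the geometric range difference
    # with the picked arrival-time difference; origin time cancels out
    d = np.linalg.norm(sensors - x, axis=1)
    err = 0.0
    for i in range(len(sensors)):
        for j in range(i + 1, len(sensors)):
            err += ((d[i] - d[j]) / v - (t[i] - t[j])) ** 2
    return err

print(minimize(hyperboloid_misfit, x0=sensors.mean(axis=0)).x)  # ~ (40, 60, 30)
```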

  17. Using unmanned aerial vehicle (UAV) surveys and image analysis in the study of large surface-associated marine species: a case study on reef sharks Carcharhinus melanopterus shoaling behaviour.

    PubMed

    Rieucau, G; Kiszka, J J; Castillo, J C; Mourier, J; Boswell, K M; Heithaus, M R

    2018-06-01

    A novel image analysis-based technique applied to unmanned aerial vehicle (UAV) survey data is described to detect and locate individual free-ranging sharks within aggregations. The method allows rapid collection of data and quantification of fine-scale swimming and collective patterns of sharks. We demonstrate the usefulness of this technique in a small-scale case study exploring the shoaling tendencies of blacktip reef sharks Carcharhinus melanopterus in a large lagoon within Moorea, French Polynesia. Using our approach, we found that C. melanopterus displayed increased alignment with shoal companions when distributed over a sandflat, where they are regularly fed for ecotourism purposes, compared with when they shoaled in a deeper adjacent channel. Our case study highlights the potential of a relatively low-cost method that combines UAV survey data and image analysis to detect differences in shoaling patterns of free-ranging sharks in shallow habitats. This approach offers an alternative to current techniques commonly used in controlled settings that require time-consuming post-processing effort.
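
    One quantity of the kind this method yields, shoal alignment, can be computed from the extracted heading of each detected shark. The sketch below shows the standard circular "polarization" measure (1 = perfectly aligned, 0 = random); the heading values are made-up illustrations, and the paper's actual alignment metric may differ.

      # Circular polarization of a group from individual heading angles.
      # Heading data are invented for illustration.
      import numpy as np

      def polarization(headings_rad):
          vectors = np.exp(1j * np.asarray(headings_rad))
          return float(np.abs(vectors.mean()))   # length of mean heading vector

      sandflat = [0.10, 0.25, 0.05, 0.18, 0.12]  # tightly aligned headings
      channel = [0.1, 2.8, 4.5, 1.2, 5.9]        # near-random headings
      print(polarization(sandflat), polarization(channel))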

  18. Development testing of large volume water sprays for warm fog dispersal

    NASA Technical Reports Server (NTRS)

    Keller, V. W.; Anderson, B. J.; Burns, R. A.; Lala, G. G.; Meyer, M. B.; Beard, K. V.

    1986-01-01

    A new brute-force method of warm fog dispersal is described. The method uses large-volume recycled water sprays to create curtains of falling drops through which the fog is processed by the ambient wind and spray-induced air flow. Fog droplets are removed by coalescence and rainout. The efficiency of the technique depends upon the drop size spectra in the spray, the height to which the spray can be projected, the efficiency with which fog-laden air is processed through the curtain of spray, and the rate at which new fog may form due to temperature differences between the air and the spray water. Results of a field test program, implemented to develop the data base necessary to assess the proposed method, are presented. Analytical calculations based upon the field test results indicate that the proposed method of warm fog dispersal is feasible. More convincingly, the technique was successfully demonstrated in the one natural fog event that occurred during the test program. Energy requirements for this technique are an order of magnitude less than those of a thermokinetic system. An important side benefit is the considerable emergency fire-extinguishing capability it provides along the runway.

  19. Range-Gated Metrology: An Ultra-Compact Sensor for Dimensional Stabilization

    NASA Technical Reports Server (NTRS)

    Lay, Oliver P.; Dubovitsky, Serge; Shaddock, Daniel A.; Ware, Brent; Woodruff, Christopher S.

    2008-01-01

    Point-to-point laser metrology systems can be used to stabilize large structures at the nanometer levels required for precision optical systems. Existing sensors are large and intrusive, however, with optical heads that consist of several optical elements and require multiple optical fiber connections. The use of point-to-point laser metrology has therefore been limited to applications where only a few gauges are needed and there is sufficient space to accommodate them. Range-Gated Metrology is a signal processing technique that preserves nanometer-level or better performance while enabling: (1) a greatly simplified optical head - a single fiber optic collimator - that can be made very compact, and (2) a single optical fiber connection that is readily multiplexed. This combination of features means that it will be straightforward and cost-effective to embed tens or hundreds of compact metrology gauges to stabilize a large structure. In this paper we describe the concept behind Range-Gated Metrology, demonstrate the performance in a laboratory environment, and give examples of how such a sensor system might be deployed.

  20. Two-photon Lee-Goldburg nuclear magnetic resonance: Simultaneous homonuclear decoupling and signal acquisition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michal, Carl A.; Hastings, Simon P.; Lee, Lik Hang

    2008-02-07

    We present NMR signals from a strongly coupled homonuclear spin system, ¹H nuclei in adamantane, acquired with simultaneous two-photon excitation under conditions of the Lee-Goldburg experiment. Small coils, having inside diameters of 0.36 mm, are used to achieve two-photon nutation frequencies of approximately 20 kHz. The very large rf field strengths required give rise to large Bloch-Siegert shifts that cannot be neglected. These experiments are found to be extremely sensitive to inhomogeneity of the applied rf field and, due to the Bloch-Siegert shift, exhibit a large asymmetry in response between the upper and lower Lee-Goldburg offsets. Two-photon excitation has the potential to enhance both the sensitivity and performance of homonuclear dipolar decoupling, but is made challenging by the high rf power required and the difficulties introduced by the inhomogeneous Bloch-Siegert shift. We briefly discuss a variation of the frequency-switched Lee-Goldburg technique, called four-quadrant Lee-Goldburg (4QLG), that produces net precession in the x-y plane, with a reduced chemical shift scaling factor of 1/3.

  1. Multi-Intelligence Analytics for Next Generation Analysts (MIAGA)

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Waltz, Ed

    2016-05-01

    Current analysts are inundated with large volumes of data from which extraction, exploitation, and indexing are required. A key need for next-generation analysts is an appropriate balance between machine analytics over raw data and the user's ability to interact with information through automation. Many quantitative intelligence tools and techniques have been developed, and they are examined here for how well analyst opportunities match recent technical trends such as big data, access to information, and visualization. The concepts and techniques summarized are derived from discussions with working analysts, documented trends of technical developments, and methods to engage future analysts with multi-intelligence services. For example, qualitative techniques should be matched against physical, cognitive, and contextual quantitative analytics for intelligence reporting. Future trends include enabling knowledge search, collaborative situational sharing, and agile support for empirical decision-making and analytical reasoning.

  2. Research and development of metals for medical devices based on clinical needs

    PubMed Central

    Hanawa, Takao

    2012-01-01

    The current research and development of metallic materials used for medicine and dentistry is reviewed. First, the general properties required of metals used in medical devices are summarized, followed by the need to develop α + β type Ti alloys with large elongation and β type Ti alloys with a low Young's modulus. In addition, nickel-free Ni–Ti alloys and austenitic stainless steels are described. As new topics, we review metals that are bioabsorbable and compatible with magnetic resonance imaging. Surface treatment and modification techniques to improve biofunctions and biocompatibility are categorized, and the related problems are presented at the end of this review. The metal surface may be biofunctionalized by various techniques, such as dry and wet processes. These techniques make it possible to apply metals to scaffolds in tissue engineering. PMID:27877526

  3. Extended sources near-field processing of experimental aperture synthesis data and application of the Gerchberg method for enhancing radiometric three-dimensional millimetre-wave images in security screening portals

    NASA Astrophysics Data System (ADS)

    Salmon, Neil A.

    2017-10-01

    Aperture synthesis for passive millimetre wave imaging provides a means to screen people for concealed threats in the extreme near-field configuration of a portal, a regime where the imager-to-subject distance is of the order of both the required depth of field and the field of view. Due to optical aberrations, focal plane array imagers cannot deliver the large depths of field and fields of view required in this regime. Active sensors can deliver these, but face challenges of illumination, speckle and multi-path effects when imaging canyon regions of the body. An aperture synthesis passive millimetre wave imaging system, by contrast, can deliver large depths of field and fields of view with no speckle effects, as the radiometric emission from the human body is spatially incoherent. Furthermore, since the aperture synthesis technique delivers half-wavelength spatial resolution in portal security screening scenarios, it can effectively screen the whole of the human body. Recent measurements are presented that demonstrate the three-dimensional imaging capability of extended sources using a 22 GHz aperture synthesis system. A comparison is made between imagery generated via the analytic Fourier transform and a gridding fast Fourier transform method; the analytic Fourier transform enables aliasing in the imagery to be identified more clearly. Initial results are also presented of how the Gerchberg technique, an image enhancement algorithm used in radio astronomy, is adapted for three-dimensional imaging in security screening. This technique is shown to improve the quality of imagery without adding extra receivers to the imager. The requirements of a walk-through security screening system for use at entrances to airport departure lounges are discussed, concluding that they can be met by an aperture synthesis imager.
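
    The Gerchberg iteration named above alternates between enforcing the measured spatial frequencies and a known-support constraint in the image domain. The one-dimensional Python sketch below shows the idea under stated assumptions (toy emission profile, invented support mask and measured band); the three-dimensional screening version in the paper is substantially more involved.

      # 1-D Gerchberg (Gerchberg-Papoulis) bandwidth extrapolation sketch.
      # Sizes, support, and measured band are illustrative assumptions.
      import numpy as np

      n = 256
      support = np.zeros(n, bool); support[96:160] = True   # object lies here
      truth = np.where(support, np.hanning(n), 0.0)         # toy emission profile
      band = np.zeros(n, bool); band[:20] = band[-19:] = True  # measured low freqs
      measured = np.fft.fft(truth) * band

      image = np.real(np.fft.ifft(measured))                # dirty image to start
      for _ in range(200):
          image[~support] = 0.0                             # image-domain constraint
          spectrum = np.fft.fft(image)
          spectrum[band] = measured[band]                   # re-impose measured data
          image = np.real(np.fft.ifft(spectrum))

      print("residual:", np.linalg.norm(image - truth) / np.linalg.norm(truth))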

  4. Transverse preputial onlay island flap urethroplasty for single-stage correction of proximal hypospadias.

    PubMed

    Singal, Arbinder Kumar; Dubey, Manish; Jain, Viral

    2016-07-01

    Transverse preputial onlay island flap urethroplasty (TPOIF) was described initially for distal hypospadias, but has seen extended application to proximal hypospadias. We describe a set of modifications to the technique and results in a large series of proximal hypospadias. All children who underwent TPOIF repair for proximal hypospadias (proximal penile, penoscrotal and scrotal) from June 2006 to June 2013 by a single surgeon were followed prospectively until June 2014. A standard technique and postoperative protocol were followed. Salient points of the technique are: (1) dissection of the dartos pedicle to the penopubic junction to prevent penile torsion, (2) incorporation of the spongiosum in the urethroplasty, (3) midline urethral plate incision in the glans (hinging the plate), and (4) a dartos blanket cover over the whole urethroplasty. Of 136 children with proximal hypospadias, the 92 who underwent TPOIF formed the study group. Of these 92 children, 48 (52%) required tunica albuginea plication for chordee correction. In total, 16 patients (17%) developed 24 complications and 11 children (12%) required second surgeries: fistula closure in 7 (with meatoplasty in 5), glansplasty for glans dehiscence in 2, and excision of a diverticulum in 2. Two children required a third surgery. Only 5 children had noticeable penile torsion (less than 30 degrees), and 7 had a patulous meatus. Transverse preputial onlay island flap urethroplasty can deliver reliable cosmetic and functional outcomes in proximal hypospadias.

  5. Multi Car Elevator Control by using Learning Automaton

    NASA Astrophysics Data System (ADS)

    Shiraishi, Kazuaki; Hamagami, Tomoki; Hirata, Hironori

    We study an adaptive control technique for multi car elevators (MCEs) based on learning automata (LAs). The MCE is a high-performance, near-future elevator system with multiple shafts and multiple cars; its strong point is that it realizes a large carrying capacity in a small shaft area. However, because the operation is so complicated, efficient MCE control is difficult to achieve with top-down approaches. For example, cars "bunching up together" is a typical phenomenon in a simple traffic environment like the MCE. Furthermore, adapting to varying configuration requirements is a serious issue in real elevator service. To resolve these issues, the control system of each car must behave autonomously, and the learning automaton is an appropriate solution for this kind of simple traffic control. First, we assign a stochastic automaton (SA) to each car's control system. Each SA then varies its action probability distribution to adapt to the environment, with its policy evaluated by passenger waiting times; this is an LA that learns the environment autonomously. Using the LA-based control technique, MCE operation efficiency is evaluated through simulation experiments. Results show that the technique reduces waiting times efficiently, and we confirm that the system can adapt to a dynamic environment.
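
    As a concrete, hedged illustration of a learning-automaton update of the kind implied here, the sketch below uses the classical linear reward-inaction (L_RI) scheme: the probability of the chosen action is reinforced when the environment signals a reward, such as an acceptable passenger waiting time. The action set, reward rule, and learning rate are invented for illustration, not the paper's exact scheme.

      # Linear reward-inaction (L_RI) learning automaton, illustrative only.
      import numpy as np

      rng = np.random.default_rng(0)
      actions = ["serve_up_calls", "serve_down_calls", "park_at_lobby"]
      p = np.full(len(actions), 1.0 / len(actions))   # action probabilities
      alpha = 0.1                                     # learning rate

      def environment_reward(action_index):
          # Stand-in for the elevator simulation: True when the resulting
          # average waiting time was acceptable (probabilities are made up).
          return rng.random() < [0.7, 0.5, 0.3][action_index]

      for step in range(1000):
          a = rng.choice(len(actions), p=p)
          if environment_reward(a):       # reward: reinforce chosen action
              p = p - alpha * p           # shrink all probabilities ...
              p[a] += alpha               # ... and boost the rewarded one
          # inaction on penalty: probabilities left unchanged

      print(dict(zip(actions, np.round(p, 3))))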

  6. Coarse-Grain Bandwidth Estimation Scheme for Large-Scale Network

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Jennings, Esther H.; Sergui, John S.

    2013-01-01

    A large-scale network that supports a large number of users can have an aggregate data rate of hundreds of Mbps at any time. High-fidelity simulation of such a network may be too complicated and memory-intensive for typical commercial-off-the-shelf (COTS) tools. Unlike a large commercial wide-area network (WAN), which shares diverse network resources among diverse users and has a complex topology that requires routing and flow control, the ground communication links of a space network operate under the assumption of guaranteed, dedicated bandwidth allocation between specific sparse endpoints in a star-like topology. This work solved the network design problem of estimating the bandwidths of a ground network architecture option that offers different service classes to meet the latency requirements of different user data types. A top-down analysis and simulation approach was created to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. These techniques were used to estimate the WAN bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network. A new analytical approach, called the "leveling scheme," was developed to model the store-and-forward mechanism of the network data flow; "leveling" refers to spreading data across a longer time horizon without violating the corresponding latency requirement of the data type. Two versions of the leveling scheme were developed: (1) a straightforward version that simply spreads the data of each data type across the time horizon, does not account for interactions among data types within a pass or across overlapping passes at a network node, and is therefore inherently sub-optimal; and (2) a two-state Markov leveling scheme that captures the second-order behavior of the store-and-forward mechanism and the interactions among data types within a pass. The novelty of this approach lies in the modeling of the store-and-forward mechanism of each network node: data is sent to an intermediate network node, temporarily stored, and sent at a later time to the destination node or to another intermediate node. Store-and-forward can be applied both to space-based networks that have intermittent connectivity and to ground-based networks with deterministic connectivity. For ground-based networks, the store-and-forward mechanism regulates the network data flow and link resource utilization so that user data types can be delivered to their destination nodes without violating their respective latency requirements.
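
    A hedged sketch of the straightforward leveling version: each data product generated at time t with latency requirement L may arrive any time in [t, t + L], so spreading its volume evenly across that window lowers the peak rate the link must support. The traffic list, horizon, and units below are invented for illustration.

      # Straightforward leveling: spread each volume over its latency window
      # and read off the peak aggregate rate. All numbers are illustrative.
      import numpy as np

      horizon = 600                       # seconds of timeline, 1-s bins
      rate = np.zeros(horizon)

      # (generation time s, volume Mb, latency requirement s) per data product
      traffic = [(10, 400, 60), (30, 1200, 300), (45, 200, 15), (50, 900, 120)]

      for t, volume, latency in traffic:
          window = min(latency, horizon - t)
          rate[t:t + window] += volume / window   # even spread over the window

      print("required bandwidth (Mbps):", rate.max())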

  7. Assessing Requirements Quality through Requirements Coverage

    NASA Technical Reports Server (NTRS)

    Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt

    2008-01-01

    In model-based development, the development effort is centered around a formal description of the proposed software system: the model. This model is derived from high-level requirements describing the expected behavior of the software. For validation and verification purposes, the model can be subjected to various types of analysis, for example, completeness and consistency analysis [6], model checking [3], theorem proving [1], and test-case generation [4, 7]. This development paradigm is making rapid inroads in certain industries, e.g., automotive, avionics, space applications, and medical technology. The shift towards model-based development naturally leads to changes in the verification and validation (V&V) process. The model validation problem, determining that the model accurately captures the customer's high-level requirements, has received little attention, and the sufficiency of validation activities has largely been determined through ad-hoc methods. Since the model serves as the central artifact, its correctness with respect to the user's needs is absolutely crucial. In our investigation, we attempt to answer two questions with respect to validation: (1) Are the requirements sufficiently defined for the system? and (2) How well does the model implement the behaviors specified by the requirements? The second question can be addressed using formal verification. Nevertheless, the size and complexity of many industrial systems make formal verification infeasible even when a formal model and formalized requirements exist. Thus, presently, there is no objective way of answering these two questions. To this end, we propose an approach based on testing that, given a set of formal requirements, explores the relationship between requirements-based structural test-adequacy coverage and model-based structural test-adequacy coverage. The proposed technique uses requirements coverage metrics defined in [9] on formal high-level software requirements, and existing model coverage metrics such as the Modified Condition and Decision Coverage (MC/DC) used when testing highly critical software in the avionics industry [8]. Our work is related to Chockler et al. [2], but we base our work on traditional testing techniques as opposed to verification techniques.

  8. Computational aspects of sensitivity calculations in linear transient structural analysis. Ph.D. Thesis - Virginia Polytechnic Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1990-01-01

    A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of the number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first is an overall finite difference method, in which the analysis is repeated for perturbed designs. The second is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed-mode approach resulted in very poor approximations of the stress sensitivities: almost all of the original modes were required for an accurate sensitivity, and for small numbers of modes the accuracy was extremely poor. To overcome this, two semi-analytical techniques were developed. The first accounts for the change in eigenvectors through approximate eigenvector derivatives. The second applies the mode acceleration method of transient analysis to the sensitivity calculations. Both yield accurate stress sensitivities with a small number of modes and at much lower computational cost than recalculating the vibration modes for use in an overall finite difference method.
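
    To make the contrast concrete, the sketch below uses a static analogue (the paper treats transient response): for K(p)u = f, an overall finite difference re-solves at perturbed designs, while the semi-analytical approach differentiates the governing equation directly, giving du/dp = -K^{-1}(dK/dp)u. The 2-DOF system and its parameter dependence are invented for illustration.

      # Finite-difference vs semi-analytical sensitivity on a toy system.
      import numpy as np

      def K(p):   # stiffness matrix depending on design parameter p (assumed)
          return np.array([[2.0 + p, -1.0], [-1.0, 2.0]])

      dK_dp = np.array([[1.0, 0.0], [0.0, 0.0]])   # analytical derivative of K
      f = np.array([1.0, 0.0])
      p, h = 1.0, 1e-6

      u = np.linalg.solve(K(p), f)

      # Overall finite difference: repeat the analysis at perturbed designs.
      du_fd = (np.linalg.solve(K(p + h), f) - np.linalg.solve(K(p - h), f)) / (2 * h)

      # Semi-analytical: differentiate the governing equation directly.
      du_sa = np.linalg.solve(K(p), -dK_dp @ u)

      print(du_fd, du_sa)   # the two estimates agree to FD accuracy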

  9. CrossTalk. The Journal of Defense Software Engineering. Volume 13, Number 6, June 2000

    DTIC Science & Technology

    2000-06-01

    Techniques for Efficiently Generating and Testing Software: this paper presents a proven process that uses advanced tools to design, develop and test... optimal software, by Keith R. Wegner. Large Software Systems—Back to Basics: development methods that work on small problems seem to not scale well to... Ability Requirements for Teamwork: Implications for Human Resource Management, Journal of Management, Vol. 20, No. 2, 1994. 11. Ferguson, Pat, Watts S

  10. Remote sensing in biological oceanography

    NASA Technical Reports Server (NTRS)

    Esaias, W. E.

    1981-01-01

    The main attribute of remote sensing is seen as its ability to measure distributions over large areas on a synoptic basis and to repeat this coverage at required time periods. The way in which the Coastal Zone Color Scanner, by showing the distribution of chlorophyll a, can locate areas productive in both phytoplankton and fishes is described. Lidar techniques are discussed, and it is pointed out that lidar will increase the depth range for observations.

  11. AFRL’s HP3 60mm Powder Gun

    DTIC Science & Technology

    2012-08-08

    reminiscent of wood-grain. It is unknown what effect the different construction techniques will have on the material's suitability as a projectile... large plastic bar and mounts on the rear of the target plate are the interferometry probe holder (and probe)... required to extrude the tapered boot through the tapering cone of the orifice plate, the breech pressure pulse shape (how long the high pressures are

  12. Otorhinolaryngological aspects of sleep-related breathing disorders

    PubMed Central

    Virk, Jagdeep S.

    2016-01-01

    Snoring and obstructive sleep apnoea (OSA) are disorders within a wide spectrum of sleep-related breathing disorders (SRBD). Given the obesity epidemic, these conditions will become increasingly prevalent and will continue to impose a large economic burden. A thorough clinical evaluation and appropriate investigations allow stratification of patients into appropriate treatment groups. A multidisciplinary team is required to manage these patients. Patient selection is critical in ensuring successful surgical and non-surgical outcomes. A wide range of options is available, and further long-term prospective studies, with standardised data capture and outcome goals, are required to evaluate the most appropriate techniques and long-term success rates. PMID:26904262

  13. "Virtual Welding," a new aid for teaching Manufacturing Process Engineering

    NASA Astrophysics Data System (ADS)

    Portela, José M.; Huerta, María M.; Pastor, Andrés; Álvarez, Miguel; Sánchez-Carrilero, Manuel

    2009-11-01

    Overcrowding in the classroom is a serious problem in universities, particularly in specialties that require hands-on teaching practice. These practices often require expenditure on consumables and a space large enough to hold both the necessary materials and the materials that have already been used. Apart from the budget, another problem concerns the attention paid to each student. The use of simulation systems in the early learning stages of the welding technique can prove very beneficial, thanks both to the error-detection functions installed in the system, which give the student feedback during the execution of the practice session, and to the significant savings in consumables and energy.

  14. Parallel In Vivo DNA Assembly by Recombination: Experimental Demonstration and Theoretical Approaches

    PubMed Central

    Shi, Zhenyu; Wedd, Anthony G.; Gras, Sally L.

    2013-01-01

    The development of synthetic biology requires rapid batch construction of large gene networks from combinations of smaller units. Despite the availability of computational predictions for well-characterized enzymes, most synthetic biology projects require combinatorial construction and testing for optimization. A new building-brick-style parallel DNA assembly framework for simple and flexible batch construction is presented here. It is based on robust recombination steps and allows a variety of DNA assembly techniques to be organized for complex constructions (with or without scars). The assembly of five DNA fragments into a host genome was performed as an experimental demonstration. PMID:23468883

  15. Rapid prototyping and AI programming environments applied to payload modeling

    NASA Technical Reports Server (NTRS)

    Carnahan, Richard S., Jr.; Mendler, Andrew P.

    1987-01-01

    This effort focused on using artificial intelligence (AI) programming environments and rapid prototyping to aid in both manned and unmanned space flight payload simulation and training. The significant problems addressed are the large amount of development time required to design and implement just one of these payload simulations, and the relative inflexibility of the resulting model to future modification. Results of this effort suggest that both rapid prototyping and AI programming environments can significantly reduce development time and cost when applied to the domain of payload modeling for crew training. The techniques employed are applicable to a variety of domains where models or simulations are required.

  16. Design and implementation of robust decentralized control laws for the ACES structure at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Collins, Emmanuel G., Jr.; Phillips, Douglas J.; Hyland, David C.

    1990-01-01

    Many large space system concepts will require active vibration control to satisfy critical performance requirements such as line-of-sight accuracy. For these concepts to become operational, it is imperative that the benefits of active vibration control be demonstrated practically in ground-based experiments. The results of one such experiment, which successfully demonstrated active vibration control of a flexible structure, are presented. The testbed is the Active Control Technique Evaluation for Spacecraft (ACES) structure at NASA Marshall Space Flight Center. The ACES structure is dynamically traceable to future space systems and is especially suited to the study of line-of-sight control issues.

  17. Optimization Based Efficiencies in First Order Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Peck, Jeffrey A.; Mahadevan, Sankaran

    2003-01-01

    This paper develops a method for updating the gradient vector of the limit state function in reliability analysis using Broyden's rank-one updating technique. In problems that use a commercial code as a black box, the gradient calculations are usually done using a finite difference approach, which becomes very expensive for large system models. The proposed method replaces the finite difference gradient calculations in a standard first order reliability method (FORM) with Broyden's quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results are compared to standard FORM results. It is found that BFORM typically requires fewer function evaluations than FORM to converge to the same answer.
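
    For reference, the rank-one (secant) update at the heart of this idea is sketched below: after a step dx with observed change dg in the limit-state value, the gradient estimate is corrected so that the secant condition holds, avoiding a finite-difference sweep at every iterate. The toy limit-state function and step rule are illustrative assumptions, not the BFORM algorithm itself.

      # Broyden rank-one update of a gradient estimate, illustrative only.
      import numpy as np

      def limit_state(x):                  # assumed toy g(x); failure when < 0
          return 5.0 - x[0] ** 2 - 2.0 * x[1]

      def broyden_update(grad, dx, dg):
          """Rank-one secant update: enforce grad_new . dx = dg."""
          return grad + (dg - grad @ dx) / (dx @ dx) * dx

      x_old = np.array([1.0, 1.0])
      grad = np.array([-2.0, -2.0])        # initial gradient (e.g., one FD pass)
      for _ in range(5):
          x_new = x_old + 0.1 * grad / np.linalg.norm(grad)  # placeholder step
          dx = x_new - x_old
          dg = limit_state(x_new) - limit_state(x_old)       # one evaluation
          grad = broyden_update(grad, dx, dg)                # no FD re-sweep
          x_old = x_new

      print("updated gradient estimate:", grad)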

  18. Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations.

    PubMed

    Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot

    2013-10-01

    Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between the model and experimental or patient data. In this study, we propose basic techniques that aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented that determines the bulk conductivities that yield prescribed conduction velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios.
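
    A minimal sketch of such an iterative tuning loop, assuming the standard scaling in which conduction velocity grows roughly with the square root of bulk conductivity; the simulate_velocity stand-in replaces a real bidomain solve, and all numbers and tolerances are illustrative.

      # Iterative conductivity tuning toward a prescribed conduction velocity.
      import numpy as np

      def simulate_velocity(sigma):
          # Placeholder for running the tissue model and measuring CV (m/s).
          return 0.6 * np.sqrt(sigma / 0.2)

      target_v = 0.7        # prescribed conduction velocity, m/s
      sigma = 0.1           # initial bulk conductivity guess, S/m

      for iteration in range(20):
          v = simulate_velocity(sigma)
          if abs(v - target_v) < 1e-4:
              break
          sigma *= (target_v / v) ** 2   # CV ~ sqrt(sigma) => sigma ~ CV^2

      print(f"converged sigma = {sigma:.4f} S/m after {iteration} iterations")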

  19. Installation Status of the Electron Beam Profiler for the Fermilab Main Injector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thurman-Keup, R.; Alvarez, M.; Fitzgerald, J.

    2015-11-06

    The planned neutrino program at Fermilab requires large proton beam intensities in excess of 2 MW. Measuring the transverse profiles of these high intensity beams is challenging and often depends on non-invasive techniques. One such technique involves measuring the deflection of a probe beam of electrons with a trajectory perpendicular to the proton beam. A device such as this is already in use at the Spallation Neutron Source at ORNL, and the installation of a similar device is underway in the Main Injector at Fermilab. The present installation status of the electron beam profiler for the Main Injector will be discussed together with some simulations and test stand results.

  20. Software manual for operating particle displacement tracking data acquisition and reduction system

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.

    1991-01-01

    The software manual is presented. It describes the steps required to record, analyze, and reduce Particle Image Velocimetry (PIV) data using the Particle Displacement Tracking (PDT) technique. The PDT system is an all-electronic technique employing a CCD video camera and a large-memory-buffer frame-grabber board to record low velocity (less than or equal to 20 cm/s) flows. Using a simple encoding scheme, a time sequence of single-exposure images is time-coded into a single image and then processed to track particle displacements and determine 2-D velocity vectors. All of the PDT data acquisition, analysis, and data reduction software is written to run on an 80386 PC.
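
    The displacement-tracking step can be illustrated with a nearest-neighbor match between particle centroids from successive exposures, as in the hedged sketch below; a real PDT system also uses the encoding scheme to resolve ambiguous matches. Positions, units, and the exposure interval are made-up test data.

      # Nearest-neighbor particle displacement tracking, illustrative only.
      import numpy as np

      dt = 1.0 / 30.0                              # s between exposures (assumed)
      frame1 = np.array([[10.0, 12.0], [40.0, 44.0], [70.0, 20.0]])  # mm
      frame2 = np.array([[10.5, 12.2], [40.8, 44.1], [70.2, 21.0]])  # mm

      velocities = []
      for p in frame1:
          d = np.linalg.norm(frame2 - p, axis=1)   # distances to all candidates
          q = frame2[np.argmin(d)]                 # nearest-neighbor match
          velocities.append((q - p) / dt)          # 2-D velocity vector, mm/s

      print(np.round(velocities, 2))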
