Science.gov

Sample records for advanced large-scale manufacturing

  1. Advances in compact manufacturing for shape and performance controllability of large-scale components - a review

    NASA Astrophysics Data System (ADS)

    Qin, Fangcheng; Li, Yongtang; Qi, Huiping; Ju, Li

    2017-01-01

    Research on compact manufacturing technology for the shape and performance controllability of metallic components can simplify the manufacturing process and improve its reliability while still satisfying macro/micro-structure requirements. It is not only a key path toward improving performance, saving material and energy, and achieving green manufacturing of components used in major equipment, but also a challenging subject at the frontier of advanced plastic forming. Providing a new perspective on the manufacture of these critical components is therefore significant. Focusing on high-performance large-scale components such as bearing rings, flanges, railway wheels, and thick-walled pipes, the conventional processes and their state of development are summarized. Existing problems, including multi-pass heating, wasted material and energy, high cost and high emissions, are discussed, and it is pointed out that present approaches cannot meet the demands of manufacturing high-quality components. New techniques are therefore introduced in detail: casting-rolling compound precise forming of rings, compact manufacturing of duplex-metal composite rings, compact manufacturing of railway wheels, and casting-extruding continuous forming of thick-walled pipes. The corresponding research topics, such as casting of ring blanks, hot ring rolling, near-solid-state pressure forming, and hot extruding, are elaborated. Findings on through-thickness microstructure evolution and mechanical properties are also presented; components produced by the new techniques are characterized mainly by fine and homogeneous grains. Possible directions for further development of these techniques are suggested, and the key scientific problems are proposed for the first time. These results and conclusions offer reference value and guidance for the integrated control of shape and performance in advanced compact manufacturing.

  2. Large-scale Advanced Propfan (LAP) program

    NASA Technical Reports Server (NTRS)

    Sagerser, D. A.; Ludemann, S. G.

    1985-01-01

    The propfan is an advanced propeller concept which maintains the high efficiencies traditionally associated with conventional propellers at the higher aircraft cruise speeds associated with jet transports. The large-scale advanced propfan (LAP) program extends the research done on 2 ft diameter propfan models to a 9 ft diameter article. The program includes design, fabrication, and testing of both an eight bladed, 9 ft diameter propfan, designated SR-7L, and a 2 ft diameter aeroelastically scaled model, SR-7A. The LAP program is complemented by the propfan test assessment (PTA) program, which takes the large-scale propfan and mates it with a gas generator and gearbox to form a propfan propulsion system and then flight tests this system on the wing of a Gulfstream 2 testbed aircraft.

  3. Large Scale Composite Manufacturing for Heavy Lift Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Stavana, Jacob; Cohen, Leslie J.; Houseal, Keth; Pelham, Larry; Lort, Richard; Zimmerman, Thomas; Sutter, James; Western, Mike; Harper, Robert; Stuart, Michael

    2012-01-01

    Risk reduction for large-scale composite manufacturing is an important goal in producing lightweight components for heavy lift launch vehicles. NASA and an industry team successfully employed a building block approach using low-cost Automated Tape Layup (ATL) of autoclave and Out-of-Autoclave (OoA) prepregs. Several large, curved sandwich panels were fabricated at HITCO Carbon Composites. The aluminum honeycomb core sandwich panels are segments of a 1/16th arc from a 10 meter cylindrical barrel. Lessons learned highlight the manufacturing challenges involved in producing lightweight composite structures such as fairings for heavy lift launch vehicles.

  4. A new large-scale manufacturing platform for complex biopharmaceuticals.

    PubMed

    Vogel, Jens H; Nguyen, Huong; Giovannini, Roberto; Ignowski, Jolene; Garger, Steve; Salgotra, Anil; Tom, Jennifer

    2012-12-01

    Complex biopharmaceuticals, such as recombinant blood coagulation factors, are addressing critical medical needs and represent a growing multibillion-dollar market. For commercial manufacturing of such, sometimes inherently unstable, molecules it is important to minimize product residence time in non-ideal milieu in order to obtain acceptable yields and consistently high product quality. Continuous perfusion cell culture allows minimization of residence time in the bioreactor, but also brings unique challenges in product recovery, which requires innovative solutions. In order to maximize yield, process efficiency, and facility and equipment utilization, we have developed, scaled up and successfully implemented a new integrated manufacturing platform at commercial scale. This platform consists of a (semi-)continuous cell separation process based on a disposable flow path and integrated with the upstream perfusion operation, followed by membrane chromatography on large-scale adsorber capsules in rapid cycling mode. Implementation of the platform at commercial scale for a new product candidate led to a yield improvement of 40% compared to the conventional process technology, while product quality has been shown to be more consistently high. Over 1,000,000 L of cell culture harvest have been processed with 100% success rate to date, demonstrating the robustness of the new platform process in GMP manufacturing. While membrane chromatography is well established for polishing in flow-through mode, this is its first commercial-scale application for bind/elute chromatography in the biopharmaceutical industry and demonstrates its potential in particular for manufacturing of potent, low-dose biopharmaceuticals.

  5. Process monitoring during manufacturing of large-scale composite parts

    NASA Astrophysics Data System (ADS)

    Heider, Dirk; Eckel, Douglas A., II; Don, Roderic C.; Fink, Bruce K.; Gillespie, John W., Jr.

    1999-01-01

    One of the inherent problems with the processing of composites is the development of internal stresses and the resulting warpage, which produces out-of-tolerance components. This investigation examines fiber-optic sensor methods that can be applied to measure internal strain, and thus residual stress, during production. Extrinsic Fabry-Perot interferometers (EFPI) and Bragg gratings are utilized to monitor strain behavior during manufacturing of large-scale composite parts. Initially, a 24 in X 18 in X 1 in thick part was manufactured using the vacuum-assisted resin transfer molding (VARTM) technique. In this part, one Bragg grating, multiple thermocouples and a resin flow sensor (SMARTweave) were integrated to measure the flow and cure behavior during production. An AGEMA thermal imaging camera verified the temperature history on the part surface. In addition, several EFPIs and Bragg gratings were implemented into three 13 ft X 32 ft X 20.3 in civilian bridge deck test specimens manufactured with the VARTM process. The Bragg gratings showed great promise for capturing changes in strain due to residual stress during cure. The actual implementation of fiber optics into large composite parts is a challenge, and the problems of sensor survivability in these parts are addressed in this study. The fiber optic measurements, in combination with SMARTweave's ability to monitor flow, could lead to a sensor system that allows feedback for process control of the VARTM technique. In addition, the optical fibers will be used for health monitoring during the lifetime of the part.
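
    A Bragg grating reports strain through the shift of its reflected wavelength. Below is a minimal sketch of that conversion, assuming temperature-compensated readings and a typical photoelastic coefficient for silica fiber; neither value is taken from the study above.

```python
# Convert a fiber Bragg grating (FBG) wavelength shift to strain.
# Assumes a strain-only shift (temperature compensated) and a typical
# photoelastic coefficient for silica fiber; values are illustrative,
# not taken from the study above.

P_E = 0.22  # effective photoelastic coefficient of silica (typical)

def bragg_strain(lambda_b_nm: float, delta_lambda_nm: float) -> float:
    """Return axial strain from the relative Bragg wavelength shift:
    delta_lambda / lambda_b = (1 - P_E) * strain."""
    return (delta_lambda_nm / lambda_b_nm) / (1.0 - P_E)

# Example: a 1550 nm grating shifting by 1.2 nm during cure
# corresponds to roughly 993 microstrain.
strain = bragg_strain(1550.0, 1.2)
print(f"strain = {strain * 1e6:.0f} microstrain")
```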

  6. Workshop Report on Additive Manufacturing for Large-Scale Metal Components - Development and Deployment of Metal Big-Area-Additive-Manufacturing (Large-Scale Metals AM) System

    SciTech Connect

    Babu, Sudarsanam Suresh; Love, Lonnie J.; Peter, William H.; Dehoff, Ryan

    2016-05-01

    Additive manufacturing (AM) is considered an emerging technology that is expected to transform the way industry can make low-volume, high-value complex structures. This disruptive technology promises to replace legacy manufacturing methods for the fabrication of existing components, in addition to bringing new innovation for new components with increased functional and mechanical properties. This report outlines the outcome of a workshop on large-scale metal additive manufacturing held at Oak Ridge National Laboratory (ORNL) on March 11, 2016. The charter for the workshop was outlined by the Department of Energy (DOE) Advanced Manufacturing Office program manager. The status and impact of Big Area Additive Manufacturing (BAAM) for polymer matrix composites was presented as the background motivation for the workshop. Next, extension of the underlying technology to low-cost metals was proposed with the following goals: (i) high deposition rates (approaching 100 lbs/h); (ii) low cost (<$10/lb) for steel, iron, aluminum, and nickel, as well as higher-cost titanium; (iii) large components (major axis greater than 6 ft); and (iv) compliance with property requirements. The above concept was discussed in depth by representatives from different industrial sectors including welding, metal fabrication machinery, energy, construction, aerospace and heavy manufacturing. In addition, DOE's newly launched High Performance Computing for Manufacturing (HPC4MFG) program was reviewed. This program will apply thermo-mechanical models to elucidate deeper understanding of the interactions between design, process, and materials during additive manufacturing. Following these presentations, all the attendees took part in a brainstorming session where everyone identified the top 10 challenges in large-scale metal AM from their own perspective. The feedback was analyzed and grouped into different categories including (i) CAD to PART software, (ii) selection of energy source, (iii
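
    The workshop's quantitative targets translate directly into rough build-time and feedstock-cost figures; a back-of-envelope sketch using only the numbers quoted above (the part mass is a hypothetical input):

```python
# Rough build-time and feedstock-cost estimate from the workshop's
# targets: ~100 lbs/h deposition and <$10/lb metal feedstock.
# The 2,000 lb part mass is a hypothetical example, not from the report.

DEPOSITION_RATE_LB_PER_H = 100.0   # workshop target
FEEDSTOCK_COST_PER_LB = 10.0       # workshop upper bound for steel/Al/Ni

def estimate(part_mass_lb: float) -> tuple[float, float]:
    """Return (build hours, feedstock dollars) for a given part mass."""
    hours = part_mass_lb / DEPOSITION_RATE_LB_PER_H
    cost = part_mass_lb * FEEDSTOCK_COST_PER_LB
    return hours, cost

hours, cost = estimate(2000.0)
print(f"{hours:.0f} h of deposition, ~${cost:,.0f} in feedstock")
# -> 20 h of deposition, ~$20,000 in feedstock
```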

  7. Large-Scale Advanced Prop-Fan (LAP)

    NASA Technical Reports Server (NTRS)

    Degeorge, C. L.

    1988-01-01

    In recent years, considerable attention has been directed toward improving aircraft fuel efficiency. Analytical studies and research with wind tunnel models have demonstrated that the high inherent efficiency of low speed turboprop propulsion systems may now be extended to the Mach .8 flight regime of today's commercial airliners. This can be accomplished with a propeller employing a large number of thin, highly swept blades. The term Prop-Fan has been coined to describe such a propulsion system. In 1983 the NASA-Lewis Research Center contracted with Hamilton Standard to design, build and test a near full scale Prop-Fan, designated the Large Scale Advanced Prop-Fan (LAP). This report provides a detailed description of the LAP program. The assumptions and analytical procedures used in the design of Prop-Fan system components are discussed in detail. The manufacturing techniques used in the fabrication of the Prop-Fan are presented. Each of the tests run during the course of the program is also discussed, and the major conclusions derived from them are stated.

  8. Manufacturing Process Simulation of Large-Scale Cryotanks

    NASA Technical Reports Server (NTRS)

    Babai, Majid; Phillips, Steven; Griffin, Brian

    2003-01-01

    NASA's Space Launch Initiative (SLI) is an effort to research and develop the technologies needed to build a second-generation reusable launch vehicle. It is required that this new launch vehicle be 100 times safer and 10 times cheaper to operate than current launch vehicles. Part of the SLI includes the development of reusable composite and metallic cryotanks. The size of these reusable tanks is far greater than anything ever developed and exceeds the design limits of current manufacturing tools. Several design and manufacturing approaches have been formulated, but many factors must be weighed during the selection process. Among these factors are tooling reachability, cycle times, feasibility, and facility impacts. The manufacturing process simulation capabilities available at NASA's Marshall Space Flight Center have played a key role in down selecting between the various manufacturing approaches. By creating 3-D manufacturing process simulations, the varying approaches can be analyzed in a virtual world before any hardware or infrastructure is built. This analysis can detect and eliminate costly flaws in the various manufacturing approaches. The simulations check for collisions between devices, verify that design limits on joints are not exceeded, and provide cycle times which aid in the development of an optimized process flow. In addition, new ideas and concerns are often raised after seeing the visual representation of a manufacturing process flow. The output of the manufacturing process simulations allows for cost and safety comparisons to be performed between the various manufacturing approaches. This output helps determine which manufacturing process options reach the safety and cost goals of the SLI. As part of the SLI, The Boeing Company was awarded a basic period contract to research and propose options for both a metallic and a composite cryotank. Boeing then entered into a task agreement with the Marshall Space Flight Center to provide manufacturing

  9. Manufacturing Process Simulation of Large-Scale Cryotanks

    NASA Technical Reports Server (NTRS)

    Babai, Majid; Phillips, Steven; Griffin, Brian; Munafo, Paul M. (Technical Monitor)

    2002-01-01

    NASA's Space Launch Initiative (SLI) is an effort to research and develop the technologies needed to build a second-generation reusable launch vehicle. It is required that this new launch vehicle be 100 times safer and 10 times cheaper to operate than current launch vehicles. Part of the SLI includes the development of reusable composite and metallic cryotanks. The size of these reusable tanks is far greater than anything ever developed and exceeds the design limits of current manufacturing tools. Several design and manufacturing approaches have been formulated, but many factors must be weighed during the selection process. Among these factors are tooling reachability, cycle times, feasibility, and facility impacts. The manufacturing process simulation capabilities available at NASA's Marshall Space Flight Center have played a key role in down selecting between the various manufacturing approaches. By creating 3-D manufacturing process simulations, the varying approaches can be analyzed in a virtual world before any hardware or infrastructure is built. This analysis can detect and eliminate costly flaws in the various manufacturing approaches. The simulations check for collisions between devices, verify that design limits on joints are not exceeded, and provide cycle times which aid in the development of an optimized process flow. In addition, new ideas and concerns are often raised after seeing the visual representation of a manufacturing process flow. The output of the manufacturing process simulations allows for cost and safety comparisons to be performed between the various manufacturing approaches. This output helps determine which manufacturing process options reach the safety and cost goals of the SLI.
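
    These simulations check for collisions between devices; at their coarsest level such checks reduce to bounding-box overlap tests. A minimal sketch of that idea follows (the box coordinates are hypothetical, and production simulators refine this with much finer mesh-level geometry):

```python
# Coarse collision pre-check between two devices, as a 3-D
# axis-aligned bounding-box (AABB) overlap test. Real process
# simulators refine this with exact mesh-level queries; the
# box values below are hypothetical.

from typing import NamedTuple

class AABB(NamedTuple):
    min_xyz: tuple[float, float, float]
    max_xyz: tuple[float, float, float]

def overlaps(a: AABB, b: AABB) -> bool:
    """Boxes collide iff their extents overlap on every axis."""
    return all(a.min_xyz[i] <= b.max_xyz[i] and b.min_xyz[i] <= a.max_xyz[i]
               for i in range(3))

tool = AABB((0.0, 0.0, 0.0), (2.0, 1.0, 3.0))
tank_segment = AABB((1.5, 0.5, 2.5), (4.0, 2.0, 6.0))
print(overlaps(tool, tank_segment))  # True: flag for closer inspection
```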

  10. THE DURABILITY OF LARGE-SCALE ADDITIVE MANUFACTURING COMPOSITE MOLDS

    SciTech Connect

    Post, Brian K; Love, Lonnie J; Duty, Chad; Vaidya, Uday; Pipes, R. Byron; Kunc, Vlastimil

    2016-01-01

    Oak Ridge National Laboratory's Big Area Additive Manufacturing (BAAM) technology permits the rapid production of thermoplastic composite molds using a carbon fiber filled Acrylonitrile-Butadiene-Styrene (ABS) thermoplastic. Demonstration tools (0.965 m X 0.559 m X 0.152 m) for composite part fabrication have been printed, coated, and finished with a traditional tooling gel. We present validation results demonstrating the stability of thermoplastic printed molds for room temperature Vacuum Assisted Resin Transfer Molding (VARTM) processes. Arkema's Elium thermoplastic resin was investigated with a variety of reinforcement materials. Experimental results include dimensional characterization of the tool surface using a laser scanning technique following demolding of 10 parts. Thermoplastic composite molds offer rapid production compared to traditionally built thermoset molds in that near-net deposition allows direct digital production of the net geometry at a production rate of 45 kg/hr.

  11. Active assembly for large-scale manufacturing of integrated nanostructures.

    SciTech Connect

    Spoerke, Erik David; Bunker, Bruce Conrad; Orendorff, Christopher J.; Bachand, George David; Hendricks, Judy K.; Matzke, Carolyn M.

    2007-01-01

    Microtubules and motor proteins are protein-based biological agents that work cooperatively to facilitate the organization and transport of nanomaterials within living organisms. This report describes the application of these biological agents as tools in a novel, interdisciplinary scheme for assembling integrated nanostructures. Specifically, selective chemistries were used to direct the favorable adsorption of active motor proteins onto lithographically-defined gold electrodes. Taking advantage of the specific affinity these motor proteins have for microtubules, the motor proteins were used to capture polymerized microtubules out of suspension to form dense patterns of microtubules and microtubule bridges between gold electrodes. These microtubules were then used as biofunctionalized templates to direct the organization of functionalized nanocargo including single-walled carbon nanotubes and gold nanoparticles. This biologically-mediated scheme for nanomaterials assembly has shown excellent promise as a foundation for developing new biohybrid approaches to nanoscale manufacturing.

  12. Case Study: Commercialization of sweet sorghum juice clarification for large-scale syrup manufacture

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The precipitation and burning of insoluble granules of starch from sweet sorghum juice on heating coils prevented the large scale manufacture of syrup at a new industrial plant in Missouri, USA. To remove insoluble starch granules, a series of small and large-scale experiments were conducted at the...

  13. Large-Scale Advanced Prop-Fan (LAP) blade design

    NASA Technical Reports Server (NTRS)

    Violette, John A.; Sullivan, William E.; Turnberg, Jay E.

    1984-01-01

    This report covers the design analysis of a very thin, highly swept, propeller blade to be used in the Large-Scale Advanced Prop-Fan (LAP) test program. The report includes: design requirements and goals, a description of the blade configuration which meets requirements, a description of the analytical methods utilized/developed to demonstrate compliance with the requirements, and the results of these analyses. The methods described include: finite element modeling, predicted aerodynamic loads and their application to the blade, steady state and vibratory response analyses, blade resonant frequencies and mode shapes, bird impact analysis, and predictions of stalled and unstalled flutter phenomena. Summarized results include deflections, retention loads, stress/strength comparisons, foreign object damage resistance, resonant frequencies and critical speed margins, resonant vibratory mode shapes, calculated boundaries of stalled and unstalled flutter, and aerodynamic and acoustic performance calculations.

  14. Advanced Manufacturing

    DTIC Science & Technology

    2002-01-01

    manufacturing will enable the mass customization of products and create new market opportunities in the commercial sector. Flexible manufacturing ... the mass customization of products and create new market opportunities in the commercial sector. One of the most promising flexible manufacturing ... manufacturing, increase efficiency and productivity. Research in leading edge technologies continues to promise exciting new manufacturing methods

  15. Current situation of the development and manufacture of very large scale integrated devices in China

    NASA Astrophysics Data System (ADS)

    Yubiao, He

    1988-06-01

    The manufacture of Large Scale Integration (LSI) and Very Large Scale Integration (VLSI) devices in foreign countries is a highly competitive high-tech industry. It requires high-precision manufacturing technology and very expensive manufacturing equipment. Therefore, it is impossible to conduct research and build industrial production capability by relying merely on obsolete manufacturing equipment and semi-manual production techniques. Drawing on the experience of our foreign counterparts and based on our current situation, it is highly desirable for domestic LSI and VLSI research institutes and manufacturers to establish unified development-manufacturing units, concentrate resources, amass available funds to upgrade equipment and technology, improve management, conduct theoretical research, and develop new technology and new devices under unified planning and assigned responsibilities. Only in this way can we reduce the gap between domestic and foreign VLSI device industries and promote our microelectronic industry. This should be the trend for the development of the microelectronic industry in China.

  16. Large-scale Advanced Prop-fan (LAP) technology assessment report

    NASA Technical Reports Server (NTRS)

    Degeorge, C. L.

    1988-01-01

    The technologically significant findings and accomplishments of the Large Scale Advanced Prop-Fan (LAP) program in the areas of aerodynamics, aeroelasticity, acoustics and materials and fabrication are described. The extent to which the program goals related to these disciplines were achieved is discussed, and recommendations for additional research are presented. The LAP program consisted of the design, manufacture and testing of a near full-scale Prop-Fan or advanced turboprop capable of operating efficiently at speeds to Mach .8. An aeroelastically scaled model of the LAP was also designed and fabricated. The goal of the program was to acquire data on Prop-Fan performance that would indicate the technology readiness of Prop-Fans for practical applications in commercial and military aviation.

  17. Advanced Manufacturing Technologies

    NASA Technical Reports Server (NTRS)

    Fikes, John

    2016-01-01

    Advanced Manufacturing Technologies (AMT) is developing and maturing innovative and advanced manufacturing technologies that will enable more capable and lower-cost spacecraft, launch vehicles and infrastructure to enable exploration missions. The technologies will utilize cutting edge materials and emerging capabilities including metallic processes, additive manufacturing, composites, and digital manufacturing. The AMT project supports the National Manufacturing Initiative involving collaboration with other government agencies.

  18. Manufacture of large-scale lightweight SiC mirror for space

    NASA Astrophysics Data System (ADS)

    Huang, Zhengren; Liu, Guiling; Liu, Xuejian; Chen, Zhongming; Jiang, Dongliang

    2012-01-01

    Large-scale lightweight silicon carbide (SiC) mirrors were manufactured for space applications. Sintered SiC (SSiC) ceramic was adopted as the mirror material. A complex structure designed for a high degree of weight reduction and for installation requirements was near-net-shape formed on the SiC green body by digital machining before the high-temperature sintering step. The dimensional accuracy of the thin ribs and faceplate can be controlled to better than 99.5%. During sintering, the temperature distribution throughout the furnace was kept uniform enough to avoid residual stress and deformation. Isotropic shrinkage occurs during densification from green body to ceramic, with a fluctuation of less than 0.3%, which also sets the dimensional error of the final size. A mirror surface with low roughness, high shape accuracy and high reflectivity was finished by polishing and plating. Moreover, the large-scale lightweight SSiC mirror was demonstrated to be suitable for space use by tests simulating launch conditions and space environments.
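
    The reported isotropic shrinkage implies a simple oversizing rule for the green body; a sketch follows, with a nominal shrinkage value assumed for illustration (the abstract quotes only the 0.3% fluctuation):

```python
# Green-body oversizing for an isotropically shrinking SiC sintering
# step. The 15% nominal linear shrinkage is an assumed illustrative
# value; the 0.3% fluctuation is the figure quoted in the abstract.

NOMINAL_SHRINKAGE = 0.15      # assumed linear shrinkage, green -> sintered
SHRINKAGE_SPREAD = 0.003      # +/- fluctuation quoted above

def green_length(final_mm: float) -> float:
    """Green dimension that sinters down to the target final dimension."""
    return final_mm / (1.0 - NOMINAL_SHRINKAGE)

def final_tolerance(final_mm: float) -> float:
    """Dimensional error induced by the shrinkage fluctuation alone."""
    return green_length(final_mm) * SHRINKAGE_SPREAD

L = 1000.0  # mm, e.g. a 1 m mirror rib
print(f"green: {green_length(L):.1f} mm, tolerance: +/-{final_tolerance(L):.2f} mm")
# -> green: 1176.5 mm, tolerance: +/-3.53 mm
```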

  19. Large-scale photonic integration for advanced all-optical routing functions

    NASA Astrophysics Data System (ADS)

    Nicholes, Steven C.

    Advanced InP-based photonic integrated circuits are a critical technology to manage the increasing bandwidth demands of next-generation all-optical networks. Integrating many of the discrete functions required in optical networks into a single device provides a reduction in system footprint and optical losses by eliminating the fiber coupling junctions between components. This translates directly into increased system reliability and cost savings. Although many key network components have been realized via InP-based monolithic integration over the years, truly large-scale photonic ICs have only recently emerged in the marketplace. This lag time has been mostly due to historically low device yields. In all-optical routing applications, large-scale photonic ICs may be able to address two of the key roadblocks associated with scaling modern electronic routers to higher capacities, namely power and size. If the functions of dynamic wavelength conversion and routing are moved to the optical layer, we can eliminate the need for power-hungry optical-to-electrical (O/E) and electrical-to-optical (E/O) data conversions at each router node. Additionally, large-scale photonic ICs could reduce the footprint of such a system by combining the similar functions of each port onto a single chip. However, robust design and manufacturing techniques that will enable high-yield production of these chips must be developed. In this work, we demonstrate a monolithic tunable optical router (MOTOR) chip consisting of an array of eight 40-Gbps wavelength converters and a passive arrayed-waveguide grating router that functions as the packet-forwarding switch fabric of an all-optical router. The device represents one of the most complex InP photonic ICs ever reported, with more than 200 integrated functional elements in a single chip. Single-channel 40 Gbps wavelength conversion and channel switching using 2^31-1 PRBS data showed a power penalty as low as 4.5 dB with less than 2 W drive power.

  20. Spraying Techniques for Large Scale Manufacturing of PEM-FC Electrodes

    NASA Astrophysics Data System (ADS)

    Hoffman, Casey J.

    Fuel cells are highly efficient energy conversion devices that represent one part of the solution to the world's current energy crisis in the midst of global climate change. When supplied with the necessary reactant gases, fuel cells produce only electricity, heat, and water. The fuel used, namely hydrogen, is available from many sources including natural gas and the electrolysis of water. If the electricity for electrolysis is generated by renewable energy (e.g., solar and wind power), fuel cells represent a completely 'green' method of producing electricity. The thought of being able to produce electricity to power homes, vehicles, and other portable or stationary equipment with essentially zero environmentally harmful emissions has been driving academic and industrial fuel cell research and development with the goal of successfully commercializing this technology. Unfortunately, fuel cells cannot achieve any appreciable market penetration at their current costs. The author's hypothesis is that the development of automated, non-contact deposition methods for electrode manufacturing will improve performance and process flexibility, thereby helping to accelerate the commercialization of PEMFC technology. The overarching motivation for this research was to lower the cost of manufacturing fuel cell electrodes and bring the technology one step closer to commercial viability. The author has proven this hypothesis through a detailed study of two non-contact spraying methods. These scalable deposition systems were incorporated into an automated electrode manufacturing system that was designed and built by the author for this research. The electrode manufacturing techniques developed by the author have been shown to produce electrodes that outperform a common lab-scale contact method that was studied as a baseline, as well as several commercially available electrodes. In addition, these scalable, large scale electrode manufacturing processes developed by the author are

  1. Drivers and barriers to e-invoicing adoption in Greek large scale manufacturing industries

    SciTech Connect

    Marinagi, Catherine; Trivellas, Panagiotis; Reklitis, Panagiotis; Skourlas, Christos

    2015-02-09

    This paper attempts to investigate the drivers and barriers that large-scale Greek manufacturing industries experience in adopting electronic invoices (e-invoices), based on three case studies of organizations with an international presence in many countries. The study focuses on the drivers that may increase the adoption and use of e-invoicing, including customers' demand for e-invoices and sufficient know-how and adoption of e-invoicing within organizations. In addition, the study reveals important barriers that prevent the expansion of e-invoicing, such as suppliers' reluctance to implement e-invoicing and IT infrastructure incompatibilities. Other issues examined by this study include the observed benefits of e-invoicing implementation and the financial priorities of the organizations assumed to be supported by e-invoicing.

  2. Drivers and barriers to e-invoicing adoption in Greek large scale manufacturing industries

    NASA Astrophysics Data System (ADS)

    Marinagi, Catherine; Trivellas, Panagiotis; Reklitis, Panagiotis; Skourlas, Christos

    2015-02-01

    This paper attempts to investigate the drivers and barriers that large-scale Greek manufacturing industries experience in adopting electronic invoices (e-invoices), based on three case studies of organizations with an international presence in many countries. The study focuses on the drivers that may increase the adoption and use of e-invoicing, including customers' demand for e-invoices and sufficient know-how and adoption of e-invoicing within organizations. In addition, the study reveals important barriers that prevent the expansion of e-invoicing, such as suppliers' reluctance to implement e-invoicing and IT infrastructure incompatibilities. Other issues examined by this study include the observed benefits of e-invoicing implementation and the financial priorities of the organizations assumed to be supported by e-invoicing.

  3. Advanced Connectivity Analysis (ACA): a Large Scale Functional Connectivity Data Mining Environment.

    PubMed

    Chen, Rong; Nixon, Erika; Herskovits, Edward

    2016-04-01

    Using resting-state functional magnetic resonance imaging (rs-fMRI) to study functional connectivity is of great importance to understand normal development and function as well as a host of neurological and psychiatric disorders. Seed-based analysis is one of the most widely used rs-fMRI analysis methods. Here we describe a freely available large scale functional connectivity data mining software package called Advanced Connectivity Analysis (ACA). ACA enables large-scale seed-based analysis and brain-behavior analysis. It can seamlessly examine a large number of seed regions with minimal user input. ACA has a brain-behavior analysis component to delineate associations among imaging biomarkers and one or more behavioral variables. We demonstrate applications of ACA to rs-fMRI data sets from a study of autism.
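
    Seed-based analysis of the kind ACA automates boils down to correlating a seed region's mean time series with every voxel. A minimal sketch follows; the array shapes and seed mask are hypothetical, and ACA's actual pipeline adds preprocessing, statistics, and multi-seed batching.

```python
# Minimal seed-based functional connectivity map: correlate the mean
# time series of a seed region with every voxel's time series.
# Data shapes and the seed mask are hypothetical.

import numpy as np

def seed_connectivity(bold: np.ndarray, seed_mask: np.ndarray) -> np.ndarray:
    """bold: (voxels, timepoints) array; seed_mask: boolean (voxels,).
    Returns Pearson r between the seed-mean signal and each voxel."""
    seed_ts = bold[seed_mask].mean(axis=0)
    bold_c = bold - bold.mean(axis=1, keepdims=True)
    seed_c = seed_ts - seed_ts.mean()
    num = bold_c @ seed_c
    den = np.sqrt((bold_c ** 2).sum(axis=1) * (seed_c ** 2).sum())
    return num / den

rng = np.random.default_rng(0)
bold = rng.standard_normal((5000, 200))   # 5000 voxels, 200 volumes
mask = np.zeros(5000, dtype=bool)
mask[:50] = True                          # hypothetical seed region
r_map = seed_connectivity(bold, mask)
print(r_map.shape)  # (5000,) -> one correlation per voxel
```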

  4. Challenges and advances in large-scale DFT calculations on GPUs

    NASA Astrophysics Data System (ADS)

    Kulik, Heather

    2014-03-01

    Recent advances in reformulating electronic structure algorithms for stream processors such as graphical processing units have made DFT calculations on systems comprising up to O(10^3) atoms feasible. Simulations on such systems that previously required half a week on traditional processors can now be completed in only half an hour. Here, we leverage these GPU-accelerated quantum chemistry methods to investigate large-scale quantum mechanical features in protein structure, mechanochemical depolymerization, and the nucleation and growth of heterogeneous nanoparticle structures. In each case, large-scale and rapid evaluation of electronic structure properties is critical for unearthing previously poorly understood properties and mechanistic features of these systems. We will also discuss outstanding challenges in the use of Gaussian localized-basis-set codes on GPUs pertaining to limitations in basis set size and how we circumvent such challenges to computational efficiency with systematic, physics-based error corrections to basis set incompleteness.
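
    For concreteness, the quoted wall-clock times imply a speedup of roughly 168x (taking "half a week" as 3.5 days):

```python
# Speedup implied by the quoted wall-clock times: half a week on
# traditional processors versus half an hour on GPUs.
cpu_hours = 3.5 * 24   # half a week
gpu_hours = 0.5        # half an hour
print(f"speedup ~ {cpu_hours / gpu_hours:.0f}x")  # -> speedup ~ 168x
```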

  5. Advances in Parallelization for Large Scale Oct-Tree Mesh Generation

    NASA Technical Reports Server (NTRS)

    O'Connell, Matthew; Karman, Steve L.

    2015-01-01

    Despite great advancements in the parallelization of numerical simulation codes over the last 20 years, it is still common to perform grid generation in serial. Generating large scale grids in serial often requires using special "grid generation" compute machines that can have more than ten times the memory of average machines. While some parallel mesh generation techniques have been proposed, generating very large meshes for LES or aeroacoustic simulations is still a challenging problem. An automated method for the parallel generation of very large scale off-body hierarchical meshes is presented here. This work enables large scale parallel generation of off-body meshes by using a novel combination of parallel grid generation techniques and a hybrid "top down" and "bottom up" oct-tree method. Meshes are generated using hardware commonly found in parallel compute clusters. The capability to generate very large meshes is demonstrated by the generation of off-body meshes surrounding complex aerospace geometries. Results are shown including a one billion cell mesh generated around a Predator Unmanned Aerial Vehicle geometry, which was generated on 64 processors in under 45 minutes.
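
    A minimal sketch of the "top down" half of such a method: recursive octant subdivision driven by a refinement predicate. The distance-based predicate below is a hypothetical stand-in for the paper's off-body sizing rules, and no parallelism is shown.

```python
# Minimal top-down oct-tree refinement: recursively split a cubic
# cell into eight octants wherever a refinement predicate fires.

from dataclasses import dataclass, field

@dataclass
class Cell:
    center: tuple[float, float, float]
    half: float                      # half-width of the cubic cell
    children: list = field(default_factory=list)

def refine(cell: Cell, needs_split, max_depth: int, depth: int = 0) -> None:
    if depth >= max_depth or not needs_split(cell):
        return
    h = cell.half / 2.0
    cx, cy, cz = cell.center
    for dx in (-h, h):
        for dy in (-h, h):
            for dz in (-h, h):
                child = Cell((cx + dx, cy + dy, cz + dz), h)
                refine(child, needs_split, max_depth, depth + 1)
                cell.children.append(child)

# Hypothetical sizing rule: refine toward the origin, mimicking
# cell clustering near a body.
near_body = lambda c: sum(x * x for x in c.center) ** 0.5 < 2.0 * c.half

root = Cell((0.0, 0.0, 0.0), 8.0)
refine(root, near_body, max_depth=4)

def count(c: Cell) -> int:
    return 1 + sum(count(ch) for ch in c.children)
print(count(root), "cells")
```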

  6. Novel method to construct large-scale design space in lubrication process utilizing Bayesian estimation based on a small-scale design-of-experiment and small sets of large-scale manufacturing data.

    PubMed

    Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo

    2012-12-01

    A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X(1)) and blending times (X(2)) in the lubricant blending process for theophylline tablets. The response surfaces, design space, and their reliability of the compression rate of the powder mixture (Y(1)), tablet hardness (Y(2)), and dissolution rate (Y(3)) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. The constant Froude number was applied as a scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on a large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in the pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
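
    The scale-up rule quoted above, a constant Froude number, fixes the large-scale blender speed once the small-scale speed and both diameters are known; a short sketch with hypothetical dimensions (the paper reports only the rule itself):

```python
# Constant-Froude scale-up rule: Fr = N^2 * D / g is held equal
# between scales, so N_large = N_small * sqrt(D_small / D_large).
# The blender diameters and small-scale speed are hypothetical.

G = 9.81  # m/s^2

def froude(n_rps: float, d_m: float) -> float:
    return n_rps ** 2 * d_m / G

def scale_up_speed(n_small_rps: float, d_small_m: float, d_large_m: float) -> float:
    """Large-scale rotation speed preserving the Froude number."""
    return n_small_rps * (d_small_m / d_large_m) ** 0.5

n_small, d_small, d_large = 0.5, 0.3, 1.2   # rev/s, m, m (hypothetical)
n_large = scale_up_speed(n_small, d_small, d_large)
print(f"N_large = {n_large:.3f} rev/s")
print(froude(n_small, d_small), froude(n_large, d_large))  # equal Fr
```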

  7. Large-Scale Advanced Prop-Fan (LAP) pitch change actuator and control design report

    NASA Technical Reports Server (NTRS)

    Schwartz, R. A.; Carvalho, P.; Cutler, M. J.

    1986-01-01

    In recent years, considerable attention has been directed toward reducing aircraft fuel consumption. Studies have shown that the high inherent efficiency previously demonstrated by low speed turboprop propulsion systems may now be extended to today's higher speed aircraft if advanced high-speed propeller blades having thin airfoils and aerodynamic sweep are utilized. Hamilton Standard has designed a 9-foot diameter single-rotation Large-Scale Advanced Prop-Fan (LAP) which will be tested on a static test stand, in a high speed wind tunnel and on a research aircraft. The major objective of this testing is to establish the structural integrity of large-scale Prop-Fans of advanced construction, in addition to the evaluation of aerodynamic performance and aeroacoustic design. This report describes the operation, design features and actual hardware of the LAP pitch control system. The pitch control system, which controls blade angle and propeller speed, consists of two separate assemblies. The first is the control unit which provides the hydraulic supply, speed governing and feather function for the system. The second unit is the hydro-mechanical pitch change actuator which directly changes blade angle (pitch) as scheduled by the control.

  8. Microbial advanced biofuels production: overcoming emulsification challenges for large-scale operation.

    PubMed

    Heeres, Arjan S; Picone, Carolina S F; van der Wielen, Luuk A M; Cunha, Rosiane L; Cuellar, Maria C

    2014-04-01

    Isoprenoids and alkanes produced and secreted by microorganisms are emerging as an alternative biofuel for diesel and jet fuel replacements. In a similar way as for other bioprocesses comprising an organic liquid phase, the presence of microorganisms, medium composition, and process conditions may result in emulsion formation during fermentation, hindering product recovery. At the same time, a low-cost production process overcoming this challenge is required to make these advanced biofuels a feasible alternative. We review the main mechanisms and causes of emulsion formation during fermentation, because a better understanding on the microscale can give insights into how to improve large-scale processes and the process technology options that can address these challenges.

  9. System design and integration of the large-scale advanced prop-fan

    NASA Technical Reports Server (NTRS)

    Huth, B. P.

    1986-01-01

    In recent years, considerable attention has been directed toward reducing aircraft fuel consumption. Studies have shown that blades with thin airfoils and aerodynamic sweep extend the inherent efficiency advantage that turboprop propulsion systems have demonstrated at low speeds to the higher speeds of today's aircraft. Hamilton Standard has designed a 9-foot diameter single-rotation Prop-Fan and will test the hardware on a static test stand, in low speed and high speed wind tunnels, and on a research aircraft. The major objective of this testing is to establish the structural integrity of large scale Prop-Fans of advanced construction, in addition to evaluating aerodynamic performance and the aeroacoustic design. The coordination efforts performed to ensure smooth operation and assembly of the Prop-Fan are summarized. A summary of the loads used to size the system components, the methodology used to establish material allowables, and a review of the key analytical results are given.

  10. Large-scale Advanced Prop-fan (LAP) static rotor test report

    NASA Technical Reports Server (NTRS)

    Degeorge, Charles L.; Turnberg, Jay E.; Wainauski, Harry S.

    1987-01-01

    Discussed is static rotor testing of the SR-7L Large Scale Advanced Prop-Fan (LAP). The LAP is an advanced 9 foot diameter, 8 bladed propeller designed and built by Hamilton Standard under contract to the NASA Lewis Research Center. The Prop-Fan employs thin swept blades to provide efficient propulsion at flight speeds up to Mach .85. Static testing was conducted on a 10,000 HP whirl rig at Wright Patterson Air Force Base. The test objectives were to investigate the Prop-Fan's static aerodynamic and structural dynamic performance, determine the blade steady-state stresses and deflections, and measure steady and unsteady pressures on the SR-7L blade surface. The measured performance of the LAP correlated well with analytical predictions at blade pitch angles below 30 deg. A stall buffet phenomenon was observed at blade pitch angles above 30 deg. This phenomenon manifested itself in elevated blade vibratory stress levels and lower than expected thrust produced and power absorbed by the Prop-Fan for a given speed and blade angle.

  11. Towards the preparative and large-scale precision manufacture of virus-like particles.

    PubMed

    Pattenden, Leonard K; Middelberg, Anton P J; Niebert, Marcus; Lipin, Daniel I

    2005-10-01

    Virus-like particles (VLPs) are of interest in vaccination, gene therapy and drug delivery, but their potential has yet to be fully realized. This is because existing laboratory processes, when scaled, do not easily give a compositionally and architecturally consistent product. Research suggests that new process routes might ultimately be based on chemical processing by self-assembly, involving the precision manufacture of precursor capsomeres followed by in vitro VLP self-assembly and scale-up to required levels. A synergistic interaction of biomolecular design and bioprocess engineering (i.e. biomolecular engineering) is required if these alternative process routes and, thus, the promise of new VLP products, are to be realized.

  12. Large-scale Advanced Prop-fan (LAP) high speed wind tunnel test report

    NASA Technical Reports Server (NTRS)

    Campbell, William A.; Wainauski, Harold S.; Arseneaux, Peter J.

    1988-01-01

    High speed wind tunnel testing of the SR-7L Large Scale Advanced Prop-Fan (LAP) is reported. The LAP is a 2.74 meter (9.0 ft) diameter, 8-bladed tractor type rated for 4475 kW (6000 SHP) at 1698 rpm. It was designed and built by Hamilton Standard under contract to the NASA Lewis Research Center. The LAP employs thin swept blades to provide efficient propulsion at flight speeds up to Mach .85. Testing was conducted in the ONERA S1-MA Atmospheric Wind Tunnel in Modane, France. The test objectives were to confirm that the LAP is free from high speed classical flutter, determine the structural and aerodynamic response to angular inflow, measure blade surface pressures (static and dynamic) and evaluate the aerodynamic performance at various blade angles, rotational speeds and Mach numbers. The measured structural and aerodynamic performance of the LAP correlated well with analytical predictions, thereby providing confidence in the computer prediction codes used for the design. There were no signs of classical flutter throughout all phases of the test up to and including the 0.84 maximum Mach number achieved. Steady and unsteady blade surface pressures were successfully measured for a wide range of Mach numbers, inflow angles, rotational speeds and blade angles. No barriers were discovered that would prevent proceeding with the PTA (Prop-Fan Test Assessment) Flight Test Program scheduled for early 1987.

  13. Advanced manufacturing: Technology diffusion

    SciTech Connect

    Tesar, A.

    1995-12-01

    In this paper we examine how manufacturing technology diffuses from the developers of technology across national borders to those who do not have the capability or resources to develop advanced technology on their own. None of the wide variety of technology diffusion mechanisms discussed in this paper are new, yet the opportunities to apply these mechanisms are growing. A dramatic increase in technology diffusion occurred over the last decade. The two major trends which probably drive this increase are a worldwide inclination towards "freer" markets and diminishing isolation. Technology is most rapidly diffusing from the US; in fact, the US is supplying technology for the rest of the world. The value of the technology supplied by the US more than doubled from 1985 to 1992 (see the Introduction for details). History shows us that technology diffusion is inevitable; it is the rates at which technologies diffuse to other countries that can vary considerably. Manufacturers in these countries are increasingly able to absorb technology. Their manufacturing efficiency is expected to progress as technology becomes increasingly available and utilized.

  14. Advanced Manufacture of Reflectors

    SciTech Connect

    Angel, Roger

    2014-12-17

    The main project objective has been to develop an advanced gravity sag method for molding large glass solar reflectors with either line or point focus, and with long or short focal length. The method involves taking standard sized squares of glass, 1.65 m x 1.65 m, and shaping them by gravity sag into precision steel molds. The method is designed for high volume manufacture when incorporated into a production line with separate pre-heating and cooling. The performance objectives for the self-supporting glass mirrors made by this project include mirror optical accuracy of 2 mrad root mean square (RMS), requiring surface slope errors less than 1 mrad rms, a target not met by current production of solar reflectors. Our objective also included development of new methods for rapidly shaping glass mirrors and coating them for higher reflectivity and soil resistance. Reflectivity of 95% for a glass mirror with anti-soil coating was targeted, compared to the present ~94% with no anti-soil coating. Our mirror cost objective is ~$20/m2 in 2020, a significant reduction compared to the present ~$35/m2 for solar trough mirrors produced for trough solar plants.
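
    The two accuracy figures quoted above are consistent because a reflected ray deviates by twice the local surface slope error; stated as a one-line check:

```python
# On reflection, a ray deviates by twice the local surface slope
# error, so the 1 mrad rms slope spec above implies the stated
# 2 mrad rms optical accuracy target.
slope_rms_mrad = 1.0
beam_rms_mrad = 2.0 * slope_rms_mrad
print(beam_rms_mrad)  # -> 2.0 mrad rms reflected-beam error
```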

  15. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    SciTech Connect

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed toward research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed by incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods and algorithms developed will, however, be of wider interest.
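
    A minimal serial sketch of one scheme in the family described above, a Jacobi-preconditioned conjugate gradient solve; the parallel variants distribute the same matrix-vector and dot products across nodes, and the tridiagonal test matrix here is a toy stand-in:

```python
# Minimal Jacobi-preconditioned conjugate gradient (PCG) solver, the
# serial kernel behind the parallel iterative schemes described above.
# The 1-D Laplacian test matrix is a toy SPD case.

import numpy as np

def pcg(A, b, tol=1e-10, max_iter=500):
    x = np.zeros_like(b)
    r = b - A @ x
    m_inv = 1.0 / np.diag(A)          # Jacobi preconditioner
    z = m_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = m_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # SPD Laplacian
b = np.ones(n)
x = pcg(A, b)
print(np.linalg.norm(A @ x - b))  # ~0: converged solution
```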

  16. Advanced Computing for Manufacturing.

    ERIC Educational Resources Information Center

    Erisman, Albert M.; Neves, Kenneth W.

    1987-01-01

    Discusses ways that supercomputers are being used in the manufacturing industry, including the design and production of airplanes and automobiles. Describes problems that need to be solved in the next few years for supercomputers to assume a major role in industry. (TW)

  17. Advances in Additive Manufacturing

    DTIC Science & Technology

    2016-07-14

    casting molds for traditional casting processes on the battlefield, and 3) the use of recycled polymeric materials as feedstock for 3-D printers ... nondestructive characterization technique allows for 3D imaging that readily captures defects and voids on the condition that the attenuation, which is ... of 3D-printed structures. Analysis examples will include quantification of tolerance differences between the designed and manufactured parts, void

  18. Development of Large Scale Advanced NI-CD Batteries Employing Roll-Bonded Electrodes

    DTIC Science & Technology

    1994-10-17

    LONG-TERM TESTING ... TASK 3 - FABRICATION OF SIX ADVANCED 2000-Ah CELLS ... METHOD OF TEST: A continuous method of testing was used following the formation procedure.

  19. An Online Scheduling Algorithm with Advance Reservation for Large-Scale Data Transfers

    SciTech Connect

    Balman, Mehmet; Kosar, Tevfik

    2010-05-20

    Scientific applications and experimental facilities generate massive data sets that need to be transferred to remote collaborating sites for sharing, processing, and long term storage. In order to support increasingly data-intensive science, next generation research networks have been deployed to provide high-speed on-demand data access between collaborating institutions. In this paper, we present a practical model for online data scheduling in which data movement operations are scheduled in advance for end-to-end high performance transfers. In our model, the data scheduler interacts with reservation managers and data transfer nodes in order to reserve available bandwidth to guarantee completion of jobs that are accepted and confirmed to satisfy the preferred time constraint given by the user. Our methodology improves current systems by allowing researchers and higher level meta-schedulers to use data placement as a service where they can plan ahead and reserve scheduler time in advance for their data movement operations. We have implemented our algorithm and examined possible techniques for incorporation into current reservation frameworks. Performance measurements confirm that the proposed algorithm is efficient and scalable.
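
    The admission logic the abstract describes, reserving bandwidth in advance so a confirmed job finishes within the user's time constraint, can be sketched minimally as a per-slot capacity check; slot granularity, capacities, and request values below are hypothetical, not the paper's actual algorithm:

```python
# Minimal advance-reservation admission check: a link's capacity is
# tracked per time slot, and a transfer is admitted at the earliest
# start that finishes before the user's deadline.

CAPACITY = 10.0  # Gbps per slot (hypothetical)

def admit(reserved: list[float], need_gbps: float,
          slots_needed: int, deadline_slot: int):
    """Return earliest start slot, or None if the deadline can't be met."""
    for start in range(0, deadline_slot - slots_needed + 1):
        window = reserved[start:start + slots_needed]
        if all(CAPACITY - used >= need_gbps for used in window):
            for s in range(start, start + slots_needed):
                reserved[s] += need_gbps   # confirm the reservation
            return start
    return None

link = [0.0] * 24                 # 24 slots, initially unreserved
print(admit(link, 6.0, 4, 24))    # -> 0 (fits immediately)
print(admit(link, 6.0, 4, 24))    # -> 4 (first free window after slot 3)
```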

  20. 'Oorja' in India: Assessing a large-scale commercial distribution of advanced biomass stoves to households.

    PubMed

    Thurber, Mark C; Phadke, Himani; Nagavarapu, Sriniketh; Shrimali, Gireesh; Zerriffi, Hisham

    2014-04-01

    Replacing traditional stoves with advanced alternatives that burn more cleanly has the potential to ameliorate major health problems associated with indoor air pollution in developing countries. With a few exceptions, large government and charitable programs to distribute advanced stoves have not had the desired impact. Commercially-based distributions that seek cost recovery and even profits might plausibly do better, both because they encourage distributors to supply and promote products that people want and because they are based around properly-incentivized supply chains that could be more scalable, sustainable, and replicable. The sale in India of over 400,000 "Oorja" stoves to households from 2006 onwards represents the largest commercially-based distribution of a gasification-type advanced biomass stove. BP's Emerging Consumer Markets (ECM) division and then its successor company First Energy sold this stove and the pelletized biomass fuel on which it operates. We assess the success of this effort and the role its commercial aspect played in outcomes using a survey of 998 households in areas of Maharashtra and Karnataka where the stove was sold as well as detailed interviews with BP and First Energy staff. Statistical models based on this data indicate that Oorja purchase rates were significantly influenced by the intensity of Oorja marketing in a region as well as by the pre-existing stove mix among households. The highest rate of adoption came from LPG-using households for which Oorja's pelletized biomass fuel reduced costs. Smoke- and health-related messages from Oorja marketing did not significantly influence the purchase decision, although they did appear to affect household perceptions about smoke. By the time of our survey, only 9% of households that purchased Oorja were still using the stove, the result in large part of difficulties First Energy encountered in developing a viable supply chain around low-cost procurement of "agricultural waste" to make

  1. Unsteady blade-surface pressures on a large-scale advanced propeller: Prediction and data

    NASA Technical Reports Server (NTRS)

    Nallasamy, M.; Groeneweg, J. F.

    1990-01-01

    An unsteady 3-D Euler analysis technique is employed to compute the flow field of an advanced propeller operating at an angle of attack. The predicted blade pressure waveforms are compared with wind tunnel data at two Mach numbers, 0.5 and 0.2. The inflow angle is three degrees. For an inflow Mach number of 0.5, the predicted pressure response is in fair agreement with data: the predicted phases of the waveforms are in close agreement with data while the magnitudes are underpredicted. At the low Mach number of 0.2 (takeoff), the numerical solution shows the formation of a leading edge vortex which is in qualitative agreement with measurements. However, the highly nonlinear pressure response measured on the blade suction surface is not captured in the present inviscid analysis.

  2. Unsteady blade surface pressures on a large-scale advanced propeller - Prediction and data

    NASA Technical Reports Server (NTRS)

    Nallasamy, M.; Groeneweg, J. F.

    1990-01-01

    An unsteady three dimensional Euler analysis technique is employed to compute the flowfield of an advanced propeller operating at an angle of attack. The predicted blade pressure waveforms are compared with wind tunnel data at two Mach numbers, 0.5 and 0.2. The inflow angle is three degrees. For an inflow Mach number of 0.5, the predicted pressure response is in fair agreement with data: the predicted phases of the waveforms are in close agreement with data while the magnitudes are underpredicted. At the low Mach number of 0.2 (take-off) the numerical solution shows the formation of a leading edge vortex which is in qualitative agreement with measurements. However, the highly nonlinear pressure response measured on the blade suction surface is not captured in the present inviscid analysis.

  3. Repeated large-scale retreat and advance of Totten Glacier indicated by inland bed erosion.

    PubMed

    Aitken, A R A; Roberts, J L; van Ommen, T D; Young, D A; Golledge, N R; Greenbaum, J S; Blankenship, D D; Siegert, M J

    2016-05-19

    Climate variations cause ice sheets to retreat and advance, raising or lowering sea level by metres to decametres. The basic relationship is unambiguous, but the timing, magnitude and sources of sea-level change remain unclear; in particular, the contribution of the East Antarctic Ice Sheet (EAIS) is ill defined, restricting our appreciation of potential future change. Several lines of evidence suggest possible collapse of the Totten Glacier into interior basins during past warm periods, most notably the Pliocene epoch, causing several metres of sea-level rise. However, the structure and long-term evolution of the ice sheet in this region have been understood insufficiently to constrain past ice-sheet extents. Here we show that deep ice-sheet erosion, enough to expose basement rocks, has occurred in two regions: the head of the Totten Glacier, within 150 kilometres of today's grounding line; and deep within the Sabrina Subglacial Basin, 350-550 kilometres from this grounding line. Our results, based on ICECAP aerogeophysical data, demarcate the marginal zones of two distinct quasi-stable EAIS configurations, corresponding to the 'modern-scale' ice sheet (with a marginal zone near the present ice-sheet margin) and the retreated ice sheet (with the marginal zone located far inland). The transitional region of 200-250 kilometres in width is less eroded, suggesting shorter-lived exposure to eroding conditions during repeated retreat-advance events, which are probably driven by ocean-forced instabilities. Representative ice-sheet models indicate that the global sea-level increase resulting from retreat in this sector can be up to 0.9 metres in the modern-scale configuration, and exceeds 2 metres in the retreated configuration.

  4. Repeated large-scale retreat and advance of Totten Glacier indicated by inland bed erosion

    NASA Astrophysics Data System (ADS)

    Aitken, A. R. A.; Roberts, J. L.; Ommen, T. D. Van; Young, D. A.; Golledge, N. R.; Greenbaum, J. S.; Blankenship, D. D.; Siegert, M. J.

    2016-05-01

    Climate variations cause ice sheets to retreat and advance, raising or lowering sea level by metres to decametres. The basic relationship is unambiguous, but the timing, magnitude and sources of sea-level change remain unclear; in particular, the contribution of the East Antarctic Ice Sheet (EAIS) is ill defined, restricting our appreciation of potential future change. Several lines of evidence suggest possible collapse of the Totten Glacier into interior basins during past warm periods, most notably the Pliocene epoch, causing several metres of sea-level rise. However, the structure and long-term evolution of the ice sheet in this region have been understood insufficiently to constrain past ice-sheet extents. Here we show that deep ice-sheet erosion—enough to expose basement rocks—has occurred in two regions: the head of the Totten Glacier, within 150 kilometres of today’s grounding line; and deep within the Sabrina Subglacial Basin, 350-550 kilometres from this grounding line. Our results, based on ICECAP aerogeophysical data, demarcate the marginal zones of two distinct quasi-stable EAIS configurations, corresponding to the ‘modern-scale’ ice sheet (with a marginal zone near the present ice-sheet margin) and the retreated ice sheet (with the marginal zone located far inland). The transitional region of 200-250 kilometres in width is less eroded, suggesting shorter-lived exposure to eroding conditions during repeated retreat-advance events, which are probably driven by ocean-forced instabilities. Representative ice-sheet models indicate that the global sea-level increase resulting from retreat in this sector can be up to 0.9 metres in the modern-scale configuration, and exceeds 2 metres in the retreated configuration.

  5. Physical control oriented model of large scale refrigerators to synthesize advanced control schemes. Design, validation, and first control results

    NASA Astrophysics Data System (ADS)

    Bonne, François; Alamir, Mazen; Bonnay, Patrick

    2014-01-01

In this paper, a physical method to obtain control-oriented dynamical models of large scale cryogenic refrigerators is proposed, in order to synthesize model-based advanced control schemes. These schemes aim to replace the classical, experience-based approaches usually built on many independent PI controllers. This is particularly useful in the case where cryoplants are submitted to large pulsed thermal loads, expected to take place in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). Advanced control schemes provide better perturbation immunity and rejection, offering a safer utilization of cryoplants. The paper gives details on how basic components used in the field of large scale helium refrigeration (especially those present on the 400W @1.8K helium test facility at CEA-Grenoble) are modeled and assembled to obtain the complete dynamic description of controllable subsystems of the refrigerator (namely the Joule-Thomson Cycle, the Brayton Cycle, the Liquid Nitrogen Precooling Unit and the Warm Compression Station). The complete 400W @1.8K (in the 400W @4.4K configuration) helium test facility model is then validated against experimental data and the optimal control of both the Joule-Thomson valve and the turbine valve is proposed, to stabilize the plant under highly variable thermal loads. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.
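
    As a point of reference for the PI baseline the authors aim to improve on, here is a minimal sketch of a single discrete PI loop holding a hypothetical first-order thermal stage at 4.4 K under a pulsed heat load. The plant constants, gains, and load profile are invented for illustration and are not taken from the paper.

        # Minimal sketch: one PI loop on a first-order thermal stage (invented numbers).
        def simulate_pi(kp=40.0, ki=2.0, dt=1.0, t_end=600.0):
            tau = 60.0              # hypothetical stage time constant [s]
            k_plant = 0.01          # hypothetical heating sensitivity [K/W]
            temp, setpoint = 4.4, 4.4
            integral, log = 0.0, []
            for i in range(int(t_end / dt)):
                t = i * dt
                load = 400.0 + (300.0 if 200.0 <= t < 300.0 else 0.0)   # pulsed load [W]
                error = temp - setpoint
                integral += error * dt
                cooling = max(0.0, 400.0 + kp * error + ki * integral)  # cooling lift [W]
                temp += k_plant * (load - cooling) * dt / tau           # first-order plant
                log.append((t, temp, load, cooling))
            return log

        print(max(t for _, t, _, _ in simulate_pi()))  # peak excursion above 4.4 K

    A model-based scheme would instead anticipate the pulse from the plant model rather than react to the temperature error after the fact.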

  6. Physical control oriented model of large scale refrigerators to synthesize advanced control schemes. Design, validation, and first control results

    SciTech Connect

    Bonne, François; Bonnay, Patrick

    2014-01-29

In this paper, a physical method to obtain control-oriented dynamical models of large scale cryogenic refrigerators is proposed, in order to synthesize model-based advanced control schemes. These schemes aim to replace the classical, experience-based approaches usually built on many independent PI controllers. This is particularly useful in the case where cryoplants are submitted to large pulsed thermal loads, expected to take place in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). Advanced control schemes provide better perturbation immunity and rejection, offering a safer utilization of cryoplants. The paper gives details on how basic components used in the field of large scale helium refrigeration (especially those present on the 400W @1.8K helium test facility at CEA-Grenoble) are modeled and assembled to obtain the complete dynamic description of controllable subsystems of the refrigerator (namely the Joule-Thomson Cycle, the Brayton Cycle, the Liquid Nitrogen Precooling Unit and the Warm Compression Station). The complete 400W @1.8K (in the 400W @4.4K configuration) helium test facility model is then validated against experimental data and the optimal control of both the Joule-Thomson valve and the turbine valve is proposed, to stabilize the plant under highly variable thermal loads. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.

  7. Advances in large-scale ocean dynamics from a decade of satellite altimetric measurement of ocean surface topography

    NASA Astrophysics Data System (ADS)

    Fu, L.; Menard, Y.

The past decade has seen the most intensive observations of the global ocean surface topography from satellite altimeters. The Joint U.S./France TOPEX/Poseidon (T/P) Mission has become the longest radar mission ever flown in space, providing the most accurate measurements for the study of ocean dynamics since October 1992. The European Space Agency's ERS-1 and -2 missions also provided altimetric observations from 1991 to 2000. The combined data from T/P and ERS provide a synergistic description of the global ocean variability with higher resolution and greater coverage than the individual missions. Major advances in large-scale ocean dynamics from these observations will be reviewed in the presentation, including the evolution of the El Niño Southern Oscillation cycles as well as the emerging decadal variability, the various roles of wind forcing in large-scale ocean variability, assimilation of altimeter data by ocean general circulation models, global sea level rise, and internal tides and internal gravity waves.

  8. Large-scale Manufacturing of Nanoparticulate-based Lubrication Additives for Improved Energy Efficiency and Reduced Emissions

    SciTech Connect

    Erdemir, Ali

    2013-09-26

emissions was also a major reason. The transportation sector alone consumes about 13 million barrels of crude oil per day (nearly 60% of which is imported) and is responsible for about 30% of the CO₂ emissions. When we consider manufacturing and other energy-intensive industrial processes, the amount of petroleum being consumed due to friction and wear reaches more than 20 million barrels per day (from official energy statistics, U.S. Energy Information Administration). Frequent remanufacturing and/or replacement of worn parts due to friction-, wear-, and scuffing-related degradations also consumes significant amounts of energy and gives rise to additional CO₂ emissions. Overall, the total annual cost of friction- and wear-related energy and material losses is estimated to be rather significant (i.e., as much as 5% of the gross national products of highly industrialized nations). It is projected that more than half of the total friction- and wear-related energy losses can be recovered by developing and implementing advanced friction and wear control technologies. In transportation vehicles alone, 10% to 15% of the fuel energy is spent to overcome friction. If we can cut down the friction- and wear-related energy losses by half, then we can potentially save up to 1.5 million barrels of petroleum per day. Also, less friction and wear would mean less energy consumption as well as less carbon emissions and hazardous byproducts being generated and released to the environment. New and more robust anti-friction and -wear control technologies may thus have a significant positive impact on improving the efficiency and environmental cleanliness of the current legacy fleet and future transportation systems. Effective control of friction in other industrial sectors such as manufacturing, power generation, mining and oil exploration, and agricultural and earthmoving machinery may bring more energy savings. Therefore, this project was timely and responsive to the energy and
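
    One way to reproduce the quoted savings figure (our back-of-envelope arithmetic, assuming the roughly 15% friction share applies to the full ~20 million barrels per day):

        $$0.15 \times 20\ \text{Mbbl/day} \approx 3\ \text{Mbbl/day lost to friction},\qquad \tfrac{1}{2}\times 3\ \text{Mbbl/day} \approx 1.5\ \text{Mbbl/day potentially recoverable}.$$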

  9. Ohio Advanced Energy Manufacturing Center

    SciTech Connect

    Kimberly Gibson; Mark Norfolk

    2012-07-30

    The program goal of the Ohio Advanced Energy Manufacturing Center (OAEMC) is to support advanced energy manufacturing and to create responsive manufacturing clusters that will support the production of advanced energy and energy-efficient products to help ensure the nation's energy and environmental security. This goal cuts across a number of existing industry segments critical to the nation's future. Many of the advanced energy businesses are starting to make the transition from technology development to commercial production. Historically, this transition from laboratory prototypes through initial production for early adopters to full production for mass markets has taken several years. Developing and implementing manufacturing technology to enable production at a price point the market will accept is a key step. Since these start-up operations are configured to advance the technology readiness of the core energy technology, they have neither the expertise nor the resources to address manufacturing readiness issues they encounter as the technology advances toward market entry. Given the economic realities of today's business environment, finding ways to accelerate this transition can make the difference between success and failure for a new product or business. The advanced energy industry touches a wide range of industry segments that are not accustomed to working together in complex supply chains to serve large markets such as automotive and construction. During its first three years, the Center has catalyzed the communication between companies and industry groups that serve the wide range of advanced energy markets. The Center has also found areas of common concern, and worked to help companies address these concerns on a segment or industry basis rather than having each company work to solve common problems individually. EWI worked with three industries through public-private partnerships to sew together disparate segments helping to promote overall industry

  10. Advanced optical manufacturing digital integrated system

    NASA Astrophysics Data System (ADS)

    Tao, Yizheng; Li, Xinglan; Li, Wei; Tang, Dingyong

    2012-10-01

The development of advanced optical manufacturing technology must keep pace with broader advances in science and technology. The manufacture of large, high-precision optical components suffers from low efficiency, low yield of finished product, and poor repeatability and consistency. Using a business-driven approach and the Rational Unified Process method, this paper studies the advanced optical manufacturing process flow and the requirements of an advanced optical manufacturing integrated system, and puts forward its architecture and key technologies. The optical-component core and the manufacturing-process driver of the Advanced Optical Manufacturing Digital Integrated System are designed. The results show that the system works well: it enables dynamic planning of the manufacturing process, and its information integration improves production yield.

  11. Production of stable bispecific IgG1 by controlled Fab-arm exchange: scalability from bench to large-scale manufacturing by application of standard approaches.

    PubMed

    Gramer, Michael J; van den Bremer, Ewald T J; van Kampen, Muriel D; Kundu, Amitava; Kopfmann, Peter; Etter, Eric; Stinehelfer, David; Long, Justin; Lannom, Tom; Noordergraaf, Esther H; Gerritsen, Jolanda; Labrijn, Aran F; Schuurman, Janine; van Berkel, Patrick H C; Parren, Paul W H I

    2013-01-01

The manufacturing of bispecific antibodies can be challenging for a variety of reasons. For example, protein expression problems, stability issues, or the use of non-standard approaches for manufacturing can result in poor yield or poor facility fit. In this paper, we demonstrate the use of standard antibody platforms for large-scale manufacturing of bispecific IgG1 by controlled Fab-arm exchange. Two parental antibodies that each contain a single matched point mutation in the CH3 region were separately expressed in Chinese hamster ovary cells and manufactured at 1000 L scale using a platform fed-batch and purification process that was designed for standard antibody production. The bispecific antibody was generated by mixing the two parental molecules under controlled reducing conditions, resulting in efficient Fab-arm exchange of >95% at kg scale. The reductant was removed via diafiltration, resulting in spontaneous reoxidation of interchain disulfide bonds. Aside from the bispecific nature of the molecule, extensive characterization demonstrated that the IgG1 structural integrity was maintained, including function and stability. These results demonstrate the suitability of this bispecific IgG1 format for commercial-scale manufacturing using standard antibody manufacturing techniques.

  12. Advances in a framework to compare bio-dosimetry methods for triage in large-scale radiation events

    PubMed Central

    Flood, Ann Barry; Boyle, Holly K.; Du, Gaixin; Demidenko, Eugene; Nicolalde, Roberto J.; Williams, Benjamin B.; Swartz, Harold M.

    2014-01-01

Planning and preparation for a large-scale nuclear event would be advanced by assessing the applicability of potentially available bio-dosimetry methods. Using an updated comparative framework, the performance of six bio-dosimetry methods was compared for five different population sizes (100–1 000 000) and two rates for initiating processing of the marker (15 or 15 000 people per hour), with four additional time windows. These updated factors are extrinsic to the bio-dosimetry methods themselves but have direct effects on each method's ability to begin processing individuals and the size of the population that can be accommodated. The results indicate that increased population size, along with severely compromised infrastructure, increases the time needed to triage, which decreases the usefulness of many time-intensive dosimetry methods. This framework and model for evaluating bio-dosimetry provide important information for policy-makers and response planners to facilitate evaluation of each method and should advance coordination of these methods into effective triage plans. PMID:24729594
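
    To first order, the effect of population size and processing rate on triage time reduces to a simple division; the sketch below (ours, ignoring the framework's time windows and ramp-up factors) reproduces the scale of the problem for the abstract's two initiation rates.

        # First-order triage-time estimate: population / sustained processing rate.
        # Ignores the framework's time windows and ramp-up; sizes span the
        # 100 to 1,000,000 range named in the abstract.
        for rate in (15, 15_000):                      # people per hour
            for pop in (100, 10_000, 1_000_000):
                hours = pop / rate
                print(f"{pop:>9,} people at {rate:>6,}/h: {hours:12.1f} h ({hours/24:10.2f} days)")

    At 15 people per hour, even 10,000 people take roughly a month of continuous processing, which is why slow, time-intensive markers lose usefulness as the population grows.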

  13. New techniques in large scale metrology toolset data mining to accelerate integrated chip technology development and increase manufacturing efficiencies

    NASA Astrophysics Data System (ADS)

    Solecky, Eric; Rana, Narender; Minns, Allan; Gustafson, Carol; Lindo, Patrick; Cornell, Roger; Llanos, Paul

    2014-04-01

Today, metrology toolsets report more information than ever. This information applies not only to process performance but also to metrology toolset and recipe performance, through various diagnostic metrics. This is most evident on the Critical Dimension Scanning Electron Microscope (CD-SEM). Today's state-of-the-art CD-SEMs report over 250 individual data points and several images per measurement. It is typical for a state-of-the-art fab with numerous part numbers to generate at least 20 TB of information over the course of a year on the CD-SEM fleet alone, pushing metrology toolsets into the big-data regime. Most of this comes from improvements in throughput, increased sampling, and new data outputs relative to previous generations of tools. Oftentimes, these new data outputs are useful for helping to determine whether the process, metrology recipe, or tool is deviating from an ideal state. Many issues could be missed by looking only at a key process-control metric such as the bottom critical dimension (CD) or a small subset of the available information. By leveraging the entire data set, the mean time to detect issues and find their root cause can be significantly reduced. In this paper, a new data mining system that achieves this goal is presented. Examples are shown with a focus on the benefits realized using this new system, which helps speed up development cycles of learning and reduce manufacturing cycle time. The paper concludes by discussing future directions to make this capability more effective.
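
    A toy illustration of the fleet-wide idea (ours, not the paper's system): score each measurement's diagnostic metrics against the fleet baseline and flag those that drift, rather than watching only the bottom-CD value. Metric names and the threshold are invented.

        import statistics

        def flag_outliers(records, keys, z_max=3.0):
            """records: list of dicts of per-measurement diagnostic metrics."""
            stats = {k: (statistics.mean([r[k] for r in records]),
                         statistics.stdev([r[k] for r in records])) for k in keys}
            flagged = []
            for r in records:
                bad = [k for k in keys if stats[k][1] > 0
                       and abs(r[k] - stats[k][0]) / stats[k][1] > z_max]
                if bad:
                    flagged.append((r["id"], bad))
            return flagged

        # Hypothetical diagnostics; record 7 carries an injected focus drift.
        data = [{"id": i, "focus": 0.10, "beam": 8.0} for i in range(50)]
        data[7]["focus"] = 2.5
        print(flag_outliers(data, ["focus", "beam"]))   # -> [(7, ['focus'])]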

  14. Advanced Technology Composite Fuselage - Manufacturing

    NASA Technical Reports Server (NTRS)

    Wilden, K. S.; Harris, C. G.; Flynn, B. W.; Gessel, M. G.; Scholz, D. B.; Stawski, S.; Winston, V.

    1997-01-01

The goal of Boeing's Advanced Technology Composite Aircraft Structures (ATCAS) program is to develop the technology required for cost- and weight-efficient use of composite materials in transport fuselage structure. Carbon fiber reinforced epoxy was chosen for fuselage skins and stiffening elements, and for passenger and cargo floor structures. The automated fiber placement (AFP) process was selected for fabrication of stringer-stiffened and sandwich skin panels. Circumferential and window frames were braided and resin transfer molded (RTM'd). Pultrusion was selected for fabrication of floor beams and constant-section stiffening elements. Drape forming was chosen for stringers and other stiffening elements cocured to skin structures. Significant process development efforts included AFP, braiding, RTM, autoclave cure, and core blanket fabrication for both sandwich and stiffened-skin structure. Outer-mold-line and inner-mold-line tooling was developed for sandwich structures and stiffened-skin structure. The effect of design details, process control, and tool design on repeatable, dimensionally stable structure for low-cost barrel assembly was assessed. Subcomponent panels representative of crown, keel, and side quadrant panels were fabricated to assess scale-up effects and manufacturing anomalies for full-scale structures. A manufacturing database including time studies, part quality, and manufacturing plans was generated to support the development of designs and analytical models to assess cost, structural performance, and dimensional tolerance.

  15. Advancing Perspectives of Sustainability and Large-Scale Implementation of Design Teams in Ghana's Polytechnics: Issues and Opportunities

    ERIC Educational Resources Information Center

    Bakah, Marie Afua Baah; Voogt, Joke M.; Pieters, Jules M.

    2012-01-01

    Polytechnic staff perspectives are sought on the sustainability and large-scale implementation of design teams (DT), as a means for collaborative curriculum design and teacher professional development in Ghana's polytechnics, months after implementation. Data indicates that teachers still collaborate in DTs for curriculum design and professional…

  16. Advanced Manufacturing Office FY 2017 Budget At-A-Glance

    SciTech Connect

    2016-03-01

    The Advanced Manufacturing Office (AMO) brings together manufacturers, research institutions, suppliers, and universities to investigate manufacturing processes, information, and materials technologies critical to advance domestic manufacturing of clean energy products, and to support energy productivity across the entire manufacturing sector.

  17. Manufacturing development of DC-10 advanced rudder

    NASA Technical Reports Server (NTRS)

    Cominsky, A.

    1979-01-01

    The design, manufacture, and ground test activities during development of production methods for an advanced composite rudder for the DC-10 transport aircraft are described. The advanced composite aft rudder is satisfactory for airline service and a cost saving in a full production manufacturing mode is anticipated.

  18. Performance of powder-filled evacuated panel insulation in a manufactured home roof cavity: Tests in the Large Scale Climate Simulator

    SciTech Connect

    Petrie, T.W.; Kosny, J.; Childs, P.W.

    1996-03-01

A full-scale section of half the top of a single-wide manufactured home has been studied in the Large Scale Climate Simulator (LSCS) at the Oak Ridge National Laboratory. A small roof cavity with little room for insulation at the eaves is often the case with single-wide units and limits practical ways to improve thermal performance. The purpose of the current tests was to obtain steady-state performance data for the roof cavity of the manufactured home test section when the roof cavity was insulated with fiberglass batts, blown-in rock wool insulation or combinations of these insulations and powder-filled evacuated panel (PEP) insulation. Four insulation configurations were tested: (A) a configuration with two layers of nominal R_US-7 h·ft²·°F/Btu (R_SI-1.2 m²·K/W) fiberglass batts; (B) a layer of PEPs and one layer of the fiberglass batts; (C) four layers of the fiberglass batts; and (D) an average 4.1 in. (10.4 cm) thick layer of blown-in rock wool at an average density of 2.4 lb/ft³ (38 kg/m³). Effects of additional sheathing were determined for Configurations B and C. With Configuration D over the ceiling, two layers of expanded polystyrene (EPS) boards, each about the same thickness as the PEPs, were installed over the trusses instead of the roof. Aluminum foils facing the attic and over the top layer of EPS were added. The top layer of EPS was then replaced by PEPs.
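
    A note on the nominal ratings (ours, not from the report): R-values of stacked layers add in series, so configuration A is nominally R_US-14 and configuration C nominally R_US-28:

        $$R_{\text{total}}=\sum_i R_i,\qquad R_A = 7+7 = 14\ \mathrm{h\cdot ft^2\cdot{}^{\circ}F/Btu}\ (\approx 2.5\ \mathrm{m^2\cdot K/W}).$$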

  19. Emerging Global Trends in Advanced Manufacturing

    DTIC Science & Technology

    2012-03-01

“Analyses for ODNI,” for the Office of the Director of National Intelligence (ODNI). The views, opinions, and findings should not be construed as… changes in advanced manufacturing, the National Intelligence Manager for Science and Technology in the Office of the Director of National Intelligence asked the Institute for Defense Analyses to identify emerging global trends in advanced manufacturing and to propose scenarios for advanced…

  20. Isotope separation and advanced manufacturing technology

    NASA Astrophysics Data System (ADS)

    Carpenter, J.; Kan, T.

    This is the fourth issue of a semiannual report for the Isotope Separation and Advanced Materials Manufacturing (ISAM) Technology Program at Lawrence Livermore National Laboratory. Primary objectives include: (1) the Uranium Atomic Vapor Laser Isotope Separation (UAVLIS) process, which is being developed and prepared for deployment as an advanced uranium enrichment capability; (2) Advanced manufacturing technologies, which include industrial laser and E-beam material processing and new manufacturing technologies for uranium, plutonium, and other strategically important materials in support of DOE and other national applications. This report features progress in the ISAM Program from October 1993 through March 1994.

  1. EFG Technology and Diagnostic R&D for Large-Scale PV Manufacturing; Final Subcontract Report, 1 March 2002 - 31 March 2005

    SciTech Connect

    Kalejs, J.; Aurora, P.; Bathey, B.; Cao, J.; Doedderlein, J.; Gonsiorawski, R.; Heath, B.; Kubasti, J.; Mackintosh, B.; Ouellette, M.; Rosenblum, M.; Southimath, S.; Xavier, G.

    2005-10-01

    The objective of this subcontract was to carry out R&D to advance the technology, processes, and performance of RWE Schott-Solar's wafer, cell, and module manufacturing lines, and help configure these lines for scaling up of edge-defined, film-fed growth (EFG) ribbon technology to the 50-100 MW PV factory level. EFG ribbon manufacturing continued to expand during this subcontract period and now has reached a capacity of 40 MW. EFG wafer products were diversified over this time period. In addition to 10 cm x 10 cm and 10 cm x 15 cm wafer areas, which were the standard products at the beginning of this program, R&D has focused on new EFG technology to extend production to 12.5 cm x 12.5 cm EFG wafers. Cell and module production also has continued to expand in Billerica. A new 12-MW cell line was installed and brought on line in 2003. R&D on this subcontract improved cell yield and throughput, and optimized the cell performance, with special emphasis on work to speed up wafer transfer, hence enhancing throughput. Improvements of wafer transfer processes during this program have raised cell line capacity from 12 MW to over 18 MW. Optimization of module manufacturing processes was carried out on new equipment installed during a manufacturing upgrade in Billerica to a 12-MW capacity to improve yield and reliability of products.

  2. 2001 Industry Studies: Advanced Manufacturing

    DTIC Science & Technology

    2007-11-02

…oriented, and manufacturers are employing the Internet and associated information technologies to better integrate supply chains and form extended… producers, and suppliers. Focus will shift from within the enterprise to the entire market, with business-to-business (B2B) e-commerce becoming a… sufficiently disruptive (e.g., the Internet), alter the course of industries and the broader economy. The transformation of manufacturing has involved…

  3. Large-Scale Water-Vapor Two-Phase Flow Simulations in Advanced Light Water Reactor Cores

    SciTech Connect

    Hiroyuki, Yoshida; Kazuyuki, Takase; Hidesada, Tamai; Hajime, Akimoto; Yasuo, Ose

    2004-07-01

Fluid flow characteristics in a fuel bundle of a reduced-moderation light water reactor (RMWR) with a tight-lattice core were analyzed numerically using a newly developed two-phase flow analysis code under the full bundle size condition. Conventional analysis methods such as subchannel codes need constitutive equations based on experimental data. Because no experimental data exist on the thermal-hydraulics of the tight-lattice core, it is difficult to obtain high prediction accuracy in the thermal design of the RMWR with such methods. Large-scale direct numerical simulations on a supercomputer were therefore chosen. The axial velocity distribution in a fuel bundle changed sharply around a spacer. Momentum transfer of vapor in a tight-lattice core is linear along the flow direction. The interface characteristics between water and vapor were clarified quantitatively. (authors)

  4. Final Report on DOE Project entitled Dynamic Optimized Advanced Scheduling of Bandwidth Demands for Large-Scale Science Applications

    SciTech Connect

    Ramamurthy, Byravamurthy

    2014-05-05

In this project, we developed scheduling frameworks and algorithms for the dynamic bandwidth demands of large-scale science applications. Apart from theoretical approaches such as Integer Linear Programming, Tabu Search, and Genetic Algorithm heuristics, we utilized practical data from the ESnet OSCARS project (from our DOE lab partners) to conduct realistic simulations of our approaches. We disseminated our work through conference paper presentations, journal papers, and a book chapter. In particular, we addressed the problem of scheduling lightpaths over optical wavelength division multiplexed (WDM) networks and published several conference and journal papers on this topic. We also addressed the problem of joint allocation of computing, storage, and networking resources in Grid/Cloud networks and proposed energy-efficient mechanisms for operating optical WDM networks.
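
    As a concrete flavor of the lightpath-scheduling sub-problem, here is a minimal first-fit sketch of wavelength assignment under the wavelength-continuity constraint (ours; it stands in for none of the ILP, Tabu Search, or Genetic Algorithm approaches actually studied):

        def first_fit_wavelength(path_links, in_use, num_wavelengths=40):
            """Assign one wavelength free on every hop of the route, or None."""
            for w in range(num_wavelengths):
                if all(w not in in_use.get(link, set()) for link in path_links):
                    for link in path_links:             # reserve it end to end
                        in_use.setdefault(link, set()).add(w)
                    return w
            return None                                 # request is blocked

        # Hypothetical 3-hop route with some wavelengths already occupied.
        busy = {"A-B": {0, 1}, "B-C": {1}, "C-D": {0}}
        print(first_fit_wavelength(["A-B", "B-C", "C-D"], busy))  # -> 2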

  5. Advanced manufacturing: Technology and international competitiveness

    SciTech Connect

    Tesar, A.

    1995-02-01

    Dramatic changes in the competitiveness of German and Japanese manufacturing have been most evident since 1988. All three countries are now facing similar challenges, and these challenges are clearly observed in human capital issues. Our comparison of human capital issues in German, Japanese, and US manufacturing leads us to the following key judgments: Manufacturing workforces are undergoing significant changes due to advanced manufacturing technologies. As companies are forced to develop and apply these technologies, the constituency of the manufacturing workforce (especially educational requirements, contingent labor, job content, and continuing knowledge development) is being dramatically and irreversibly altered. The new workforce requirements which result due to advanced manufacturing require a higher level of worker sophistication and responsibility.

  6. The Advanced Manufacturing Laboratory at RPI.

    ERIC Educational Resources Information Center

    Desrochers, A.; DeRusso, P. M.

    1984-01-01

    An Advanced Manufacturing Laboratory (AML) has been established at Rensselaer Polytechnic Institute (RPI). AML courses, course objectives, instructional strategies, student experiences in design and manufacturing, and AML equipment are discussed. Overall recommendations based on student and instructor experiences are also presented. (JN)

  7. Advanced Manufacturing Training: Mobile Learning Labs

    ERIC Educational Resources Information Center

    Vukich, John C.; Ackerman, Amanda A.

    2010-01-01

    Across Colorado, manufacturing employers forecast an on-going need not only for workers who are interested in career opportunities but who are prepared to enter the advanced manufacturing industry with the necessary high-tech skills. Additionally, employers report concerns about replacing retiring workers that take with them decades of…

  8. Advancing Manufacturing Research Through Competitions

    SciTech Connect

    Balakirsky, Stephen; Madhavan, Raj

    2009-01-01

Competitions provide a technique for building interest and collaboration in targeted research areas. This paper presents a new competition that aims to increase collaboration among universities, automation end-users, and automation manufacturers through a virtual competition. The virtual nature of the competition allows for reduced infrastructure requirements while maintaining realism in both the robotic equipment deployed and the scenarios. Details of the virtual environment as well as the competition's objectives, rules, and scoring metrics are presented.

  9. Integration of Technology, Curriculum, and Professional Development for Advancing Middle School Mathematics: Three Large-Scale Studies

    ERIC Educational Resources Information Center

    Roschelle, Jeremy; Shechtman, Nicole; Tatar, Deborah; Hegedus, Stephen; Hopkins, Bill; Empson, Susan; Knudsen, Jennifer; Gallagher, Lawrence P.

    2010-01-01

    The authors present three studies (two randomized controlled experiments and one embedded quasi-experiment) designed to evaluate the impact of replacement units targeting student learning of advanced middle school mathematics. The studies evaluated the SimCalc approach, which integrates an interactive representational technology, paper curriculum,…

  10. Large-Scale PV Module Manufacturing Using Ultra-Thin Polycrystalline Silicon Solar Cells: Final Subcontract Report, 1 April 2002--28 February 2006

    SciTech Connect

    Wohlgemuth, J.; Narayanan, M.

    2006-07-01

The major objectives of this program were to continue advances of BP Solar polycrystalline silicon manufacturing technology. The Program included work in the following areas. (1) Efforts in the casting area to increase ingot size, improve ingot material quality, and improve handling of silicon feedstock as it is loaded into the casting stations. (2) Developing wire saws to slice 100-µm-thick silicon wafers on 290-µm centers. (3) Developing equipment for demounting and subsequent handling of very thin silicon wafers. (4) Developing cell processes using 100-µm-thick silicon wafers that produce encapsulated cells with efficiencies of at least 15.4% at an overall yield exceeding 95%. (5) Expanding existing in-line manufacturing data reporting systems to provide active process control. (6) Establishing a 50-MW (annual nominal capacity) green-field Mega-plant factory model template based on this new thin polycrystalline silicon technology. (7) Facilitating an increase in the silicon feedstock industry's production capacity for lower-cost solar-grade silicon feedstock.
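
    A quick implication of the slicing specification in item (2) (our arithmetic, not from the report): cutting 100-µm wafers on 290-µm centers yields

        $$\frac{10\,000\ \mu\text{m/cm}}{290\ \mu\text{m/wafer}} \approx 34\ \text{wafers per centimetre of ingot},$$

    with roughly 100/290 ≈ 34% of the sliced silicon ending up in wafers and the remainder lost as kerf.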

  11. Observations on computational methodologies for use in large-scale, gradient-based, multidisciplinary design incorporating advanced CFD codes

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Hou, G. J.-W.; Jones, H. E.; Taylor, A. C., III; Korivi, V. M.

    1992-01-01

How a combination of various computational methodologies could reduce the enormous computational costs envisioned in using advanced CFD codes in gradient-based multidisciplinary design (MdD) optimization procedures is briefly outlined. The implications of these MdD requirements for advanced CFD codes are somewhat different from those imposed by single-discipline design. A means for satisfying these MdD requirements for gradient information is presented which appears to permit: (1) some leeway in the CFD solution algorithms which can be used; (2) an extension to 3-D problems; and (3) straightforward use of other computational methodologies. Many of these observations have previously been discussed as possibilities for doing parts of the problem more efficiently; the contribution here is observing how they fit together in a mutually beneficial way.

  12. Advanced Manufacturing of Superconducting Magnets

    NASA Technical Reports Server (NTRS)

    Senti, Mark W.

    1996-01-01

The development of specialized materials, processes, and robotics technology allows for the rapid prototyping and manufacture of superconducting and normal magnets which can be used for magnetic suspension applications. Presented are highlights of the Direct Conductor Placement System (DCPS) which enables automatic design and assembly of 3-dimensional coils and conductor patterns using LTS and HTS conductors. The system enables engineers to place conductors in complex patterns with greater efficiency and accuracy, and without the need for hard tooling. It may also allow researchers to create new types of coils and patterns which were never practical before the development of DCPS. The DCPS includes a custom designed eight-axis robot, patented end effector, CoilCAD(trademark) design software, RoboWire(trademark) control software, and automatic inspection.

  13. Cruise noise of the 2/9 scale model of the Large-scale Advanced Propfan (LAP) propeller, SR-7A

    NASA Astrophysics Data System (ADS)

    Dittmar, James H.; Stang, David B.

    1987-10-01

Noise data on the Large-scale Advanced Propfan (LAP) propeller model SR-7A were taken in the NASA Lewis Research Center 8 x 6 foot Wind Tunnel. The maximum blade passing tone noise first rises with increasing helical tip Mach number to a peak level, then remains the same or decreases from its peak level when going to higher helical tip Mach numbers. This trend was observed for operation at both constant advance ratio and approximately equal thrust. This noise reduction, or leveling out, at high helical tip Mach numbers points to the use of higher propeller tip speeds as a possible method to limit airplane cabin noise while maintaining high flight speed and efficiency. Projections of the tunnel model data are made to the full scale LAP propeller mounted on the test bed aircraft and compared with predictions. The prediction method is found to be somewhat conservative in that it slightly overpredicts the projected model data at the peak.

  14. Cruise noise of the 2/9 scale model of the Large-scale Advanced Propfan (LAP) propeller, SR-7A

    NASA Technical Reports Server (NTRS)

    Dittmar, James H.; Stang, David B.

    1987-01-01

Noise data on the Large-scale Advanced Propfan (LAP) propeller model SR-7A were taken in the NASA Lewis Research Center 8 x 6 foot Wind Tunnel. The maximum blade passing tone noise first rises with increasing helical tip Mach number to a peak level, then remains the same or decreases from its peak level when going to higher helical tip Mach numbers. This trend was observed for operation at both constant advance ratio and approximately equal thrust. This noise reduction, or leveling out, at high helical tip Mach numbers points to the use of higher propeller tip speeds as a possible method to limit airplane cabin noise while maintaining high flight speed and efficiency. Projections of the tunnel model data are made to the full scale LAP propeller mounted on the test bed aircraft and compared with predictions. The prediction method is found to be somewhat conservative in that it slightly overpredicts the projected model data at the peak.

  15. Cruise noise of the 2/9th scale model of the Large-scale Advanced Propfan (LAP) propeller, SR-7A

    NASA Technical Reports Server (NTRS)

    Dittmar, James H.; Stang, David B.

    1987-01-01

Noise data on the Large-scale Advanced Propfan (LAP) propeller model SR-7A were taken in the NASA Lewis Research Center 8 x 6 foot Wind Tunnel. The maximum blade passing tone noise first rises with increasing helical tip Mach number to a peak level, then remains the same or decreases from its peak level when going to higher helical tip Mach numbers. This trend was observed for operation at both constant advance ratio and approximately equal thrust. This noise reduction, or leveling out, at high helical tip Mach numbers points to the use of higher propeller tip speeds as a possible method to limit airplane cabin noise while maintaining high flight speed and efficiency. Projections of the tunnel model data are made to the full scale LAP propeller mounted on the test bed aircraft and compared with predictions. The prediction method is found to be somewhat conservative in that it slightly overpredicts the projected model data at the peak.

  16. Cruise noise of the 2/9th scale model of the Large-scale Advanced Propfan (LAP) propeller, SR-7A

    NASA Astrophysics Data System (ADS)

    Dittmar, James H.; Stang, David B.

    1987-09-01

Noise data on the Large-scale Advanced Propfan (LAP) propeller model SR-7A were taken in the NASA Lewis Research Center 8 x 6 foot Wind Tunnel. The maximum blade passing tone noise first rises with increasing helical tip Mach number to a peak level, then remains the same or decreases from its peak level when going to higher helical tip Mach numbers. This trend was observed for operation at both constant advance ratio and approximately equal thrust. This noise reduction, or leveling out, at high helical tip Mach numbers points to the use of higher propeller tip speeds as a possible method to limit airplane cabin noise while maintaining high flight speed and efficiency. Projections of the tunnel model data are made to the full scale LAP propeller mounted on the test bed aircraft and compared with predictions. The prediction method is found to be somewhat conservative in that it slightly overpredicts the projected model data at the peak.

  17. Stitching Techniques Advance Optics Manufacturing

    NASA Technical Reports Server (NTRS)

    2010-01-01

Because NASA depends on the fabrication and testing of large, high-quality aspheric (nonspherical) optics for applications like the James Webb Space Telescope, it sought an improved method for measuring large aspheres. Through Small Business Innovation Research (SBIR) awards from Goddard Space Flight Center, QED Technologies, of Rochester, New York, upgraded and enhanced its stitching technology for aspheres. QED developed the SSI-A, which earned the company an R&D 100 award, and also developed a breakthrough machine tool called the aspheric stitching interferometer. The equipment is applied to advanced optics in telescopes, microscopes, cameras, medical scopes, binoculars, and photolithography.

  18. ‘Oorja’ in India: Assessing a large-scale commercial distribution of advanced biomass stoves to households

    PubMed Central

    Thurber, Mark C.; Phadke, Himani; Nagavarapu, Sriniketh; Shrimali, Gireesh; Zerriffi, Hisham

    2015-01-01

Replacing traditional stoves with advanced alternatives that burn more cleanly has the potential to ameliorate major health problems associated with indoor air pollution in developing countries. With a few exceptions, large government and charitable programs to distribute advanced stoves have not had the desired impact. Commercially-based distributions that seek cost recovery and even profits might plausibly do better, both because they encourage distributors to supply and promote products that people want and because they are based around properly-incentivized supply chains that could be more scalable, sustainable, and replicable. The sale in India of over 400,000 “Oorja” stoves to households from 2006 onwards represents the largest commercially-based distribution of a gasification-type advanced biomass stove. BP's Emerging Consumer Markets (ECM) division and, later, its successor company First Energy sold this stove and the pelletized biomass fuel on which it operates. We assess the success of this effort and the role its commercial aspect played in outcomes using a survey of 998 households in areas of Maharashtra and Karnataka where the stove was sold as well as detailed interviews with BP and First Energy staff. Statistical models based on this data indicate that Oorja purchase rates were significantly influenced by the intensity of Oorja marketing in a region as well as by pre-existing stove mix among households. The highest rate of adoption came from LPG-using households for which Oorja's pelletized biomass fuel reduced costs. Smoke- and health-related messages from Oorja marketing did not significantly influence the purchase decision, although they did appear to affect household perceptions about smoke. By the time of our survey, only 9% of households that purchased Oorja were still using the stove, the result in large part of difficulties First Energy encountered in developing a viable supply chain around low-cost procurement of “agricultural waste” to

  19. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  20. 2D materials advances: from large scale synthesis and controlled heterostructures to improved characterization techniques, defects and applications

    NASA Astrophysics Data System (ADS)

    Lin, Zhong; McCreary, Amber; Briggs, Natalie; Subramanian, Shruti; Zhang, Kehao; Sun, Yifan; Li, Xufan; Borys, Nicholas J.; Yuan, Hongtao; Fullerton-Shirey, Susan K.; Chernikov, Alexey; Zhao, Hui; McDonnell, Stephen; Lindenberg, Aaron M.; Xiao, Kai; LeRoy, Brian J.; Drndić, Marija; Hwang, James C. M.; Park, Jiwoong; Chhowalla, Manish; Schaak, Raymond E.; Javey, Ali; Hersam, Mark C.; Robinson, Joshua; Terrones, Mauricio

    2016-12-01

    The rise of two-dimensional (2D) materials research took place following the isolation of graphene in 2004. These new 2D materials include transition metal dichalcogenides, mono-elemental 2D sheets, and several carbide- and nitride-based materials. The number of publications related to these emerging materials has been drastically increasing over the last five years. Thus, through this comprehensive review, we aim to discuss the most recent groundbreaking discoveries as well as emerging opportunities and remaining challenges. This review starts out by delving into the improved methods of producing these new 2D materials via controlled exfoliation, metal organic chemical vapor deposition, and wet chemical means. We look into recent studies of doping as well as the optical properties of 2D materials and their heterostructures. Recent advances towards applications of these materials in 2D electronics are also reviewed, and include the tunnel MOSFET and ways to reduce the contact resistance for fabricating high-quality devices. Finally, several unique and innovative applications recently explored are discussed as well as perspectives of this exciting and fast moving field.

  1. Advances in recombinant antibody manufacturing.

    PubMed

    Kunert, Renate; Reinhart, David

    2016-04-01

    Since the first use of Chinese hamster ovary (CHO) cells for recombinant protein expression, production processes have steadily improved through numerous advances. In this review, we have highlighted several key milestones that have contributed to the success of CHO cells from the beginning of their use for monoclonal antibody (mAb) expression until today. The main factors influencing the yield of a production process are the time to accumulate a desired amount of biomass, the process duration, and the specific productivity. By comparing maximum cell densities and specific growth rates of various expression systems, we have emphasized the limiting parameters of different cellular systems and comprehensively described scientific approaches and techniques to improve host cell lines. Besides the quantitative evaluation of current systems, the quality-determining properties of a host cell line, namely post-translational modifications, were analyzed and compared to naturally occurring polyclonal immunoglobulin fractions from human plasma. In summary, numerous different expression systems for mAbs are available and also under scientific investigation. However, CHO cells are the most frequently investigated cell lines and remain the workhorse for mAb production until today.
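
    The three yield factors named above combine in the standard titer relation (a textbook expression, not quoted from this review):

        $$\text{titer} \;=\; q_p \int_0^{t_f} X_v(t)\,dt,$$

    where $q_p$ is the specific productivity, $X_v(t)$ the viable cell density (biomass), and $t_f$ the process duration; increasing any of the three raises the integral and hence the amount of product.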

  2. National Center for Advanced Manufacturing Overview

    NASA Technical Reports Server (NTRS)

    Vickers, John H.

    2000-01-01

    This paper presents a general overview of the National Center for Advanced Manufacturing, with an emphasis on Aerospace Materials, Processes and Environmental Technology. The topics include: 1) Background; 2) Mission; 3) Technology Development Approach; 4) Space Transportation Significance; 5) Partnering; 6) NCAM MAF Project; 7) NASA & Calhoun Community College; 8) Educational Development; and 9) Intelligent Synthesis Environment. This paper is presented in viewgraph form.

  3. Development of Advanced Ceramic Manufacturing Technology

    SciTech Connect

    Pujari, V.K.

    2001-04-05

    Advanced structural ceramics are enabling materials for new transportation engine systems that have the potential for significantly reducing energy consumption and pollution in automobiles and heavy vehicles. Ceramic component reliability and performance have been demonstrated in previous U.S. DOE initiatives, but high manufacturing cost was recognized as a major barrier to commercialization. Norton Advanced Ceramics (NAC), a division of Saint-Gobain Industrial Ceramics, Inc. (SGIC), was selected to perform a major Advanced Ceramics Manufacturing Technology (ACMT) Program. The overall objectives of NAC's program were to design, develop, and demonstrate advanced manufacturing technology for the production of ceramic exhaust valves for diesel engines. The specific objectives were (1) to reduce the manufacturing cost by an order of magnitude, (2) to develop and demonstrate process capability and reproducibility, and (3) to validate ceramic valve performance, durability, and reliability. The program was divided into four major tasks: Component Design and Specification, Component Manufacturing Technology Development, Inspection and Testing, and Process Demonstration. A high-power diesel engine valve for the DDC Series 149 engine was chosen as the demonstration part for this program. This was determined to be an ideal component type to demonstrate cost-effective process enhancements, the beneficial impact of advanced ceramics on transportation systems, and near-term commercialization potential. The baseline valve material was NAC's NT451 SiAION. It was replaced, later in the program, by an alternate silicon nitride composition (NT551), which utilized a lower cost raw material and a simplified powder-processing approach. The material specifications were defined based on DDC's engine requirements, and the initial and final component design tasks were completed.

  4. Preliminary measurement of the noise from the 2/9 scale model of the Large-scale Advanced Propfan (LAP) propeller, SR-7A

    NASA Technical Reports Server (NTRS)

    Dittmar, J. H.

    1985-01-01

Noise data on the Large-scale Advanced Propfan (LAP) propeller model SR-7A were taken in the NASA Lewis 8- by 6-Foot Wind Tunnel. The maximum blade passing tone decreases from the peak level when going to higher helical tip Mach numbers. This noise reduction points to the use of higher propeller speeds as a possible method to reduce airplane cabin noise while maintaining high flight speed and efficiency. Comparison of the SR-7A blade passing noise with the noise of the similarly designed SR-3 propeller shows good agreement as expected. The SR-7A propeller is slightly noisier than the SR-3 model in the plane of rotation at the cruise condition. Projections of the tunnel model data are made to the full-scale LAP propeller mounted on the test bed aircraft and compared with design predictions. The prediction method is conservative in the sense that it overpredicts the projected model data.

  5. Preliminary measurement of the noise from the 2/9 scale model of the Large-scale Advanced Propfan (LAP) propeller, SR-7A

    NASA Astrophysics Data System (ADS)

    Dittmar, J. H.

    1985-09-01

Noise data on the Large-scale Advanced Propfan (LAP) propeller model SR-7A were taken in the NASA Lewis 8- by 6-Foot Wind Tunnel. The maximum blade passing tone decreases from the peak level when going to higher helical tip Mach numbers. This noise reduction points to the use of higher propeller speeds as a possible method to reduce airplane cabin noise while maintaining high flight speed and efficiency. Comparison of the SR-7A blade passing noise with the noise of the similarly designed SR-3 propeller shows good agreement as expected. The SR-7A propeller is slightly noisier than the SR-3 model in the plane of rotation at the cruise condition. Projections of the tunnel model data are made to the full-scale LAP propeller mounted on the test bed aircraft and compared with design predictions. The prediction method is conservative in the sense that it overpredicts the projected model data.

  6. Advanced Engineering Environments: Implications for Aerospace Manufacturing

    NASA Technical Reports Server (NTRS)

    Thomas, D.

    2001-01-01

    There are significant challenges facing today's aerospace industry. Global competition, more complex products, geographically-distributed design teams, demands for lower cost, higher reliability and safer vehicles, and the need to incorporate the latest technologies quicker all face the developer of aerospace systems. New information technologies offer promising opportunities to develop advanced engineering environments (AEEs) to meet these challenges. Significant advances in the state-of-the-art of aerospace engineering practice are envisioned in the areas of engineering design and analytical tools, cost and risk tools, collaborative engineering, and high-fidelity simulations early in the development cycle. These advances will enable modeling and simulation of manufacturing methods, which will in turn allow manufacturing considerations to be included much earlier in the system development cycle. Significant cost savings, increased quality, and decreased manufacturing cycle time are expected to result. This paper will give an overview of the NASA's Intelligent Synthesis Environment, the agency initiative to develop an AEE, with a focus on the anticipated benefits in aerospace manufacturing.

  7. Recent manufacturing advances for spiral bevel gears

    NASA Technical Reports Server (NTRS)

    Handschuh, Robert F.; Bill, Robert C.

    1991-01-01

    The U.S. Army Aviation Systems Command (AVSCOM), through the Propulsion Directorate at NASA Lewis Research Center, has recently sponsored projects to advance the manufacturing process for spiral bevel gears. This type of gear is a critical component in rotary-wing propulsion systems. Two successfully completed contracted projects are described. The first project addresses the automated inspection of spiral bevel gears through the use of coordinate measuring machines. The second project entails the computer-numerical-control (CNC) conversion of a spiral bevel gear grinding machine that is used for all aerospace spiral bevel gears. The results of these projects are described with regard to the savings effected in manufacturing time.

  8. Recent manufacturing advances for spiral bevel gears

    NASA Technical Reports Server (NTRS)

    Handschuh, Robert F.; Bill, Robert C.

    1991-01-01

    The U.S. Army Aviation Systems Command (AVSCOM), through the Propulsion Directorate at NASA LRC, has recently sponsored projects to advance the manufacturing process for spiral bevel gears. This type of gear is a critical component in rotary-wing propulsion systems. Two successfully completed contracted projects are described. The first project addresses the automated inspection of spiral bevel gears through the use of coordinate measuring machines. The second project entails the computer-numerical-control (CNC) conversion of a spiral bevel gear grinding machine that is used for all aerospace spiral bevel gears. The results of these projects are described with regard to the savings effected in manufacturing time.

  9. Large-Scale Disasters

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  10. NASA's National Center for Advanced Manufacturing

    NASA Technical Reports Server (NTRS)

    Vickers, John

    2003-01-01

NASA has designated the Principal Center Assignment to the Marshall Space Flight Center (MSFC) for implementation of the National Center for Advanced Manufacturing (NCAM). NCAM is NASA's leading resource for the aerospace manufacturing research, development, and innovation needs that are critical to the goals of the Agency. Through this initiative NCAM's people work together with government, industry, and academia to ensure the technology base and national infrastructure are available to develop innovative manufacturing technologies with broad application to NASA Enterprise programs, and U.S. industry. Educational enhancements are ever-present within the NCAM focus to promote research, to inspire participation and to support education and training in manufacturing. Many important accomplishments took place during 2002. Through NCAM, NASA was among five federal agencies involved in manufacturing research and development (R&D) to launch a major effort to exchange information and cooperate directly to enhance the payoffs from federal investments. The Government Agencies Technology Exchange in Manufacturing (GATE-M) is the only active effort to specifically and comprehensively address manufacturing R&D across the federal government. Participating agencies include the departments of Commerce (represented by the National Institute of Standards and Technology), Defense, and Energy, as well as the National Science Foundation and NASA. MSFC's ongoing partnership with the State of Louisiana, the University of New Orleans, and Lockheed Martin Corporation at the Michoud Assembly Facility (MAF) progressed significantly. Major capital investments were initiated for world-class equipment additions including a universal friction stir welding system, composite fiber placement machine, five-axis machining center, and ten-axis laser ultrasonic nondestructive test system. The NCAM consortium of five universities led by University of New Orleans with Mississippi State University

  11. Advanced Algorithms and High-Performance Testbed for Large-Scale Site Characterization and Subsurface Target Detecting Using Airborne Ground Penetrating SAR

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Collier, James B.; Citak, Ari

    1997-01-01

    A team of the US Army Corps of Engineers (Omaha District and Engineering and Support Center, Huntsville), Jet Propulsion Laboratory (JPL), Stanford Research Institute (SRI), and Montgomery Watson is currently in the process of planning and conducting the largest-ever survey at the Former Buckley Field (60,000 acres), in Colorado, using SRI's airborne, ground penetrating, Synthetic Aperture Radar (SAR). The purpose of this survey is the detection of surface and subsurface Unexploded Ordnance (UXO) and, in a broader sense, site characterization for identification of contaminated as well as clear areas. In preparation for such a large-scale survey, JPL has been developing advanced algorithms and a high-performance testbed for processing the massive amounts of SAR data expected from this site. Two key requirements of this project are the accuracy (in terms of UXO detection) and speed of SAR data processing. The first key feature of this testbed is a large degree of automation, minimizing the need for human perception in the processing, to achieve an acceptable processing rate of several hundred acres per day. For accurate UXO detection, novel algorithms have been developed and implemented. These algorithms analyze dual-polarized (HH and VV) SAR data. They are based on the correlation of HH and VV SAR data and involve a rather large set of parameters for accurate detection of UXO. For each specific site, this set of parameters can be optimized by using ground truth data (i.e., known surface and subsurface UXOs). In this paper, we discuss these algorithms and their successful application for detection of surface and subsurface anti-tank mines using a data set from Yuma Proving Ground, AZ, acquired by the SRI SAR.
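
    As an illustration of the core correlation idea (not the authors' full algorithm), the Python sketch below computes a sliding-window normalized correlation between co-registered HH and VV magnitude images and thresholds it; the window size and threshold are invented placeholders for the much larger parameter set that the actual algorithms optimize against ground truth.

        import numpy as np

        def uxo_detection_map(hh, vv, window=9, threshold=0.7):
            """Sliding-window normalized correlation of HH and VV SAR magnitudes.

            Pixels whose local HH/VV correlation exceeds `threshold` are flagged
            as candidate UXO returns. `window` and `threshold` are illustrative
            values that would be tuned against site ground truth.
            """
            half = window // 2
            rows, cols = hh.shape
            detections = np.zeros((rows, cols), dtype=bool)
            for r in range(half, rows - half):
                for c in range(half, cols - half):
                    a = hh[r - half:r + half + 1, c - half:c + half + 1].ravel()
                    b = vv[r - half:r + half + 1, c - half:c + half + 1].ravel()
                    a = a - a.mean()
                    b = b - b.mean()
                    denom = np.sqrt((a * a).sum() * (b * b).sum())
                    if denom > 0 and (a * b).sum() / denom > threshold:
                        detections[r, c] = True
            return detections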

  12. Large Scale Metal Additive Techniques Review

    SciTech Connect

    Nycz, Andrzej; Adediran, Adeola I; Noakes, Mark W; Love, Lonnie J

    2016-01-01

    In recent years additive manufacturing has made long strides toward becoming a mainstream production technology. Particularly strong progress has been made in large-scale polymer deposition. However, large-scale metal additive manufacturing has not yet reached parity with its polymer counterpart. This paper is a review of metal additive techniques in the context of building large structures. Current commercial devices are capable of printing metal parts on the order of several cubic feet, compared to hundreds of cubic feet on the polymer side. In order to follow the polymer progress path, several factors are considered: potential to scale, economy, environmental friendliness, material properties, feedstock availability, robustness of the process, quality and accuracy, potential for defects, and post-processing, as well as potential applications. This paper covers the current state of the art of large-scale metal additive technology, with a focus on expanding its geometric limits.

  13. Large scale traffic simulations

    SciTech Connect

    Nagel, K.; Barrett, C.L. |; Rickert, M. |

    1997-04-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual persons' behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed of much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches depend both on the specific questions and on the prospective user community. The approaches range from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for statistical physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.
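
    As a concrete illustration of the vehicle-based, single-bit style of microsimulation surveyed here, the sketch below implements a minimal Nagel-Schreckenberg-style cellular automaton on a ring road; the road length, vehicle count, maximum speed, and slowdown probability are arbitrary demonstration values, not parameters from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def step(road, v_max=5, p_slow=0.3):
            """One parallel update of a Nagel-Schreckenberg-style automaton.

            `road` holds -1 for an empty cell, otherwise the vehicle's speed.
            """
            n = road.size
            positions = np.flatnonzero(road >= 0)
            new_road = np.full(n, -1, dtype=int)
            for i, pos in enumerate(positions):
                v = road[pos]
                nxt = positions[(i + 1) % len(positions)]
                gap = (nxt - pos - 1) % n            # empty cells ahead (periodic road)
                v = min(v + 1, v_max, gap)           # accelerate, but never collide
                if v > 0 and rng.random() < p_slow:  # random slowdown
                    v -= 1
                new_road[(pos + v) % n] = v          # move the vehicle
            return new_road

        # Example: a 100-cell ring with 20 vehicles, advanced 50 time steps.
        road = np.full(100, -1, dtype=int)
        road[rng.choice(100, 20, replace=False)] = 0
        for _ in range(50):
            road = step(road)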

  14. Large scale tracking algorithms

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
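
    One standard way to keep data association tractable as the number of targets grows, short of full multi-hypothesis enumeration, is gating combined with global-nearest-neighbor assignment. The single-frame sketch below is illustrative only; the gate radius, cost convention, and 2-D point targets are assumptions, not details from this report.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def associate(tracks, detections, gate=5.0):
            """Gated global-nearest-neighbor association for one frame.

            `tracks` is an (N, 2) array of predicted positions and `detections`
            an (M, 2) array of measurements. Pairs farther apart than `gate`
            are forbidden, which bounds the assignment problem instead of
            letting the hypothesis count explode combinatorially.
            """
            cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
            cost[cost > gate] = 1e6  # effectively forbid out-of-gate pairings
            rows, cols = linear_sum_assignment(cost)
            return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < 1e6]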

  15. Precision Casting via Advanced Simulation and Manufacturing

    NASA Technical Reports Server (NTRS)

    1997-01-01

    A two-year program was conducted to develop and commercially implement selected casting manufacturing technologies to enable significant reductions in the costs of castings, increase the complexity and dimensional accuracy of castings, and reduce the development times for delivery of high quality castings. The industry-led R&D project was cost shared with NASA's Aerospace Industry Technology Program (AITP). The Rocketdyne Division of Boeing North American, Inc. served as the team lead, with participation from Lockheed Martin, Ford Motor Company, Howmet Corporation, PCC Airfoils, General Electric, UES, Inc., University of Alabama, Auburn University, Robinson, Inc., Aracor, and NASA-LeRC. The technical effort was organized into four distinct tasks; the accomplishments are reported herein. Task 1.0 developed advanced simulation technology for core molding. Ford headed up this task, and a specialized core machine was designed and built under the program. Task 2.0 focused on intelligent process control for precision core molding. Howmet led this effort. The primary focus of these experimental efforts was to characterize the process parameters that have a strong impact on dimensional control issues of injection molded cores during their fabrication. Task 3.0 developed and applied rapid prototyping to produce near-net-shape castings. Rocketdyne was responsible for this task. CAD files were generated using reverse engineering, rapid prototype patterns were fabricated using SLS and SLA, and castings were produced and evaluated. Task 4.0 was aimed at technology transfer. Rocketdyne coordinated this task. Casting-related technology, explored and evaluated in the first three tasks of this program, was implemented into manufacturing processes.

  16. The Role of Advanced Manufacturing in Our Journey to Mars

    NASA Technical Reports Server (NTRS)

    Keys, Andrew S.

    2017-01-01

    The National Additive Manufacturing Innovation Institute was launched in August 2012 as a result of President Obama's call for a whole-of-government advanced manufacturing effort. Mission: To accelerate the adoption of additive manufacturing technologies to increase domestic manufacturing competitiveness. Funding: Five federal agencies - the Departments of Defense, Energy, and Commerce, the National Science Foundation, and NASA - jointly committed to invest $45 million.

  17. Measurement of Unsteady Blade Surface Pressure on a Single Rotation Large Scale Advanced Prop-fan with Angular and Wake Inflow at Mach Numbers from 0.02 to 0.70

    NASA Technical Reports Server (NTRS)

    Bushnell, P.; Gruber, M.; Parzych, D.

    1988-01-01

    Unsteady blade surface pressure data are provided for the Large-Scale Advanced Prop-Fan (LAP) blade operating with angular inflow, wake inflow, and uniform flow over a range of inflow Mach numbers from 0.02 to 0.70. The data are presented as Fourier coefficients for the first 35 harmonics of shaft rotational frequency. Also presented is a brief discussion of the unsteady blade response observed at takeoff and cruise conditions with angular and wake inflow.
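
    Harmonic data stored this way can be turned back into a pressure-versus-rotor-angle history by summing the Fourier series, as in the sketch below; the coefficient names and the cosine/sine convention are assumptions about the tabulation, and a report user would substitute the published coefficients.

        import numpy as np

        def pressure_history(a0, a, b, n_points=360):
            """Reconstruct one revolution of unsteady pressure from harmonics.

            `a0` is the steady (mean) component; `a[k]` and `b[k]` are the
            cosine and sine coefficients of harmonic k + 1 of shaft rotational
            frequency, for k = 0..34.
            """
            theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
            p = np.full(n_points, a0, dtype=float)
            for k in range(len(a)):
                n = k + 1
                p += a[k] * np.cos(n * theta) + b[k] * np.sin(n * theta)
            return theta, p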

  18. Organizational Considerations for Advanced Manufacturing Technology

    ERIC Educational Resources Information Center

    DeRuntz, Bruce D.; Turner, Roger M.

    2003-01-01

    In the last several decades, the United States has experienced a decline in productivity, while the world has seen a maturation of the global marketplace. Nations have moved manufacturing strategy and process technology issues to the top of management priority lists. The issues surrounding manufacturing technologies and their implementations have…

  19. In flight measurement of steady and unsteady blade surface pressure of a single rotation large scale advanced prop-fan installed on the PTA aircraft

    NASA Technical Reports Server (NTRS)

    Parzych, D.; Boyd, L.; Meissner, W.; Wyrostek, A.

    1991-01-01

    An experiment was performed by Hamilton Standard, Division of United Technologies Corporation, under contract to LeRC, to measure the blade surface pressure of a large-scale, 8-blade model Prop-Fan in flight. The test bed was the Gulfstream 2 Prop-Fan Test Assessment (PTA) aircraft. The objective of the test was to measure the steady and periodic blade surface pressure resulting from three different Prop-Fan air inflow angles at various takeoff and cruise conditions. The inflow angles were obtained by varying the nacelle tilt angle, which ranged from -3 to +2 degrees. A range of power loadings, tip speeds, and altitudes was tested at each nacelle tilt angle over the flight Mach number range of 0.30 to 0.80. Unsteady blade pressure data, tabulated as Fourier coefficients for the first 35 harmonics of shaft rotational frequency, and the steady (non-varying) pressure component are presented.

  20. Advances in gamma titanium aluminides and their manufacturing techniques

    NASA Astrophysics Data System (ADS)

    Kothari, Kunal; Radhakrishnan, Ramachandran; Wereley, Norman M.

    2012-11-01

    Gamma titanium aluminides display attractive properties for high temperature applications. For over a decade in the 1990s, these attractive properties were outweighed by difficulties encountered in processing and machining at room temperature. But advances in manufacturing technologies, a deeper understanding of titanium aluminide microstructure and deformation mechanisms, and advances in micro-alloying have led to the production of gamma titanium aluminide sheets. An in-depth review of key advances in gamma titanium aluminides is presented, including microstructure, deformation mechanisms, and alloy development. Traditional manufacturing techniques such as ingot metallurgy and investment casting are reviewed, and advances via powder-metallurgy-based manufacturing techniques are discussed. Finally, manufacturing challenges facing gamma titanium aluminides, as well as avenues to overcome them, are discussed.

  1. Evaluation of advanced polymers for additive manufacturing

    SciTech Connect

    Rios, Orlando; Morrison, Crystal

    2015-09-01

    The goal of this Manufacturing Demonstration Facility (MDF) technical collaboration project between Oak Ridge National Laboratory (ORNL) and PPG Industries, Inc. was to evaluate the feasibility of using conventional coatings chemistry and technology to build up material layer-by-layer. The PPG-ORNL study successfully demonstrated that polymeric coatings formulations may overcome many limitations of common thermoplastics used in additive manufacturing (AM), allow lightweight nozzle design for material deposition and increase build rate. The materials effort focused on layer-by-layer deposition of coatings with each layer fusing together. The combination of materials and deposition results in an additively manufactured build that has sufficient mechanical properties to bear the load of additional layers, yet is capable of bonding across the z-layers to improve build direction strength. The formulation properties were tuned to enable a novel, high-throughput deposition method that is highly scalable, compatible with high loading of reinforcing fillers, and is inherently low-cost.

  2. Advanced Blade Manufacturing Project - Final Report

    SciTech Connect

    POORE, ROBERT Z.

    1999-08-01

    The original scope of the project was to research improvements to the processes and materials used in the manufacture of wood-epoxy blades, conduct tests to qualify any new material or processes for use in blade design and subsequently build and test six blades using the improved processes and materials. In particular, ABM was interested in reducing blade cost and improving quality. In addition, ABM needed to find a replacement material for the mature Douglas fir used in the manufacturing process. The use of mature Douglas fir is commercially unacceptable because of its limited supply and environmental concerns associated with the use of mature timber. Unfortunately, the bankruptcy of FloWind in June 1997 and a dramatic reduction in AWT sales made it impossible for ABM to complete the full scope of work. However, sufficient research and testing were completed to identify several promising changes in the blade manufacturing process and develop a preliminary design incorporating these changes.

  3. Advanced manufacturing technologies on color plasma displays

    NASA Astrophysics Data System (ADS)

    Betsui, Keiichi

    2000-06-01

    Mass production of color plasma displays started in 1996. However, because the panels are still expensive, PDPs are not yet in widespread use at home. New, low-cost manufacturing technologies must be developed to reduce the price of the panels. This paper describes some of the features of new fabrication technologies for PDPs.

  4. Energy intensity, electricity consumption, and advanced manufacturing-technology usage

    SciTech Connect

    Doms, M.E.; Dunne, T.

    1995-07-01

    This article reports on the relationship between the usage of advanced manufacturing technologies (AMTs) and energy consumption patterns in manufacturing plants. Using data from the Survey of Manufacturing Technology and the 1987 Census of Manufactures, we model the energy intensity and the electricity intensity of plants as functions of AMT usage and plant age. The main findings are that plants that utilize AMTs are less energy-intensive than plants not using AMTs, but consume proportionately more electricity as a fuel source. Additionally, older plants are generally more energy-intensive and rely on fossil fuels to a greater extent than younger plants. 25 refs., 3 tabs.
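
    A minimal sketch of the kind of specification described, assuming a log-linear model with an AMT-usage indicator and plant age as regressors; the variable names, functional form, and data are illustrative assumptions, not the authors' published model.

        import numpy as np

        def fit_energy_intensity(energy_intensity, uses_amt, plant_age):
            """Ordinary least squares fit of log energy intensity on an
            AMT-usage dummy and plant age (all inputs are 1-D arrays)."""
            X = np.column_stack([
                np.ones(len(plant_age)),      # intercept
                np.asarray(uses_amt, float),  # 1 if the plant uses AMTs
                np.asarray(plant_age, float),
            ])
            y = np.log(np.asarray(energy_intensity, float))
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return beta  # [intercept, AMT effect, age effect]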

  5. Advanced materials manufacturing for solar energy

    NASA Astrophysics Data System (ADS)

    van Mierlo, Frank

    2012-02-01

    The US has a robust technical roadmap to get to a $1/W total installed cost, with several potential winners in the race. We dominate in the new-technology arena, and there is a good chance that tomorrow's winning technology will come from the current crop of contenders. One potential breakthrough is Direct Wafer(TM), a new manufacturing technique that makes silicon wafers at a fraction of the traditional cost. Current wafer manufacturing is a multi-step, energy- and capital-intensive process that wastes half of the valuable silicon feedstock. 1366's Direct Wafer technology forms a standard, 156 mm multi-crystalline wafer directly from molten silicon in a semi-continuous, efficient, high-throughput process that eliminates silicon waste. Direct Wafer cuts the amount of consumables by a factor of four and requires only half the capital per gigawatt of production capacity, thus enabling solar to compete successfully with coal-generated electricity.

  6. Large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large-scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single rather than trade-off design methodology, and the incompatibility of large-scale optimization with single-program, single-computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop; full analysis is then performed only periodically. Problem-dependent software, which embodies the definitions of the design variables, objective function, and design constraints, can be separated from the generic code using a systems programming technique. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.
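
    The idea of keeping the full analysis outside the optimization loop can be sketched as sequential approximate optimization: each cycle pays for the expensive analysis a handful of times, builds a cheap linear approximation, and lets the optimizer exploit it within a move limit. Everything below (the toy objective standing in for a finite-element run, the step rule, and the move limit) is illustrative, not the paper's formulation.

        import numpy as np

        def full_analysis(x):
            """Stand-in for an expensive structural analysis (illustrative)."""
            return np.sum((x - 1.0) ** 2) + 0.1 * np.sum(np.sin(5.0 * x))

        def fd_gradient(f, x, h=1e-4):
            """Central-difference gradient; each call costs full analyses."""
            g = np.zeros_like(x)
            for i in range(x.size):
                e = np.zeros_like(x)
                e[i] = h
                g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
            return g

        def sequential_approximate_optimization(x0, cycles=10, move_limit=0.2):
            """Per cycle: one batch of full analyses, then a linear subproblem.

            Minimizing a linear approximation inside a box of half-width
            `move_limit` moves each variable to a corner: x - move_limit*sign(g).
            """
            x = np.asarray(x0, dtype=float)
            for _ in range(cycles):
                g = fd_gradient(full_analysis, x)   # expensive, done periodically
                x = x - move_limit * np.sign(g)     # cheap approximate subproblem
            return x, full_analysis(x)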

  7. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2008-09-30

    aerosol species up to six days in advance anywhere on the globe. NAAPS and COAMPS are particularly useful for forecasts of dust storms in areas...impact cloud processes globally. With increasing dust storms due to climate change and land use changes in desert regions, the impact of the...bacteria in large-scale dust storms is expected to significantly impact warm ice cloud formation, human health, and ecosystems globally. In Niemi et al

  8. Galaxy clustering on large scales.

    PubMed Central

    Efstathiou, G

    1993-01-01

    I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real-space and redshift-space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳ 20 h⁻¹ Mpc, where the Hubble constant H₀ = 100h km s⁻¹ Mpc⁻¹; 1 pc = 3.09 × 10¹⁶ m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe. PMID:11607400
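
    To make the estimation of a two-point correlation function concrete, here is a brute-force Python sketch using the Landy-Szalay estimator xi = (DD - 2DR + RR)/RR on a small mock catalog; the box geometry, bin edges, and random-point density are invented for illustration, and real survey analyses use tree-based pair counting and survey-specific masks.

        import numpy as np

        def pair_counts(x, y, edges):
            """Brute-force histogram of pairwise separations (small N only)."""
            d = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=2).ravel()
            return np.histogram(d, bins=edges)[0].astype(float)

        def landy_szalay(data, edges, n_random=None, seed=0):
            """xi(r) = (DD - 2*DR + RR) / RR, counts normalized by pair totals.

            `data` is an (N, 3) array of positions in a cube; self-pairs fall
            in the zero-separation bin and are excluded as long as the first
            bin edge is positive.
            """
            rng = np.random.default_rng(seed)
            n_random = n_random or 5 * len(data)
            rand = rng.uniform(0.0, data.max(), size=(n_random, 3))
            nd, nr = float(len(data)), float(n_random)
            dd = pair_counts(data, data, edges) / (nd * (nd - 1.0))
            rr = pair_counts(rand, rand, edges) / (nr * (nr - 1.0))
            dr = pair_counts(data, rand, edges) / (nd * nr)
            return (dd - 2.0 * dr + rr) / rr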

  9. Materials/manufacturing element of the Advanced Turbine Systems Program

    SciTech Connect

    Karnitz, M.A.; Holcomb, R.S.; Wright, I.G.

    1995-10-01

    The technology based portion of the Advanced Turbine Systems Program (ATS) contains several subelements which address generic technology issues for land-based gas-turbine systems. One subelement is the Materials/Manufacturing Technology Program which is coordinated by DOE-Oak Ridge Operations and Oak Ridge National Laboratory (ORNL). The work in this subelement is being performed predominantly by industry with assistance from universities and the national laboratories. Projects in this subelement are aimed toward hastening the incorporation of new materials and components in gas turbines. A materials/manufacturing plan was developed in FY 1994 with input from gas turbine manufacturers, materials suppliers, universities, and government laboratories. The plan outlines seven major subelements which focus on materials issues and manufacturing processes. Work is currently under way in four of the seven major subelements. There are now major projects on coatings and process development, scale-up of single crystal airfoil manufacturing technology, materials characterization, and technology information exchange.

  10. Large-scale planar lightwave circuits

    NASA Astrophysics Data System (ADS)

    Bidnyk, Serge; Zhang, Hua; Pearson, Matt; Balakrishnan, Ashok

    2011-01-01

    By leveraging advanced wafer processing and flip-chip bonding techniques, we have succeeded in hybrid integrating a myriad of active optical components, including photodetectors and laser diodes, with our planar lightwave circuit (PLC) platform. We have combined hybrid integration of active components with monolithic integration of other critical functions, such as diffraction gratings, on-chip mirrors, mode-converters, and thermo-optic elements. Further process development has led to the integration of polarization controlling functionality. Most recently, all these technological advancements have been combined to create large-scale planar lightwave circuits that comprise hundreds of optical elements integrated on chips less than a square inch in size.

  11. Low-speed aerodynamic characteristics from wind-tunnel tests of a large-scale advanced arrow-wing supersonic-cruise transport concept

    NASA Technical Reports Server (NTRS)

    Smith, P. M.

    1978-01-01

    Tests have been conducted to extend the existing low-speed aerodynamic data base of advanced supersonic-cruise arrow-wing configurations. Principal configuration variables included wing leading-edge flap deflection, wing trailing-edge flap deflection, horizontal tail effectiveness, and fuselage forebody strakes. A limited investigation was also conducted to determine the low-speed aerodynamic effects due to slotted trailing-edge flaps. Results of this investigation demonstrate that deflecting the wing leading-edge flaps downward to suppress the wing apex vortices provides improved static longitudinal stability; however, it also results in significantly reduced static directional stability. The use of selected fuselage forebody strakes is found to be effective in increasing the level of positive static directional stability. Drooping the fuselage nose, which is required for low-speed pilot vision, significantly improves the lateral-directional trim characteristics.

  12. Soft computing in design and manufacturing of advanced materials

    NASA Technical Reports Server (NTRS)

    Cios, Krzysztof J.; Baaklini, George Y; Vary, Alex

    1993-01-01

    The potential of fuzzy sets and neural networks, often referred to as soft computing, for aiding in all aspects of manufacturing of advanced materials like ceramics is addressed. In design and manufacturing of advanced materials, it is desirable to find which of the many processing variables contribute most to the desired properties of the material. There is also interest in real time quality control of parameters that govern material properties during processing stages. The concepts of fuzzy sets and neural networks are briefly introduced and it is shown how they can be used in the design and manufacturing processes. These two computational methods are alternatives to other methods such as the Taguchi method. The two methods are demonstrated by using data collected at NASA Lewis Research Center. Future research directions are also discussed.
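
    As a flavor of the fuzzy-set side of soft computing, the sketch below evaluates a triangular membership function and a single fuzzy rule using min as the AND operator; the processing variables, linguistic terms, and all numerical values are invented for illustration and are not from the NASA Lewis data sets.

        import numpy as np

        def triangular(x, a, b, c):
            """Triangular fuzzy membership: 0 at `a` and `c`, 1 at the peak `b`."""
            return float(np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0))

        # Hypothetical rule: IF sintering temperature is "high" AND hold time is
        # "long" THEN density is "high".
        temp, hold = 1620.0, 95.0
        mu_temp_high = triangular(temp, 1500.0, 1650.0, 1800.0)
        mu_hold_long = triangular(hold, 60.0, 120.0, 180.0)
        rule_strength = min(mu_temp_high, mu_hold_long)  # min as fuzzy AND
        print(f"rule fires with strength {rule_strength:.2f}")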

  13. The Effect of the Implementation of Advanced Manufacturing Technologies on Training in the Manufacturing Sector

    ERIC Educational Resources Information Center

    Castrillon, Isabel Dieguez; Cantorna, Ana I. Sinde

    2005-01-01

    Purpose: The aim of this article is to gain insight into some of the factors that determine personnel-training efforts in companies introducing advanced manufacturing technologies (AMTs). The study provides empirical evidence from a sector with high rates of technological modernisation. Design/methodology/approach: "Ad hoc" survey of 90…

  14. Large-scale neuromorphic computing systems

    NASA Astrophysics Data System (ADS)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  15. Process development status report for advanced manufacturing projects

    SciTech Connect

    Brinkman, J.R.; Homan, D.A.

    1990-03-30

    This is the final status report for the approved Advanced Manufacturing Projects for FY 1989. Five of the projects were begun in FY 1987, one in FY 1988, and one in FY 1989. The approved projects cover technology areas in welding, explosive material processing and evaluation, ion implantation, and automated manufacturing. It is expected that the successful completion of these projects will result in improved quality and/or reduced cost for components produced by Mound. Those projects not brought to completion will be continued under process development in FY 1990.

  16. Advanced composite aileron for L-1011 transport aircraft: Aileron manufacture

    NASA Technical Reports Server (NTRS)

    Dunning, E. G.; Cobbs, W. L.; Legg, R. L.

    1981-01-01

    The fabrication activities of the Advanced Composite Aileron (ACA) program are discussed. These activities included detail fabrication, manufacturing development, assembly, repair, and quality assurance. Five ship sets of ailerons were manufactured. The detail fabrication of ribs, spar, and covers was accomplished on male tools with a common cure cycle. Graphite/epoxy tape and fabric and syntactic epoxy materials were utilized in the fabrication. The ribs and spar were net cured and required no post-cure trim. Material inconsistencies led to additional manufacturing development of the front spar during the production effort. Assembly was accomplished in subassembly and assembly fixtures. The manual drilling system utilized a dagger-type drill in a hydraulic-feed-controlled hand drill. Coupon testing was performed for each detail.

  17. Spacesuit glove manufacturing enhancements through the use of advanced technologies

    NASA Technical Reports Server (NTRS)

    Cadogan, David; Bradley, David; Kosmo, Joseph

    1993-01-01

    The success of astronauts performing extravehicular activity (EVA) on orbit is highly dependent upon the performance of their spacesuit gloves. A study has recently been conducted to advance the development and manufacture of spacesuit gloves. The process replaces the manual techniques of spacesuit glove manufacture by utilizing emerging technologies such as laser scanning, Computer Aided Design (CAD), computer-generated two-dimensional patterns from three-dimensional surfaces, rapid prototyping technology, and laser cutting of materials, to manufacture the new gloves. Results of the program indicate that the baseline process will not increase the cost of the gloves as compared to the existing styles and, in production, may reduce the cost of the gloves. Perhaps the most important outcome of the Laserscan process is that greater accuracy and design control can be realized. Greater accuracy was achieved in the baseline anthropometric measurement and CAD data measurement, which subsequently improved the design features. This effectively enhances glove performance through better fit and comfort.

  18. Integration of Advanced Simulation and Visualization for Manufacturing Process Optimization

    NASA Astrophysics Data System (ADS)

    Zhou, Chenn; Wang, Jichao; Tang, Guangwu; Moreland, John; Fu, Dong; Wu, Bin

    2016-05-01

    The integration of simulation and visualization can provide a cost-effective tool for process optimization, design, scale-up, and troubleshooting. The Center for Innovation through Visualization and Simulation (CIVS) at Purdue University Northwest has developed methodologies for such integration, with applications in various manufacturing processes. The methodologies have proven useful for virtual design and virtual training, providing solutions that address issues of energy, environment, productivity, safety, and quality in steel and other industries. Working with its industrial partners, CIVS has provided solutions to companies, saving them over US$38 million. CIVS is currently working with the steel industry to establish an industry-led Steel Manufacturing Simulation and Visualization Consortium through the support of a National Institute of Standards and Technology AMTech Planning Grant. The consortium focuses on supporting the development and implementation of simulation and visualization technologies to advance steel manufacturing across the value chain.

  19. Phase 1 Development Testing of the Advanced Manufacturing Demonstrator Engine

    NASA Technical Reports Server (NTRS)

    Case, Nicholas L.; Eddleman, David E.; Calvert, Marty R.; Bullard, David B.; Martin, Michael A.; Wall, Thomas R.

    2016-01-01

    The Additive Manufacturing Development Breadboard Engine (BBE) is a pressure-fed liquid oxygen/pump-fed liquid hydrogen (LOX/LH2) expander cycle engine that was built and operated by NASA at Marshall Space Flight Center's East Test Area. The breadboard engine was conceived as a technology demonstrator for additive manufacturing technologies for an advanced upper stage prototype engine. The components tested on the breadboard engine included an ablative chamber, injector, main fuel valve, turbine bypass valve, main oxidizer valve, mixer, and fuel turbopump. All parts except the ablative chamber were additively manufactured. The BBE was successfully hot-fire tested seven times. Data collected from the test series will be used for follow-on demonstration tests with a liquid oxygen turbopump and a regeneratively cooled chamber and nozzle.

  1. Advanced Manufacturing Systems in Food Processing and Packaging Industry

    NASA Astrophysics Data System (ADS)

    Shafie Sani, Mohd; Aziz, Faieza Abdul

    2013-06-01

    In this paper, several advanced manufacturing systems in the food processing and packaging industry are reviewed, including biodegradable smart packaging and nanocomposites, and advanced automation control systems consisting of fieldbus technology, distributed control systems, and food safety inspection features. The main purpose of current technology in the food processing and packaging industry is discussed in light of major concerns about plant process efficiency, productivity, quality, and safety. These applications were chosen because they are robust, flexible, reconfigurable, efficient, and preserve the quality of the food.

  2. Advanced Manufacturing for a U.S. Clean Energy Economy (Fact Sheet)

    SciTech Connect

    Not Available

    2012-03-01

    This fact sheet is an overview of the U.S. Department of Energy's Advanced Manufacturing Office. Manufacturing is central to our economy, culture, and history. The industrial sector produces 11% of U.S. gross domestic product (GDP), employs 12 million people, and generates 57% of U.S. export value. However, U.S. industry consumes about one-third of all energy produced in the United States, and significant cost-effective energy efficiency and advanced manufacturing opportunities remain unexploited. As a critical component of the National Innovation Policy for Advanced Manufacturing, the U.S. Department of Energy's (DOE's) Advanced Manufacturing Office (AMO) is focused on creating a fertile environment for advanced manufacturing innovation, enabling vigorous domestic development of transformative manufacturing technologies, promoting coordinated public and private investment in precompetitive advanced manufacturing technology infrastructure, and facilitating the rapid scale-up and market penetration of advanced manufacturing technologies.

  3. Measurement of the steady surface pressure distribution on a single rotation large scale advanced prop-fan blade at Mach numbers from 0.03 to 0.78

    NASA Technical Reports Server (NTRS)

    Bushnell, Peter

    1988-01-01

    The aerodynamic pressure distribution was determined on a rotating Prop-Fan blade at the S1-MA wind tunnel facility operated by the Office National d'Etudes et de Recherches Aerospatiales (ONERA) in Modane, France. The pressure distributions were measured at thirteen radial stations on a single rotation Large-Scale Advanced Prop-Fan (LAP/SR7) blade, for a sequence of operating conditions including inflow Mach numbers ranging from 0.03 to 0.78. Pressure distributions for more than one power coefficient and/or advance ratio setting were measured for most of the inflow Mach numbers investigated. Due to facility power limitations, the Prop-Fan test installation was a two-bladed version of the eight-bladed design configuration. The power coefficient range investigated was therefore selected to cover the typical power loading per blade which occurs within the Prop-Fan operating envelope. The experimental results provide an extensive source of information on the aerodynamic behavior of the swept Prop-Fan blade, including details which have been elusive to current computational models and which do not appear in two-dimensional airfoil data.

  4. 78 FR 34346 - Proposed Information Collection; Comment Request; NIST MEP Advanced Manufacturing Jobs and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-07

    ... Advanced Manufacturing Jobs and Innovation Accelerator Challenge (AMJIAC) Client Impact Survey AGENCY... information collection. The purpose of the Advanced Manufacturing Jobs and Innovation Accelerator Challenge... to support job creation, encourage economic development, and enhance the competitiveness of...

  5. National Center for Advanced Information Components Manufacturing. Program summary report, Volume 1

    SciTech Connect

    1996-10-01

    The National Center for Advanced Information Components Manufacturing focused on manufacturing research and development for flat panel displays, advanced lithography, microelectronics, and optoelectronics. This report provides an overview of the program, summaries of the technical projects, and key program accomplishments.

  6. Needs, opportunities, and options for large scale systems research

    SciTech Connect

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of large-scale systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering, and manufacturing/operations research for their ideas concerning large-scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large-scale systems research. He was also requested to convene a conference with three experts from each area as panel members to discuss the general area of large-scale systems research. The conference was held on March 26-27, 1984, in Pittsburgh with nine panel members and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  7. Large-scale PACS implementation.

    PubMed

    Carrino, J A; Unkel, P J; Miller, I D; Bowser, C L; Freckleton, M W; Johnson, T G

    1998-08-01

    The transition to filmless radiology is a much more formidable task than making the request for proposal to purchase a Picture Archiving and Communications System (PACS). The Department of Defense and the Veterans Administration have been pioneers in the transformation of medical diagnostic imaging to the electronic environment. Many civilian sites are expected to implement large-scale PACS in the next five to ten years. This presentation will relate the empirical insights gleaned at our institution from a large-scale PACS implementation. Our PACS integration was introduced into a fully operational department (not a new hospital) in which work flow had to continue with minimal impact. Impediments to user acceptance will be addressed. The critical components of this enormous task will be discussed. The topics covered during this session will include issues such as phased implementation, DICOM (digital imaging and communications in medicine) standard-based interaction of devices, hospital information system (HIS)/radiology information system (RIS) interfaces, user approval, networking, workstation deployment, and backup procedures. The presentation will make specific suggestions regarding the implementation team, operating instructions, quality control (QC), training, and education. The concept of identifying key functional areas is relevant to transitioning the facility to be entirely on line. Special attention must be paid to specific functional areas, such as operating rooms and trauma rooms, where the clinical requirements may not match the PACS capabilities. The printing of films may be necessary in certain circumstances. The integration of teleradiology and remote clinics into a PACS is a salient topic with respect to the overall role of radiologists in providing rapid consultation. A Web-based server allows a clinician to review images and reports on a desktop (personal) computer and thus reduces the number of dedicated PACS review workstations. This session

  8. 10 CFR 611.202 - Advanced Technology Vehicle Manufacturing Facility Award Program.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Advanced Technology Vehicle Manufacturing Facility Award... TECHNOLOGY VEHICLES MANUFACTURER ASSISTANCE PROGRAM Facility/Funding Awards § 611.202 Advanced Technology Vehicle Manufacturing Facility Award Program. DOE may issue, under the Advanced Technology...

  9. 10 CFR 611.202 - Advanced Technology Vehicle Manufacturing Facility Award Program.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Advanced Technology Vehicle Manufacturing Facility Award... TECHNOLOGY VEHICLES MANUFACTURER ASSISTANCE PROGRAM Facility/Funding Awards § 611.202 Advanced Technology Vehicle Manufacturing Facility Award Program. DOE may issue, under the Advanced Technology...

  10. Large-Scale Sequence Comparison.

    PubMed

    Lal, Devi; Verma, Mansi

    2017-01-01

    There are millions of sequences deposited in genomic databases, and it is an important task to categorize them according to their structural and functional roles. Sequence comparison is a prerequisite for proper categorization of both DNA and protein sequences, and helps in assigning a putative or hypothetical structure and function to a given sequence. There are various methods available for comparing sequences, alignment being first and foremost, both for sequences with a small number of base pairs and for large-scale genome comparison. Various tools are available for performing pairwise comparison of large sequences. The best-known tools either perform global alignment or generate local alignments between the two sequences. In this chapter we first provide basic information regarding sequence comparison. This is followed by a description of the PAM and BLOSUM matrices that form the basis of sequence comparison. We also give a practical overview of currently available methods such as BLAST and FASTA, followed by a description and overview of tools available for genome comparison, including LAGAN, MUMmer, BLASTZ, and AVID.
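
    For readers new to alignment, the following minimal implementation of Needleman-Wunsch global alignment scoring shows the dynamic-programming idea that the tools above build on; the toy match/mismatch/gap scores stand in for a PAM or BLOSUM substitution matrix and an affine gap model.

        def needleman_wunsch_score(a, b, match=1, mismatch=-1, gap=-2):
            """Global alignment score by dynamic programming."""
            n, m = len(a), len(b)
            score = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                score[i][0] = i * gap          # align a's prefix against gaps
            for j in range(1, m + 1):
                score[0][j] = j * gap          # align b's prefix against gaps
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                    score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
            return score[n][m]

        print(needleman_wunsch_score("GATTACA", "GCATGCU"))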

  11. Large Scale Magnetostrictive Valve Actuator

    NASA Technical Reports Server (NTRS)

    Richard, James A.; Holleman, Elizabeth; Eddleman, David

    2008-01-01

    Marshall Space Flight Center's Valves, Actuators and Ducts Design and Development Branch developed a large-scale magnetostrictive valve actuator. The potential advantages of this technology are faster, more efficient valve actuators that consume less power, provide precise position control, and deliver higher flow rates than conventional solenoid valves. Magnetostrictive materials change dimensions when a magnetic field is applied; this property is referred to as magnetostriction. Magnetostriction is caused by the alignment of the magnetic domains in the material's crystalline structure with the applied magnetic field lines. Typically, the material elongates in the axial direction and constricts in the radial direction, resulting in no net change in volume. All hardware and testing is complete. This paper discusses the potential applications of the technology; gives an overview of the as-built actuator design; describes problems uncovered during development testing; reviews the test data and evaluates weaknesses of the design; and identifies areas for improvement for future work. This actuator holds promise as a low-power, high-load, proportionally controlled actuator for valves requiring loads of 440 to 1500 newtons.

  12. Large scale cluster computing workshop

    SciTech Connect

    Dane Skow; Alan Silverman

    2002-12-23

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of thousands of processors to be used by hundreds to thousands of independent users, expanding the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and, by implication, to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building, and operating a large computing cluster in HENP; and (4) to identify and connect groups with similar interests within HENP and the larger clustering community.

  13. A manufacturing database of advanced materials used in spacecraft structures

    NASA Technical Reports Server (NTRS)

    Bao, Han P.

    1994-01-01

    Cost savings opportunities over the life cycle of a product are highest in the early exploratory phase when different design alternatives are evaluated not only for their performance characteristics but also their methods of fabrication which really control the ultimate manufacturing costs of the product. In the past, Design-To-Cost methodologies for spacecraft design concentrated on the sizing and weight issues more than anything else at the early so-called 'Vehicle Level' (Ref: DOD/NASA Advanced Composites Design Guide). Given the impact of manufacturing cost, the objective of this study is to identify the principal cost drivers for each materials technology and propose a quantitative approach to incorporating these cost drivers into the family of optimization tools used by the Vehicle Analysis Branch of NASA LaRC to assess various conceptual vehicle designs. The advanced materials being considered include aluminum-lithium alloys, thermoplastic graphite-polyether etherketone composites, graphite-bismaleimide composites, graphite- polyimide composites, and carbon-carbon composites. Two conventional materials are added to the study to serve as baseline materials against which the other materials are compared. These two conventional materials are aircraft aluminum alloys series 2000 and series 7000, and graphite-epoxy composites T-300/934. The following information is available in the database. For each material type, the mechanical, physical, thermal, and environmental properties are first listed. Next the principal manufacturing processes are described. Whenever possible, guidelines for optimum processing conditions for specific applications are provided. Finally, six categories of cost drivers are discussed. They include, design features affecting processing, tooling, materials, fabrication, joining/assembly, and quality assurance issues. It should be emphasized that this database is not an exhaustive database. Its primary use is to make the vehicle designer

  14. Feature-based tolerancing for advanced manufacturing applications

    SciTech Connect

    Brown, C.W.; Kirk, W.J. III; Simons, W.R.; Ward, R.C.; Brooks, S.L.

    1994-11-01

    A primary requirement for the successful deployment of advanced manufacturing applications is the need for a complete and accessible definition of the product. This product definition must not only provide an unambiguous description of a product's nominal shape but must also contain complete tolerance specification and general property attributes. Likewise, the product definition's geometry, topology, tolerance data, and modeler manipulative routines must be fully accessible through a robust application programmer interface. This paper describes a tolerancing capability using features that complements a geometric solid model with a representation of conventional and geometric tolerances and non-shape property attributes. This capability guarantees a complete and unambiguous definition of tolerances for manufacturing applications. An object-oriented analysis and design of the feature-based tolerance domain was performed. The design represents and relates tolerance features, tolerances, and datum reference frames. The design also incorporates operations that verify correctness and check for the completeness of the overall tolerance definition. The checking algorithm is based upon the notion of satisfying all of a feature's toleranceable aspects. Benefits from the feature-based tolerance modeler include: advancing complete product definition initiatives, incorporating tolerances in product data exchange, and supplying computer-integrated manufacturing applications with tolerance information.
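
    The completeness check described here, satisfying all of a feature's toleranceable aspects, can be sketched with a small object model; the class, aspect names, and tolerance specifications below are illustrative inventions, not the paper's actual schema.

        from dataclasses import dataclass, field

        @dataclass
        class ToleranceFeature:
            """A feature is completely toleranced only when every one of its
            toleranceable aspects has a tolerance assigned."""
            name: str
            toleranceable_aspects: set
            tolerances: dict = field(default_factory=dict)  # aspect -> spec

            def add_tolerance(self, aspect, spec):
                if aspect not in self.toleranceable_aspects:
                    raise ValueError(f"{aspect!r} is not toleranceable on {self.name}")
                self.tolerances[aspect] = spec

            def missing_aspects(self):
                return self.toleranceable_aspects - self.tolerances.keys()

        # Usage: a hole feature still missing its perpendicularity callout.
        hole = ToleranceFeature("hole-1", {"size", "position", "perpendicularity"})
        hole.add_tolerance("size", "+0.05/-0.00 mm")
        hole.add_tolerance("position", "dia 0.1 mm at MMC, datums A|B|C")
        assert hole.missing_aspects() == {"perpendicularity"}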

  15. Advances in the manufacturing, types, and applications of biosensors

    NASA Astrophysics Data System (ADS)

    Ravindra, Nuggehalli M.; Prodan, Camelia; Fnu, Shanmugamurthy; Padronl, Ivan; Sikha, Sushil K.

    2007-12-01

    In recent years, there have been significant technological advancements in the manufacturing, types, and applications of biosensors. Applications include clinical and non-clinical diagnostics for home, bio-defense, bio-remediation, environment, agriculture, and the food industry. Biosensors have progressed beyond the detection of biological threats such as anthrax and are finding use in a number of non-biological applications. Emerging biosensor technologies such as lab-on-a-chip have revolutionized integration approaches for a very flexible, innovative, and user-friendly platform. An overview of the fundamentals, types, applications, and manufacturers, as well as the market trends of biosensors, is presented here. Two case studies are discussed: one on patch clamping and dielectric spectroscopy as characterization techniques for biological sensing, and the other on lithium phthalocyanine, a material being developed for in-vivo oximetry.

  16. Large-Scale Information Systems

    SciTech Connect

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components: data sources, networks, computing engines, simulations, human-in-the-loop control, and remote access stations. These systems provide such capabilities as workflow, data fusion, and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  17. Large-scale parametric survival analysis.

    PubMed

    Mittal, Sushil; Madigan, David; Cheng, Jerry Q; Burd, Randall S

    2013-10-15

    Survival analysis has been a topic of active statistical research in the past few decades, with applications spread across several areas. Traditional applications usually consider data with only a small number of predictors and a few hundred or thousand observations. Recent advances in data acquisition techniques and computational power have led to considerable interest in analyzing very-high-dimensional data, where the number of predictor variables and the number of observations range between 10^4 and 10^6. In this paper, we present a tool for performing large-scale regularized parametric survival analysis using a variant of the cyclic coordinate descent method. Through our experiments on two real data sets, we show that application of regularized models to high-dimensional data avoids overfitting and can provide improved predictive performance and calibration over corresponding low-dimensional models.
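
    To make the approach concrete, here is a toy stand-in: an L2-regularized exponential survival model fit by cyclic coordinate descent, taking one Newton step per coordinate per sweep. The model family, penalty, and update rule are simplifications of what the paper implements at much larger scale.

        import numpy as np

        def exp_survival_ccd(X, time, event, alpha=1.0, sweeps=50):
            """Cyclic coordinate descent for an exponential survival model.

            Hazard rate: lambda_i = exp(x_i . beta). Penalized negative
            log-likelihood: sum_i [time_i*exp(eta_i) - event_i*eta_i]
            + alpha*||beta||^2, with eta = X @ beta.
            """
            n, p = X.shape
            beta = np.zeros(p)
            eta = X @ beta
            for _ in range(sweeps):
                for j in range(p):
                    lam = np.exp(eta)
                    g = X[:, j] @ (time * lam - event) + 2.0 * alpha * beta[j]
                    h = (X[:, j] ** 2) @ (time * lam) + 2.0 * alpha
                    delta = -g / h                 # one Newton step on coordinate j
                    beta[j] += delta
                    eta += X[:, j] * delta         # keep linear predictor in sync
            return beta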

  18. Large scale cryogenic fluid systems testing

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA Lewis Research Center's Cryogenic Fluid Systems Branch (CFSB) within the Space Propulsion Technology Division (SPTD) has the ultimate goal of enabling the long term storage and in-space fueling/resupply operations for spacecraft and reusable vehicles in support of space exploration. Using analytical modeling, ground based testing, and on-orbit experimentation, the CFSB is studying three primary categories of fluid technology: storage, supply, and transfer. The CFSB is also investigating fluid handling, advanced instrumentation, and tank structures and materials. Ground based testing of large-scale systems is done using liquid hydrogen as a test fluid at the Cryogenic Propellant Tank Facility (K-site) at Lewis' Plum Brook Station in Sandusky, Ohio. A general overview of tests involving liquid transfer, thermal control, pressure control, and pressurization is given.

  19. Advanced manufacturing technologies for the BeCOAT telescope

    NASA Astrophysics Data System (ADS)

    Sweeney, Michael N.; Rajic, Slobodan; Seals, Roland D.

    1994-02-01

    The beryllium cryogenic off-axis telescope (BeCOAT) uses a two-mirror, non-reimaging, off-axis Ritchey-Chretien design with all-beryllium optics, structures, and baffles. The purpose of this telescope is the system-level demonstration of advanced manufacturing technologies for optics, optical benches, and baffle assemblies. The key issues addressed are single-point diamond turning of beryllium optics, survivable fastening techniques, minimum beryllium utilization, and technologies leading to self-aligning, all-beryllium optical systems.

  20. Advanced computational research in materials processing for design and manufacturing

    SciTech Connect

    Zacharia, T.

    1994-12-31

    The computational requirements for the design and manufacture of automotive components have increased dramatically with the push to produce automobiles with three times the mileage. Automotive component design systems are becoming increasingly reliant on structural analysis, requiring larger and more complex analyses: more three-dimensional analyses, larger model sizes, and routine consideration of transient and non-linear effects. Such analyses must be performed rapidly to minimize delays in the design and development process, which drives the need for parallel computing. This paper briefly describes advanced computational research in superplastic forming and automotive crashworthiness.

  1. Large-scale Digitoxin Intoxication

    PubMed Central

    Lely, A. H.; Van Enter, C. H. J.

    1970-01-01

    Because of an error in the manufacture of digoxin tablets a large number of patients took tablets that contained 0·20 mg. of digitoxin and 0·05 mg. of digoxin instead of the prescribed 0·25 mg. of digoxin. The symptoms are described of 179 patients who took these tablets and suffered from digitalis intoxication. Of these patients, 125 had taken the faultily composed tablets for more than three weeks. In 48 patients 105 separate disturbances in rhythm or in atrioventricular conduction were observed on the electrocardiogram. Extreme fatigue and serious eye conditions were observed in 95% of the patients. Twelve patients had a transient psychosis. Extensive ophthalmological observations indicated that the visual complaints were most probably caused by a transient retrobulbar neuritis. PMID:5273245

  2. Advanced Manufacturing Processes Laboratory Building 878 hazards assessment document

    SciTech Connect

    Wood, C.; Thornton, W.; Swihart, A.; Gilman, T.

    1994-07-01

    The intent of the hazards assessment process is to document the impact of the release of hazards at the Advanced Manufacturing Processes Laboratory (AMPL) that are significant enough to warrant consideration in Sandia National Laboratories` operational emergency management program. This hazards assessment is prepared in accordance with the Department of Energy Order 5500.3A requirement that facility-specific hazards assessments be prepared, maintained, and used for emergency planning purposes. This hazards assessment provides an analysis of the potential airborne release of chemicals associated with the operations and processes at the AMPL. This research and development laboratory develops advanced manufacturing technologies, practices, and unique equipment and provides the fabrication of prototype hardware to meet the needs of Sandia National Laboratories, Albuquerque, New Mexico (SNL/NM). The focus of the hazards assessment is the airborne release of materials because this requires the most rapid, coordinated emergency response on the part of the AMPL, SNL/NM, collocated facilities, and surrounding jurisdictions to protect workers, the public, and the environment.

  3. 10 CFR 611.202 - Advanced Technology Vehicle Manufacturing Facility Award Program.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    Under the Advanced Technology Vehicle Manufacturing Facility Award Program (10 CFR part 611, subpart C), DOE may issue awards for eligible projects.

  4. 10 CFR 611.202 - Advanced Technology Vehicle Manufacturing Facility Award Program.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    Under the Advanced Technology Vehicle Manufacturing Facility Award Program (10 CFR part 611, subpart C), DOE may issue awards for eligible projects.

  5. 10 CFR 611.202 - Advanced Technology Vehicle Manufacturing Facility Award Program.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    Under the Advanced Technology Vehicle Manufacturing Facility Award Program (10 CFR part 611, subpart C), DOE may issue awards for eligible projects.

  6. Influence of Manufacturing Processes and Microstructures on the Performance and Manufacturability of Advanced High Strength Steels

    SciTech Connect

    Choi, Kyoo Sil; Liu, Wenning N.; Sun, Xin; Khaleel, Mohammad A.

    2009-10-01

    Advanced high strength steels (AHSS) are performance-based steel grades whose global material properties can be achieved with various steel chemistries and manufacturing processes, leading to various microstructures. In this paper, we investigate the influence of supplier variation, and the resulting microstructure differences, on the overall mechanical properties as well as the local formability behaviors of AHSS. For this purpose, we first examined the basic material properties and the transformation kinetics of TRansformation Induced Plasticity (TRIP) 800 steels from three different suppliers under different testing temperatures. The experimental results show a significant supplier (i.e., manufacturing process) dependency of the TRIP 800 steel mechanical and microstructure properties. Next, we examined the local formability of two commercial Dual Phase (DP) 980 steels during the stamping process. The two commercial DP 980 steels also exhibit noticeably different formability during stamping, in the sense that one of them shows a severe tendency for shear fracture. Microstructure-based finite element analyses are then carried out to simulate the localized deformation process with the two DP 980 microstructures, and the results suggest that the possible reason for the difference in formability lies in the morphology of the hard martensite phase in the DP microstructure.

  7. Large-scale autostereoscopic outdoor display

    NASA Astrophysics Data System (ADS)

    Reitterer, Jörg; Fidler, Franz; Saint Julien-Wallsee, Ferdinand; Schmid, Gerhard; Gartner, Wolfgang; Leeb, Walter; Schmid, Ulrich

    2013-03-01

    State-of-the-art autostereoscopic displays are often limited in size, effective brightness, number of 3D viewing zones, and maximum 3D viewing distances, all of which are mandatory requirements for large-scale outdoor displays. Conventional autostereoscopic indoor concepts like lenticular lenses or parallax barriers cannot simply be adapted for these screens due to the inherent loss of effective resolution and brightness, which would reduce both image quality and sunlight readability. We have developed a modular autostereoscopic multi-view laser display concept with sunlight readable effective brightness, theoretically up to several thousand 3D viewing zones, and maximum 3D viewing distances of up to 60 meters. For proof-of-concept purposes a prototype display with two pixels was realized. Due to various manufacturing tolerances each individual pixel has slightly different optical properties, and hence the 3D image quality of the display has to be calculated stochastically. In this paper we present the corresponding stochastic model, we evaluate the simulation and measurement results of the prototype display, and we calculate the achievable autostereoscopic image quality to be expected for our concept.
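
    The record's stochastic treatment of manufacturing tolerances can be sketched with a toy Monte Carlo model. This is not the authors' model: the zone width, tolerance, and crosstalk criterion below are assumed values chosen only to show the shape of such a calculation.

```python
import numpy as np

# Toy Monte Carlo: per-pixel angular emission errors drawn from a tolerance
# distribution; estimate how often a ray aimed at one viewing zone lands in
# a neighbor. Zone width and tolerance are assumed values.
rng = np.random.default_rng(1)
zone_width_mrad = 1.0                  # assumed angular width of a viewing zone
sigma_mrad = 0.3                       # assumed 1-sigma manufacturing tolerance
n_pixels, n_trials = 10_000, 100

errors = rng.normal(0.0, sigma_mrad, size=(n_trials, n_pixels))
crosstalk = np.abs(errors) > zone_width_mrad / 2   # ray leaves its own zone
print(f"mean crosstalk fraction: {crosstalk.mean():.3%}")
```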

  8. Advanced manufacturing by spray forming: Aluminum strip and microelectromechanical systems

    SciTech Connect

    McHugh, K.M.

    1994-12-31

    Spray forming is an advanced materials processing technology that converts a bulk liquid metal to a near-net-shape solid by depositing atomized droplets onto a suitably shaped substrate. By combining rapid solidification processing with product shape control, spray forming can reduce manufacturing costs while improving product quality. INEL is developing a unique spray-forming method based on de Laval (converging/diverging) nozzle designs to produce near-net-shape solids and coatings of metals, polymers, and composite materials. Properties of the spray-formed material are tailored by controlling the characteristics of the spray plume and substrate. Two examples are described: high-volume production of aluminum alloy strip, and the replication of micron-scale features in micropatterned polymers during the production of microelectromechanical systems.

  9. Developing novel 3D antennas using advanced additive manufacturing technology

    NASA Astrophysics Data System (ADS)

    Mirzaee, Milad

    In today's world of wireless communication systems, antenna engineering is rapidly advancing as wireless services continue to expand in support of emerging commercial applications. Antennas play a key role in the performance of advanced transceiver systems, where they serve to convert electric power to electromagnetic waves and vice versa. Researchers have held significant interest in developing this crucial component for wireless communication systems by employing a variety of design techniques. In the past few years, demand for electrically small antennas has continued to increase, particularly among portable and mobile wireless devices, medical electronics, and aerospace systems. This trend toward smaller electronic devices makes three-dimensional (3D) antennas very appealing, since they can be designed to use all of the available space inside a device. Additive Manufacturing (AM) methods could provide good solutions for antenna design in the next generation of wireless communication systems. In this thesis, the design and fabrication of 3D printed antennas using AM technology is studied. To demonstrate this application of AM, different types of antenna structures have been designed and fabricated using various manufacturing processes. This thesis studies, for the first time, embedded conductive 3D printed antennas using PolyLactic Acid (PLA) and Acrylonitrile Butadiene Styrene (ABS) for the substrate parts and high-temperature carbon paste for the conductive parts, a promising candidate for overcoming the limitations of direct printing on 3D surfaces, currently the most popular method of fabricating the conductive parts of such antennas. This thesis also studies, for the first time, the fabrication of antennas with 3D printed conductive parts, which can contribute to the new generation of 3D printed antennas.

  10. Co-Extrusion: Advanced Manufacturing for Energy Devices

    SciTech Connect

    Cobb, Corie Lynn

    2016-11-18

    The development of mass markets for large-format batteries, including electric vehicles (EVs) and grid support, depends on both cost reductions and performance enhancements to improve their economic viability. Palo Alto Research Center (PARC) has developed a multi-material, advanced manufacturing process called co-extrusion (CoEx) to remove multiple steps in a conventional battery coating process with the potential to simultaneously increase battery energy and power density. CoEx can revolutionize battery manufacturing across most chemistries, significantly lowering end-product cost and shifting the underlying economics to make EVs and other battery applications a reality. PARC’s scale-up of CoEx for electric vehicle (EV) batteries builds on a solid base of experience in applying CoEx to solar cell manufacturing, deposition of viscous ceramic pastes, and Li-ion battery chemistries. In the solar application, CoEx has been deployed commercially at production scale where multi-channel CoEx printheads are used to print viscous silver gridline pastes at full production speeds (>40 ft/min). This operational scale-up provided invaluable experience with the nuances of speed, yield, and maintenance inherent in taking a new technology to the factory floor. PARC has leveraged this experience, adapting the CoEx process for Lithium-ion (Li-ion) battery manufacturing. To date, PARC has worked with Li-ion battery materials and structured cathodes with high-density Li-ion regions and low-density conduction regions, documenting both energy and power performance. Modeling results for a CoEx cathode show a path towards a 10-20% improvement in capacity for an EV pouch cell. Experimentally, we have realized a co-extruded battery structure with a Lithium Nickel Manganese Cobalt (NMC) cathode at print speeds equivalent to conventional roll coating processes. The heterogeneous CoEx cathode enables improved capacity in thick electrodes at higher C-rates. The proof-of-principle coin cells

  11. Lessons from Large-Scale Renewable Energy Integration Studies: Preprint

    SciTech Connect

    Bird, L.; Milligan, M.

    2012-06-01

    In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

  12. Large-scale wind turbine structures

    NASA Technical Reports Server (NTRS)

    Spera, David A.

    1988-01-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and design of innovative structural responses. During the past 15 years a series of large HAWTs was developed, culminating in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.
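
    The fracture-mechanics approach mentioned in the closing sentence is typically a crack-growth-life calculation. The sketch below integrates the Paris law da/dN = C(ΔK)^m to estimate cycles to a critical crack size; the constants and crack sizes are illustrative textbook-style values, not Mod-5B data.

```python
import numpy as np

# Illustrative Paris-law life estimate: dK = Y * dsigma * sqrt(pi * a),
# N = integral from a0 to ac of da / (C * dK^m). All values are assumed.
C, m, Y = 1e-12, 3.0, 1.12    # C in (m/cycle)/(MPa*sqrt(m))^m; geometry factor Y
dsigma = 50.0                  # cyclic stress range, MPa
a0, ac = 1e-3, 25e-3           # initial and critical crack sizes, m

a = np.linspace(a0, ac, 10_000)
dK = Y * dsigma * np.sqrt(np.pi * a)      # stress-intensity range, MPa*sqrt(m)
N = np.trapz(1.0 / (C * dK ** m), a)      # numerically integrate the life
print(f"estimated fatigue life: {N:,.0f} cycles")
```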

  13. Advanced composites structural concepts and materials technologies for primary aircraft structures: Design/manufacturing concept assessment

    NASA Technical Reports Server (NTRS)

    Chu, Robert L.; Bayha, Tom D.; Davis, H. U.; Ingram, J. E.; Shukla, Jay G.

    1992-01-01

    Composite Wing and Fuselage Structural Design/Manufacturing Concepts have been developed and evaluated. Trade studies were performed to determine how well the concepts satisfy the program goals of 25 percent cost savings, 40 percent weight savings with aircraft resizing, and 50 percent part count reduction as compared to the aluminum Lockheed L-1011 baseline. The concepts developed using emerging technologies such as large scale resin transfer molding (RTM), automatic tow placed (ATP), braiding, out-of-autoclave and automated manufacturing processes for both thermoset and thermoplastic materials were evaluated for possible application in the design concepts. Trade studies were used to determine which concepts carry into the detailed design development subtask.

  14. Impacts of advanced manufacturing technology on parametric estimating

    NASA Astrophysics Data System (ADS)

    Hough, Paul G.

    1989-12-01

    The introduction of advanced manufacturing technology in the aerospace industry poses serious challenges for government cost analysts. Traditionally, analysts have relied on parametric estimating techniques for both planning and budgeting. Despite its problems, this approach has proven to be a remarkably useful and robust tool for estimating new weapon system costs. However, rapid improvements in both product and process technology could exacerbate current difficulties and diminish the utility of the parametric approach. This paper reviews some weaknesses associated with parametrics, examines how specific aspects of the factory of the future may further impact parametric estimating, and suggests avenues of research for their resolution. This paper is an extended version of Cost Estimating for the Factory of the Future. Parametric estimating is a method by which aggregated costs are derived as a function of high-level product characteristics or parameters. The resulting equations are known as cost estimating relationships (CERs). Such equations are particularly useful when detailed technical specifications are not available.
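
    Since the abstract defines CERs explicitly, a minimal worked example may help. The sketch below fits a power-law CER, cost = a * weight^b, by ordinary least squares on log-transformed data; the data points are invented for illustration, not drawn from any real program.

```python
import numpy as np

# Fit a power-law CER, cost = a * weight^b, on log-transformed data.
# The five (weight, cost) points are invented for illustration.
weight = np.array([1200.0, 2500.0, 4000.0, 8000.0, 15000.0])   # lb
cost = np.array([3.1, 5.8, 8.2, 14.5, 24.0])                   # $M

b, log_a = np.polyfit(np.log(weight), np.log(cost), deg=1)
a = np.exp(log_a)
print(f"CER: cost = {a:.4f} * weight^{b:.3f}")
print(f"point estimate for a 6000 lb item: ${a * 6000.0 ** b:.1f}M")
```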

  15. Prosperity Game: Advanced Manufacturing Day, May 17, 1994

    SciTech Connect

    Berman, M.

    1994-12-01

    Prosperity Games are an outgrowth and adaptation of move/countermove and seminar War Games. Prosperity Games are simulations that explore complex issues in a variety of areas including economics, politics, sociology, environment, education and research. These issues can be examined from a variety of perspectives ranging from a global, macroeconomic and geopolitical viewpoint down to the details of customer/supplier/market interactions in specific industries. All Prosperity Games are unique in that both the game format and the player contributions vary from game to game. This report documents a 90-minute Prosperity Game conducted as part of Advanced Manufacturing Day on May 17, 1994. This was the fourth game conducted under the direction of the Center for National Industrial Alliances at Sandia. Although previous games lasted from one to two days, this abbreviated game produced interesting and important results. Most of the strategies proposed in previous games were reiterated here. These included policy changes in international trade, tax laws, the legal system, and the educational system. Government support of new technologies was encouraged as well as government-industry partnerships. The importance of language in international trade was an original contribution of this game. The deliberations and recommendations of these teams provide valuable insights as to the views of this diverse group of decision makers concerning policy changes, foreign competition, and the development, delivery and commercialization of new technologies.

  16. National Center for Advanced Information Components Manufacturing. Program summary report, Volume II

    SciTech Connect

    1996-10-01

    The National Center for Advanced Information Components Manufacturing focused on manufacturing research and development for flat panel displays, advanced lithography, microelectronics, and optoelectronics. This report provides an overview of the program, program history, summaries of the technical projects, and key program accomplishments.

  17. Advanced Manufacturing Technologies (AMT): Additive Manufactured Hot Fire Planning and Testing in GRC Cell 32 Project

    NASA Technical Reports Server (NTRS)

    Fikes, John C.

    2014-01-01

    The objective of this project is to hot-fire test an additively manufactured thrust chamber assembly (TCA), consisting of an injector and thrust chamber. GRC will install the additively manufactured Inconel 625 injector, two additively manufactured (SLM) water-cooled Cu-Cr thrust chamber barrels, and one additively manufactured (SLM) water-cooled Cu-Cr thrust chamber nozzle on the test stand in Cell 32 and perform hot-fire testing of the integrated TCA.

  18. Large-scale regions of antimatter

    SciTech Connect

    Grobov, A. V. Rubin, S. G.

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  19. Large Scale Flame Spread Environmental Characterization Testing

    NASA Technical Reports Server (NTRS)

    Clayman, Lauren K.; Olson, Sandra L.; Gokoglu, Suleyman A.; Brooker, John E.; Ferkul, Paul V.; Kacher, Henry F.

    2013-01-01

    Under the Advanced Exploration Systems (AES) Spacecraft Fire Safety Demonstration Project (SFSDP), as a risk mitigation activity in support of the development of a large-scale fire demonstration experiment in microgravity, flame-spread tests were conducted in normal gravity on thin, cellulose-based fuels in a sealed chamber. The primary objective of the tests was to measure pressure rise in a chamber as sample material, burning direction (upward/downward), total heat release, heat release rate, and heat loss mechanisms were varied between tests. A Design of Experiments (DOE) method was imposed to produce an array of tests from a fixed set of constraints and a coupled response model was developed. Supplementary tests were run without experimental design to additionally vary select parameters such as initial chamber pressure. The starting chamber pressure for each test was set below atmospheric to prevent chamber overpressure. Bottom ignition, or upward propagating burns, produced rapid acceleratory turbulent flame spread. Pressure rise in the chamber increases as the amount of fuel burned increases mainly because of the larger amount of heat generation and, to a much smaller extent, due to the increase in gaseous number of moles. Top ignition, or downward propagating burns, produced a steady flame spread with a very small flat flame across the burning edge. Steady-state pressure is achieved during downward flame spread as the pressure rises and plateaus. This indicates that the heat generation by the flame matches the heat loss to surroundings during the longer, slower downward burns. One heat loss mechanism included mounting a heat exchanger directly above the burning sample in the path of the plume to act as a heat sink and more efficiently dissipate the heat due to the combustion event. This proved an effective means for chamber overpressure mitigation for those tests producing the most total heat release and thusly was determined to be a feasible mitigation
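
    The pressure-rise reasoning in the abstract (heating dominates; the mole-count change is secondary) follows directly from the ideal gas law for a rigid, sealed chamber. The numbers in the sketch below are assumed, not values from the test series.

```python
# Rigid sealed chamber: P2/P1 = (n2 * T2) / (n1 * T1) by the ideal gas law,
# so the temperature rise from retained heat dominates unless combustion
# changes the mole count appreciably. All numbers are assumed.
R = 8.314                     # J/(mol*K)
V = 0.5                       # chamber volume, m^3
P1, T1 = 70e3, 300.0          # sub-atmospheric start pressure (Pa) and temp (K)
Q = 50e3                      # assumed net heat retained by the chamber gas, J
cv = 21.0                     # approximate molar heat capacity, J/(mol*K)

n1 = P1 * V / (R * T1)        # initial moles of gas
dn = 0.02 * n1                # assumed small mole increase from combustion
T2 = T1 + Q / (n1 * cv)       # constant-volume heating
P2 = (n1 + dn) * R * T2 / V
print(f"pressure rise: {(P2 - P1) / 1e3:.1f} kPa")
```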

  20. Ergonomic Challenges in Conventional and Advanced Apparel Manufacturing.

    DTIC Science & Technology

    1992-10-01

    [No abstract is indexed for this record; the retrievable text consists of bibliography fragments, including: Weissbach, H. J. 1986. Design and Implementation Strategies of Manufacturing Control Systems. In Skill Based Automated Manufacturing: Proceedings of the IFAC Workshop, ed. P. Brodner, 7-12. New York: Pergamon Press.]

  1. Advanced Manufacturing Methods for Systems of Microsystem Nanospacecraft- Status of the Project

    NASA Astrophysics Data System (ADS)

    Plesseria, J. Y.; Corbelli, A.; Masse, C.; Rigo, O.; Pambaguian, L.; Bonvoisin, B.

    2014-06-01

    In the frame of an ESA TRP project, CSL, SIRRIS, ALMASpace and TAS-F teamed up to evaluate advanced manufacturing methods for application to space hardware. The state of the art of the new manufacturing methods, including additive manufacturing but also advanced bonding, joining and shaping techniques, was reviewed. Then three types of case studies were developed successively. The first type was a re-manufacture of an existing piece of hardware using advanced techniques, to evaluate whether improvements could be achieved in cost, production time, or complexity reduction. The second type was to design and manufacture a part based on the application requirements. The last type was to design and manufacture a part taking into account the subsystem to which it belongs. All case studies were tested in terms of achieved performance and resistance to the mechanical and thermal environment.

  2. Large-scale cortical networks and cognition.

    PubMed

    Bressler, S L

    1995-03-01

    The well-known parcellation of the mammalian cerebral cortex into a large number of functionally distinct cytoarchitectonic areas presents a problem for understanding the complex cortical integrative functions that underlie cognition. How do cortical areas having unique individual functional properties cooperate to accomplish these complex operations? Do neurons distributed throughout the cerebral cortex act together in large-scale functional assemblages? This review examines the substantial body of evidence supporting the view that complex integrative functions are carried out by large-scale networks of cortical areas. Pathway tracing studies in non-human primates have revealed widely distributed networks of interconnected cortical areas, providing an anatomical substrate for large-scale parallel processing of information in the cerebral cortex. Functional coactivation of multiple cortical areas has been demonstrated by neurophysiological studies in non-human primates and several different cognitive functions have been shown to depend on multiple distributed areas by human neuropsychological studies. Electrophysiological studies on interareal synchronization have provided evidence that active neurons in different cortical areas may become not only coactive, but also functionally interdependent. The computational advantages of synchronization between cortical areas in large-scale networks have been elucidated by studies using artificial neural network models. Recent observations of time-varying multi-areal cortical synchronization suggest that the functional topology of a large-scale cortical network is dynamically reorganized during visuomotor behavior.
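
    The synchronization phenomena the review describes are often illustrated with simple coupled-oscillator models. The sketch below uses the Kuramoto model, a standard toy model of phase synchronization, not any of the specific network models cited; all parameter values are illustrative.

```python
import numpy as np

# Kuramoto model: dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i).
# Above a critical coupling the oscillators phase-lock, a minimal analogue of
# interareal synchronization. All parameters are illustrative.
rng = np.random.default_rng(2)
N, K, dt, steps = 50, 1.5, 0.01, 5000
omega = rng.normal(0.0, 0.5, N)         # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)  # initial phases

for _ in range(steps):
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += dt * (omega + coupling)

r = np.abs(np.exp(1j * theta).mean())   # order parameter: 1 = full synchrony
print(f"phase coherence r = {r:.2f}")
```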

  3. Training Welders in Advanced Manufacturing Philosophies Nets Employability

    ERIC Educational Resources Information Center

    Wilson, Kristin

    2011-01-01

    As of September 2010, the U.S. manufacturing sector grew for the 14th consecutive month, leading some economists to speculate that, as with the Great Depression, American manufacturing will lead the economy out of the recession. It is a little bit of good news in a long stream of depressing employment reports. Career and technical educators…

  4. In the fast lane: large-scale bacterial genome engineering.

    PubMed

    Fehér, Tamás; Burland, Valerie; Pósfai, György

    2012-07-31

    The last few years have witnessed rapid progress in bacterial genome engineering. The long-established, standard ways of DNA synthesis, modification, transfer into living cells, and incorporation into genomes have given way to more effective, large-scale, robust genome modification protocols. Expansion of these engineering capabilities is due to several factors. Key advances include: (i) progress in oligonucleotide synthesis and in vitro and in vivo assembly methods, (ii) optimization of recombineering techniques, (iii) introduction of parallel, large-scale, combinatorial, and automated genome modification procedures, and (iv) rapid identification of the modifications by barcode-based analysis and sequencing. Combining the brute force of these techniques with sophisticated bioinformatic design and modeling opens up new avenues for the analysis of gene functions and cellular network interactions, as well as for engineering more effective producer strains. This review presents a summary of recent technological advances in bacterial genome engineering.

  5. Multiscale and Multiphysics Modeling of Additive Manufacturing of Advanced Materials

    NASA Technical Reports Server (NTRS)

    Liou, Frank; Newkirk, Joseph; Fan, Zhiqiang; Sparks, Todd; Chen, Xueyang; Fletcher, Kenneth; Zhang, Jingwei; Zhang, Yunlu; Kumar, Kannan Suresh; Karnati, Sreekar

    2015-01-01

    The objective of this proposed project is to research and develop a prediction tool for advanced additive manufacturing (AAM) processes for advanced materials and to develop experimental methods that provide fundamental properties and establish validation data. Aircraft structures and engines demand materials that are stronger, useable at much higher temperatures, provide less acoustic transmission, and enable more aeroelastic tailoring than those currently used. Significant improvements in properties can only be achieved by processing the materials under nonequilibrium conditions, such as AAM processes. AAM processes encompass a class of processes that use a focused heat source to create a melt pool on a substrate. Examples include Electron Beam Freeform Fabrication and Direct Metal Deposition. These types of additive processes enable fabrication of parts directly from CAD drawings. To achieve the desired material properties and geometries of the final structure, assessing the impact of process parameters and predicting optimized conditions with numerical modeling as an effective prediction tool is necessary. The processing targets are multiple and span different spatial scales, and the associated physical phenomena are multiphysics and multiscale in nature. In this project, research was conducted to model AAM processes with a multiscale, multiphysics approach. A macroscale model was developed to investigate the residual stresses and distortion in AAM processes. A sequentially coupled, thermomechanical, finite element model was developed and validated experimentally. The results showed the temperature distribution, residual stress, and deformation within the formed deposits and substrates. A mesoscale model was developed to include heat transfer, phase change with mushy zone, incompressible free surface flow, solute redistribution, and surface tension. Because of the excessive computing time needed, a parallel computing approach was also tested. In addition

  6. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    The problems inherent in large-scale systems such as power networks, communication networks, and economic or ecological systems were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large-scale systems, and tools specific to this class of systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification made of the different existing approaches for dealing with large-scale systems. A very similar classification is used here, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems like large space structures. Some recent developments are added to this survey.

  7. Large-scale nanophotonic phased array.

    PubMed

    Sun, Jie; Timurdogan, Erman; Yaacobi, Ami; Hosseini, Ehsan Shah; Watts, Michael R

    2013-01-10

    Electromagnetic phased arrays at radio frequencies are well known and have enabled applications ranging from communications to radar, broadcasting and astronomy. The ability to generate arbitrary radiation patterns with large-scale phased arrays has long been pursued. Although it is extremely expensive and cumbersome to deploy large-scale radiofrequency phased arrays, optical phased arrays have a unique advantage in that the much shorter optical wavelength holds promise for large-scale integration. However, the short optical wavelength also imposes stringent requirements on fabrication. As a consequence, although optical phased arrays have been studied with various platforms and recently with chip-scale nanophotonics, all of the demonstrations so far are restricted to one-dimensional or small-scale two-dimensional arrays. Here we report the demonstration of a large-scale two-dimensional nanophotonic phased array (NPA), in which 64 × 64 (4,096) optical nanoantennas are densely integrated on a silicon chip within a footprint of 576 μm × 576 μm with all of the nanoantennas precisely balanced in power and aligned in phase to generate a designed, sophisticated radiation pattern in the far field. We also show that active phase tunability can be realized in the proposed NPA by demonstrating dynamic beam steering and shaping with an 8 × 8 array. This work demonstrates that a robust design, together with state-of-the-art complementary metal-oxide-semiconductor technology, allows large-scale NPAs to be implemented on compact and inexpensive nanophotonic chips. In turn, this enables arbitrary radiation pattern generation using NPAs and therefore extends the functionalities of phased arrays beyond conventional beam focusing and steering, opening up possibilities for large-scale deployment in applications such as communication, laser detection and ranging, three-dimensional holography and biomedical sciences, to name just a few.
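
    The connection between element phases and the far-field pattern can be sketched numerically: in the Fraunhofer approximation the far field is the Fourier transform of the aperture distribution, so a linear phase ramp steers the beam. The array size below matches the record's 64 × 64 demonstration, but the pitch and wavelength are assumed values.

```python
import numpy as np

# Far field of an N x N array via FFT of the aperture (Fraunhofer regime).
# A linear phase ramp across the elements steers the main lobe. Pitch and
# wavelength are assumed; N matches the 64 x 64 array in the record.
N, pitch, wavelength = 64, 9e-6, 1.55e-6
kx_target = 0.05                       # desired sin(theta) along x

n = np.arange(N)
phase = 2 * np.pi * pitch / wavelength * kx_target * n    # steering ramp
aperture = np.exp(1j * phase)[None, :] * np.ones((N, 1))  # uniform amplitude

pad = 8 * N                            # zero-pad for a smoothly sampled pattern
far_field = np.fft.fftshift(np.fft.fft2(aperture, s=(pad, pad)))
intensity = np.abs(far_field) ** 2
iy, ix = np.unravel_index(intensity.argmax(), intensity.shape)
print(f"main lobe at FFT bin ({iy}, {ix}) of a {pad} x {pad} grid")
```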

  8. The large-scale distribution of galaxies

    NASA Technical Reports Server (NTRS)

    Geller, Margaret J.

    1989-01-01

    The spatial distribution of galaxies in the universe is characterized on the basis of the six completed strips of the Harvard-Smithsonian Center for Astrophysics redshift-survey extension. The design of the survey is briefly reviewed, and the results are presented graphically. Vast low-density voids similar to the void in Bootes are found, almost completely surrounded by thin sheets of galaxies. Also discussed are the implications of the results for the survey sampling problem, the two-point correlation function of the galaxy distribution, the possibility of detecting large-scale coherent flows, theoretical models of large-scale structure, and the identification of groups and clusters of galaxies.
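
    The two-point correlation function mentioned above admits a very simple estimator, ξ(r) ≈ DD/RR − 1, comparing pair counts in the data against a random catalog. The sketch below uses mock uniform positions (so ξ ≈ 0 by construction); real survey analyses use more careful estimators such as Landy-Szalay and account for survey geometry.

```python
import numpy as np
from scipy.spatial.distance import pdist

# Natural estimator xi(r) = DD/RR - 1 on mock uniform positions; since the
# mock data are unclustered, xi should scatter around zero.
rng = np.random.default_rng(3)
data = rng.uniform(0.0, 100.0, size=(500, 3))   # mock galaxy positions
rand = rng.uniform(0.0, 100.0, size=(500, 3))   # random comparison catalog

bins = np.linspace(1.0, 20.0, 11)               # separation bins
DD, _ = np.histogram(pdist(data), bins=bins)    # data-data pair counts
RR, _ = np.histogram(pdist(rand), bins=bins)    # random-random pair counts
xi = DD / np.maximum(RR, 1) - 1.0
print(np.round(xi, 2))
```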

  9. Space Technology Mission Directorate Game Changing Development Program FY2015 Annual Program Review: Advanced Manufacturing Technology

    NASA Technical Reports Server (NTRS)

    Vickers, John; Fikes, John

    2015-01-01

    The Advanced Manufacturing Technology (AMT) Project supports multiple activities within the Administration's National Manufacturing Initiative. A key component of the Initiative is the Advanced Manufacturing National Program Office (AMNPO), which includes participation from all federal agencies involved in U.S. manufacturing. In support of the AMNPO, the AMT Project supports building and growing the National Network for Manufacturing Innovation through a public-private partnership designed to help the industrial community accelerate manufacturing innovation. Integration with other projects/programs and partnerships: STMD (Space Technology Mission Directorate), HEOMD, other Centers; Industry, Academia; OGAs (e.g., DOD, DOE, DOC, USDA, NASA, NSF); Office of Science and Technology Policy, NIST Advanced Manufacturing Program Office; generate insight within NASA and across agencies for technology development priorities and investments. Technology Infusion Plan: PC; potential customer infusion (TDM, HEOMD, SMD, OGA, Industry); leverage; collaborate with other agencies, industry and academia; NASA roadmap. Initiatives include: Advanced Near Net Shape Technology Integrally Stiffened Cylinder Process Development (launch vehicles, sounding rockets); Materials Genome; Low Cost Upper Stage-Class Propulsion; Additive Construction with Mobile Emplacement (ACME); National Center for Advanced Manufacturing.

  10. Management of large-scale technology

    NASA Technical Reports Server (NTRS)

    Levine, A.

    1985-01-01

    Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high-technology agency fared through a decade marked by a rapid expansion of funds and manpower in the first half and an almost as rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.

  11. Evaluating Large-Scale Interactive Radio Programmes

    ERIC Educational Resources Information Center

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  12. Advanced modeling and simulation to design and manufacture high performance and reliable advanced microelectronics and microsystems.

    SciTech Connect

    Nettleship, Ian (University of Pittsburgh, Pittsburgh, PA); Hinklin, Thomas; Holcomb, David Joseph; Tandon, Rajan; Arguello, Jose Guadalupe, Jr.; Dempsey, James Franklin; Ewsuk, Kevin Gregory; Neilsen, Michael K.; Lanagan, Michael (Pennsylvania State University, University Park, PA)

    2007-07-01

    An interdisciplinary team of scientists and engineers having broad expertise in materials processing and properties, materials characterization, and computational mechanics was assembled to develop science-based modeling/simulation technology to design and reproducibly manufacture high performance and reliable, complex microelectronics and microsystems. The team's efforts focused on defining and developing a science-based infrastructure to enable predictive compaction, sintering, stress, and thermomechanical modeling in "real systems", including: (1) developing techniques to determine the materials properties and constitutive behavior required for modeling; (2) developing new, improved/updated models and modeling capabilities; (3) ensuring that models are representative of the physical phenomena being simulated; and (4) assessing existing modeling capabilities to identify advances necessary to facilitate the practical application of Sandia's predictive modeling technology.

  13. Large-scale quantization from local correlations in space plasmas

    NASA Astrophysics Data System (ADS)

    Livadiotis, George; McComas, David J.

    2014-05-01

    This study examines the large-scale quantization that can characterize the phase space of certain physical systems. Plasmas are such systems where large-scale quantization, ħ*, is caused by Debye shielding that structures correlations between particles. The value of ħ* is constant—some 12 orders of magnitude larger than the Planck constant—across a wide range of space plasmas, from the solar wind in the inner heliosphere to the distant plasma in the inner heliosheath and the local interstellar medium. This paper develops the foundation and advances the understanding of the concept of plasma quantization; in particular, we (i) show the analogy of plasma to Planck quantization, (ii) show the key points of plasma quantization, (iii) construct some basic quantum mechanical concepts for the large-scale plasma quantization, (iv) investigate the correlation between plasma parameters that implies plasma quantization, when it is approximated by a relation between the magnetosonic energy and the plasma frequency, (v) analyze typical space plasmas throughout the heliosphere and show the constancy of plasma quantization over many orders of magnitude in plasma parameters, (vi) analyze Advanced Composition Explorer (ACE) solar wind measurements to develop another measurement of the value of ħ*, and (vii) apply plasma quantization to derive unknown plasma parameters when some key observable is missing.
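
    Item (iv) describes the quantization as an approximate relation between the magnetosonic energy and the plasma frequency. Schematically, and only up to the order-unity prefactor given in the paper, the relation can be written as:

```latex
\[
  \hbar^{*}\,\omega_{\mathrm{pl}} \;\sim\; m\,V_{\mathrm{ms}}^{2},
  \qquad
  \omega_{\mathrm{pl}} = \sqrt{\frac{n\,e^{2}}{\varepsilon_{0}\,m}},
  \qquad
  V_{\mathrm{ms}}^{2} = c_{\mathrm{s}}^{2} + v_{\mathrm{A}}^{2},
\]
```

    where n is the particle density, c_s the sound speed, and v_A the Alfvén speed; the paper's empirical claim is that the proportionality constant ħ* stays fixed across heliospheric plasmas spanning many orders of magnitude in density and temperature.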

  14. Transfer of advanced manufacturing technologies to eastern Kentucky industries

    SciTech Connect

    Gillies, J.A.; Kruzich, R.

    1988-05-01

    This study concludes that there are opportunities to provide assistance in the adoption of manufacturing technologies for small- and medium-sized firms in eastern Kentucky. However, the new markets created by Toyota are not adequate to justify a directed technology transfer program targeting the auto supply industry in eastern Kentucky because supplier markets have been determined for some time, and manufacturers in eastern Kentucky were not competitive in this early selection process. The results of the study strongly reinforce a reorientation of state business-assistance programs. The study also concludes that the quality and quantity of available labor is a pervasive problem in eastern Kentucky and has particular relevance as the economy changes. The study also investigated what type of technology-transfer programs would be appropriate to assist manufacturing firms in eastern Kentucky and if there were a critical number of firms to make such a program feasible.

  15. Cost analysis of advanced turbine blade manufacturing processes

    NASA Technical Reports Server (NTRS)

    Barth, C. F.; Blake, D. E.; Stelson, T. S.

    1977-01-01

    A rigorous analysis was conducted to estimate relative manufacturing costs for high technology gas turbine blades prepared by three candidate materials process systems. The manufacturing costs for the same turbine blade configuration of directionally solidified eutectic alloy, an oxide dispersion strengthened superalloy, and a fiber reinforced superalloy were compared on a relative basis to the costs of the same blade currently in production utilizing the directional solidification process. An analytical process cost model was developed to quantitatively perform the cost comparisons. The impact of individual process yield factors on costs was also assessed as well as effects of process parameters, raw materials, labor rates and consumable items.

  16. Regional Advanced Manufacturing Academy: An Agent of Change

    ERIC Educational Resources Information Center

    Schmeling, Daniel M.; Rose, Kevin

    2010-01-01

    Three Northeast Texas community colleges put aside service delivery areas and matters of "turf" to create Centers of Excellence that provided training throughout a nine-county area. This consortium, along with 14 manufacturers, seven economic development corporations, and the regional workforce board, led the change in training a highly…

  17. Innovation Training within the Australian Advanced Manufacturing Industry

    ERIC Educational Resources Information Center

    Donovan, Jerome Denis; Maritz, Alex; McLellan, Andrew

    2013-01-01

    Innovation has emerged as a core driver for the future profitability and success of the manufacturing sector, and increasingly both governments and the private sector are examining ways to support the development of innovation capabilities within organisations. In this research, we have evaluated a government-funded innovation training course…

  18. Impact of Parallel Computing on Large Scale Aeroelastic Computations

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    Aeroelasticity is computationally one of the most intensive fields in aerospace engineering. Though the computational speed of supercomputers has increased substantially over the last three decades, it is still inadequate for large-scale aeroelastic computations using high-fidelity flow and structural equations. In addition to computational speed reaching a plateau for economic reasons, computer manufacturers are ceasing production of mainframe-type supercomputers. This has left computational aeroelasticians with the gigantic task of finding alternate approaches for fulfilling their needs. The alternate path to overcome the speed and availability limitations of mainframe-type supercomputers is to use parallel computers. During this decade several different architectures have evolved. In FY92 the US Government started the High Performance Computing and Communication (HPCC) program. As a participant in this program, NASA developed several parallel computational tools for aeroelastic applications. This talk describes the impact of those application tools on high-fidelity-based multidisciplinary analysis.

  19. Condition Monitoring of Large-Scale Facilities

    NASA Technical Reports Server (NTRS)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  20. Advanced excimer laser technologies enable green semiconductor manufacturing

    NASA Astrophysics Data System (ADS)

    Fukuda, Hitomi; Yoo, Youngsun; Minegishi, Yuji; Hisanaga, Naoto; Enami, Tatsuo

    2014-03-01

    "Green" has fast become an important and pervasive topic throughout many industries worldwide. Many companies, especially in the manufacturing industries, have taken steps to integrate green initiatives into their high-level corporate strategies. Governments have also been active in implementing various initiatives designed to increase corporate responsibility and accountability towards environmental issues. In the semiconductor manufacturing industry, there are growing concerns over future environmental impact as enormous fabs expand and new generation of equipments become larger and more powerful. To address these concerns, Gigaphoton has implemented various green initiatives for many years under the EcoPhoton™ program. The objective of this program is to drive innovations in technology and services that enable manufacturers to significantly reduce both the financial and environmental "green cost" of laser operations in high-volume manufacturing environment (HVM) - primarily focusing on electricity, gas and heat management costs. One example of such innovation is Gigaphoton's Injection-Lock system, which reduces electricity and gas utilization costs of the laser by up to 50%. Furthermore, to support the industry's transition from 300mm to the next generation 450mm wafers, technologies are being developed to create lasers that offer double the output power from 60W to 120W, but reducing electricity and gas consumption by another 50%. This means that the efficiency of lasers can be improve by up to 4 times in 450mm wafer production environments. Other future innovations include the introduction of totally Heliumfree Excimer lasers that utilize Nitrogen gas as its replacement for optical module purging. This paper discusses these and other innovations by Gigaphoton to enable green manufacturing.

  1. Large-scale instabilities of helical flows

    NASA Astrophysics Data System (ADS)

    Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne

    2016-10-01

    Large-scale hydrodynamic instabilities of periodic helical flows of a given wave number K are investigated using three-dimensional Floquet numerical computations. In the Floquet formalism the unstable field is expanded in modes of different spatial periodicity. This allows us (i) to clearly distinguish large from small scale instabilities and (ii) to study modes of wave number q of arbitrarily large-scale separation q ≪ K. Different flows are examined, including flows that exhibit small-scale turbulence. The growth rate σ of the most unstable mode is measured as a function of the scale separation q/K ≪ 1 and the Reynolds number Re. It is shown that the growth rate follows the scaling σ ∝ q if an AKA effect [Frisch et al., Physica D: Nonlinear Phenomena 28, 382 (1987), 10.1016/0167-2789(87)90026-1] is present, or a negative eddy viscosity scaling σ ∝ q² in its absence. This holds both in the Re ≪ 1 regime, where previously derived asymptotic results are verified, and for Re = O(1), which is beyond their range of validity. Furthermore, for values of Re above a critical value Re_Sc beyond which small-scale instabilities are present, the growth rate becomes independent of q and the energy of the perturbation at large scales decreases with scale separation. The nonlinear behavior of these large-scale instabilities is also examined; in the nonlinear regime the largest scales of the system are found to be the most dominant energetically. These results are interpreted using low-order models.

  2. Economically viable large-scale hydrogen liquefaction

    NASA Astrophysics Data System (ADS)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  3. Large-Scale Visual Data Analysis

    NASA Astrophysics Data System (ADS)

    Johnson, Chris

    2014-04-01

    Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods, and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high performance visualization research challenges and opportunities.

  4. Advanced manufacturing of SIMOX for low power electronics

    NASA Astrophysics Data System (ADS)

    Alles, Michael; Krull, Wade

    1996-04-01

    Silicon-on-insulator (SOI) has emerged as a key technology for low power electronics. The merits of SOI technology have been demonstrated, and are gaining acceptance in the semiconductor industry. In order for the SOI approach to be viable, several factors must converge, including the availability of SOI substrates in sufficient quantity, of acceptable quality, and at a competitive price. This work describes developments in SIMOX manufacturing technology and summarizes progress in each of these areas.

  5. A novel precision face grinder for advanced optic manufacture

    NASA Astrophysics Data System (ADS)

    Guo, Y.; Peng, Y.; Wang, Z.; Yang, W.; Bi, G.; Ke, X.; Lin, X.

    2010-10-01

    In this paper, a large-scale NC precision face grinding machine is developed. This grinding machine can be used for the precision machining of brittle materials. The base and the machine body are independent, and the whole structure is configured as a "T" type. The vertical column is seated on the machine body at the middle center part through a pair of precision lead rails. The grinding wheel is driven by a hydraulic dynamic/static spindle. The worktable is supported by a novel split thin-film throttle hydrostatic lead rail. Each motion axis of the grinding machine is equipped with a Heidenhain absolute linear encoder, forming a closed feedback control system with the adopted Fanuc 0i-MD NC system. The machine is capable of machining extremely flat surfaces on workpieces up to 800mm x 600mm. The maximum load bearing of the worktable is 620 kg. Furthermore, the roughness of the machined surfaces should be smooth (Ra<50nm-100nm), and the form accuracy less than 2μm (+/-1μm)/200x200mm. After assembly and debugging of the surface grinding machine, the worktable surface was self-ground with a 60# grinding wheel, yielding a form accuracy of 3μm/600mm×800mm. A grinding experiment was then conducted on a BK7 flat optical glass element (400mm×250mm) and a ceramic disc (Φ100mm) with a 60# grinding wheel; the measured surface roughness and form accuracy of the optical glass are 0.07μm and 1.56μm/200x200mm, and those of the ceramic disc are 0.52μm and 1.28μm, respectively.

  6. Large-scale data mining pilot project in human genome

    SciTech Connect

    Musick, R.; Fidelis, R.; Slezak, T.

    1997-05-01

    This whitepaper briefly describes a new, aggressive effort in large-scale data mining at Livermore National Labs. The implications of `large-scale` will be clarified in a later section. In the short term, this effort will focus on several mission-critical questions of the Genome project. We will adapt current data mining techniques to the Genome domain, quantify the accuracy of inference results, and lay the groundwork for a more extensive effort in large-scale data mining. A major aspect of the approach is that we will build on a fully-staffed data warehousing effort in the human Genome area. The long term goal is a strong applications-oriented research program in large-scale data mining. The tools and skill set gained will be directly applicable to a wide spectrum of tasks involving a need for large spatial and multidimensional data. This includes applications in ensuring non-proliferation, stockpile stewardship, enabling Global Ecology (Materials Database Industrial Ecology), advancing the Biosciences (Human Genome Project), and supporting data for others (Battlefield Management, Health Care).

  7. Analysis of the influence of advanced materials for aerospace products R&D and manufacturing cost

    NASA Astrophysics Data System (ADS)

    Shen, A. W.; Guo, J. L.; Wang, Z. J.

    2015-12-01

    In this paper, we point out the deficiencies of traditional cost estimation models for aerospace products Research & Development (R&D) and manufacturing, based on an analysis of the widespread use of advanced materials in aviation products. We then propose estimating formulas for cost factors that represent the influence of advanced materials on the labor cost rate and the manufacturing materials cost rate. Value ranges for common advanced materials such as composites and titanium alloys are presented for both the labor and materials aspects. Finally, we estimate the R&D and manufacturing cost of the F/A-18, F/A-22, B-1B and B-2 aircraft based on the common DAPCA IV model and on the modified model proposed in this paper. The calculation results show that precision is greatly improved by the proposed method, which accounts for advanced materials, indicating that the method is scientific and reasonable.
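
    The paper's central move, scaling the labor and material cost terms by material-dependent factors, can be sketched as follows. The factor values and baseline costs below are invented placeholders, not the ranges reported in the paper.

```python
# Scale the labor and material terms of a DAPCA-style estimate by factors
# reflecting advanced-material content. Factor values and baseline costs
# are invented placeholders.
material_factors = {            # (labor-rate factor, material-cost factor)
    "aluminum":  (1.0, 1.0),
    "composite": (1.8, 2.5),
    "titanium":  (1.5, 2.0),
}

def adjusted_cost(base_labor, base_material, mass_fractions):
    """Blend per-material factors by airframe mass fraction."""
    f_lab = sum(material_factors[m][0] * w for m, w in mass_fractions.items())
    f_mat = sum(material_factors[m][1] * w for m, w in mass_fractions.items())
    return base_labor * f_lab + base_material * f_mat

# e.g. an airframe that is 50% aluminum, 35% composite, 15% titanium by mass
mix = {"aluminum": 0.50, "composite": 0.35, "titanium": 0.15}
print(f"adjusted cost: ${adjusted_cost(120.0, 45.0, mix):.1f}M")
```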

  8. Mask manufacturing of advanced technology designs using multi-beam lithography (part 2)

    NASA Astrophysics Data System (ADS)

    Green, Michael; Ham, Young; Dillon, Brian; Kasprowicz, Bryan; Hur, Ik Boum; Park, Joong Hee; Choi, Yohan; McMurran, Jeff; Kamberian, Henry; Chalom, Daniel; Klikovits, Jan; Jurkovic, Michal; Hudek, Peter

    2016-09-01

    As optical lithography is extended into 10nm and below nodes, advanced designs are becoming a key challenge for mask manufacturers. Techniques including advanced optical proximity correction (OPC) and Inverse Lithography Technology (ILT) result in structures that pose a range of issues across the mask manufacturing process. Among the new challenges are continued shrinking sub-resolution assist features (SRAFs), curvilinear SRAFs, and other complex mask geometries that are counter-intuitive relative to the desired wafer pattern. Considerable capability improvements over current mask making methods are necessary to meet the new requirements particularly regarding minimum feature resolution and pattern fidelity. Advanced processes using the IMS Multi-beam Mask Writer (MBMW) are feasible solutions to these coming challenges. In this paper, Part 2 of our study, we further characterize an MBMW process for 10nm and below logic node mask manufacturing including advanced pattern analysis and write time demonstration.

  9. Part A - Advanced turbine systems. Part B - Materials/manufacturing element of the Advanced Turbine Systems Program

    SciTech Connect

    Karnitz, M.A.

    1996-06-01

    The DOE Offices of Fossil Energy and Energy Efficiency and Renewable Energy have initiated a program to develop advanced turbine systems for power generation. The objective of the Advanced Turbine Systems (ATS) Program is to develop ultra-high efficiency, environmentally superior, and cost competitive gas turbine systems for utility and industrial applications. One of the supporting elements of the ATS Program is the Materials/Manufacturing Technologies Task. The objective of this element is to address the critical materials and manufacturing issues for both industrial and utility gas turbines.

  10. Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect

    Willcox, Karen; Marzouk, Youssef

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their respective gains in sampling efficiency.
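
    The "reduce then sample" strategy can be illustrated in a few lines: run a standard Metropolis sampler against a cheap surrogate posterior instead of the expensive full model. The quadratic surrogate below is a stand-in for a projection-based reduced-order model; the target density and all tuning values are invented for this sketch.

      import numpy as np

      rng = np.random.default_rng(0)

      def reduced_log_post(x):
          # cheap surrogate of the log-posterior, fit offline (placeholder)
          return -0.5 * np.sum((x - 1.0) ** 2)

      def metropolis(log_post, x0, n_steps=5000, step=0.5):
          x, lp = x0.copy(), log_post(x0)
          samples = []
          for _ in range(n_steps):
              prop = x + step * rng.standard_normal(x.size)
              lp_prop = log_post(prop)
              if np.log(rng.random()) < lp_prop - lp:   # accept/reject
                  x, lp = prop, lp_prop
              samples.append(x.copy())
          return np.array(samples)

      chain = metropolis(reduced_log_post, np.zeros(3))
      print(chain.mean(axis=0))   # posterior mean under the surrogate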

  11. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  12. Large-Scale PV Integration Study

    SciTech Connect

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  13. What is a large-scale dynamo?

    NASA Astrophysics Data System (ADS)

    Nigro, G.; Pongkitiwanichakul, P.; Cattaneo, F.; Tobias, S. M.

    2017-01-01

    We consider kinematic dynamo action in a sheared helical flow at moderate to high values of the magnetic Reynolds number (Rm). We find exponentially growing solutions which, for large enough shear, take the form of a coherent part embedded in incoherent fluctuations. We argue that at large Rm large-scale dynamo action should be identified by the presence of structures coherent in time, rather than those at large spatial scales. We further argue that although the growth rate is determined by small-scale processes, the period of the coherent structures is set by mean-field considerations.
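
    For context, the kinematic problem described here solves the induction equation for a prescribed velocity field u and looks for exponentially growing eigenmodes; in LaTeX notation,

      \frac{\partial \mathbf{B}}{\partial t} = \nabla \times (\mathbf{u} \times \mathbf{B}) + \eta \nabla^{2} \mathbf{B},
      \qquad \mathbf{B}(\mathbf{x}, t) \propto \hat{\mathbf{B}}(\mathbf{x}) \, e^{(\gamma + i\omega) t},

    where eta is the magnetic diffusivity. In the abstract's terms, the growth rate gamma is set by small-scale processes, while the period 2*pi/omega of the coherent structures reflects mean-field considerations.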

  14. Neutrinos and large-scale structure

    SciTech Connect

    Eisenstein, Daniel J.

    2015-07-15

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we have now securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  15. Colloquium: Large scale simulations on GPU clusters

    NASA Astrophysics Data System (ADS)

    Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano

    2015-06-01

    Graphics processing units (GPUs) are currently used as a cost-effective platform for computer simulations and big-data processing. Large scale applications require that multiple GPUs work together, but the efficiency obtained with clusters of GPUs is, at times, sub-optimal because the GPU features are not exploited at their best. We describe how it is possible to achieve excellent efficiency for applications in statistical mechanics, particle dynamics and network analysis by using suitable memory access patterns and mechanisms like CUDA streams, profiling tools, etc. Similar concepts and techniques may also be applied to other problems like the solution of Partial Differential Equations.

  16. Large scale phononic metamaterials for seismic isolation

    SciTech Connect

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, suggesting that they could be serious candidates for seismic isolation structures. Different, easy-to-fabricate structures made from construction materials such as concrete and steel were examined. The well-known finite difference time domain method is used in our calculations in order to compute the band structures of the proposed metamaterials.
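
    A one-dimensional toy version of the finite difference time domain approach conveys the idea: propagate a pulse through a periodic two-material stack and inspect the transmitted spectrum, where dips indicate band gaps. The geometry, wave speeds and source parameters below are invented; the paper computes full band structures, not this simplified transmission test.

      import numpy as np

      nx, nt = 2000, 20000
      dx = 0.5                           # grid spacing, m
      c = np.full(nx, 3000.0)            # background speed, m/s (soil-like)
      for k in range(5):                 # five-period stiff-layer stack
          c[800 + 160 * k: 800 + 160 * k + 80] = 5000.0
      dt = 0.9 * dx / c.max()            # CFL-stable time step

      u_prev, u = np.zeros(nx), np.zeros(nx)
      rec = np.zeros(nt)                 # receiver behind the stack
      for n in range(nt):
          lap = np.zeros(nx)
          lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
          u_next = 2 * u - u_prev + (c * dt / dx) ** 2 * lap
          u_next[100] += np.exp(-(((n * dt) - 0.05) / 0.01) ** 2)  # source pulse
          u_prev, u = u, u_next
          rec[n] = u[1800]

      spectrum = np.abs(np.fft.rfft(rec))
      freqs = np.fft.rfftfreq(nt, dt)
      print(freqs[np.argmax(spectrum)])  # dominant transmitted frequency, Hz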

  17. Large-scale Heterogeneous Network Data Analysis

    DTIC Science & Technology

    2012-07-31

    …Data for Multi-Player Influence Maximization on Social Networks." KDD 2012 (Demo). Po-Tzu Chang, Yen-Chieh Huang, Cheng-Lun Yang, Shou-De Lin, Pu-Jen Cheng. "Learning-Based Time-Sensitive Re-Ranking for Web Search." SIGIR 2012 (poster). Hung-Che Lai, Cheng-Te Li, Yi-Chen Lo, and Shou-De Lin. "Exploiting and Evaluating MapReduce for Large-Scale Graph Mining." ASONAM 2012 (Full, 16% acceptance ratio). Hsun-Ping Hsieh, Cheng-Te Li, and Shou-De Lin…

  18. Large-scale Intelligent Transportation Systems simulation

    SciTech Connect

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning, and of Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large scale problems. A novel feature of our design is that vehicles are represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.
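
    The "vehicle as autonomous process" idea can be sketched with ordinary operating-system processes: each vehicle process selects its own route and streams probe reports to a TMC through a queue. Link names, route choices and delays are made up for illustration; the prototype's actual message-passing design is not documented here.

      import multiprocessing as mp
      import random

      def vehicle(vid, tmc_queue):
          # each vehicle independently picks a route (hypothetical links)
          route = random.choice([["A", "B", "D"], ["A", "C", "D"]])
          for link in route:
              delay = round(random.uniform(0.5, 2.0), 2)   # simulated link time
              tmc_queue.put((vid, link, delay))

      if __name__ == "__main__":
          q = mp.Queue()
          fleet = [mp.Process(target=vehicle, args=(i, q)) for i in range(4)]
          for p in fleet:
              p.start()
          for _ in range(4 * 3):            # TMC drains 3 reports per vehicle
              print("TMC probe report:", q.get())
          for p in fleet:
              p.join()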

  19. Local gravity and large-scale structure

    NASA Technical Reports Server (NTRS)

    Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.

    1990-01-01

    The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.
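
    The convergence measures discussed here rest on the standard linear-theory relation between the peculiar velocity of the Local Group and the surrounding density field; in LaTeX notation,

      \mathbf{v}(R) = \frac{H_0 f(\Omega)}{4\pi} \int_{r<R} \delta(\mathbf{r}) \, \frac{\hat{\mathbf{r}}}{r^{2}} \, d^{3}r,
      \qquad f(\Omega) \approx \Omega^{0.6},

    so that v(R) measured for increasing window radius R tests how much large-scale power contributes to the local flow.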

  20. Large-scale Globally Propagating Coronal Waves.

    PubMed

    Warmuth, Alexander

    Large-scale, globally propagating wave-like disturbances have been observed in the solar chromosphere and by inference in the corona since the 1960s. However, detailed analysis of these phenomena has only been conducted since the late 1990s. This was prompted by the availability of high-cadence coronal imaging data from numerous space-based instruments, which routinely show spectacular globally propagating bright fronts. Coronal waves, as these perturbations are usually referred to, have now been observed in a wide range of spectral channels, yielding a wealth of information. Many findings have supported the "classical" interpretation of the disturbances: fast-mode MHD waves or shocks that are propagating in the solar corona. However, observations that seemed inconsistent with this picture have stimulated the development of alternative models in which "pseudo waves" are generated by magnetic reconfiguration in the framework of an expanding coronal mass ejection. This has resulted in a vigorous debate on the physical nature of these disturbances. This review focuses on demonstrating how the numerous observational findings of the last one and a half decades can be used to constrain our models of large-scale coronal waves, and how a coherent physical understanding of these disturbances is finally emerging.

  1. Materials/manufacturing support element for the Advanced Turbine Systems Program

    SciTech Connect

    Karnitz, M.A.; Hoffman, E.E.; Parks, W.P.

    1994-12-31

    In 1993, DOE initiated a program to develop advanced gas turbines for power generation in utility and industrial applications. A materials/manufacturing plan was developed in several stages with input from gas turbine manufacturers, materials suppliers, universities, and government laboratories. This plan was developed by a small advanced materials and turbine technology team over a 6-month period. The technology plan calls for initiation of several high priority projects in FY 1995. The technical program for the materials/manufacturing element focuses on generic materials issues, components, and manufacturing processes. Categories include coatings and process development, turbine airfoil development, ceramics adaptation, directional solidification and single crystal airfoil manufacturing technology, materials characterization, catalytic combustor materials, and technology information exchange.

  2. Large-scale smart passive system for civil engineering applications

    NASA Astrophysics Data System (ADS)

    Jung, Hyung-Jo; Jang, Dong-Doo; Lee, Heon-Jae; Cho, Sang-Won

    2008-03-01

    The smart passive system consisting of a magnetorheological (MR) damper and an electromagnetic induction (EMI) part has recently been proposed. An EMI part can generate the input current for an MR damper from the vibration of a structure according to Faraday's law of electromagnetic induction. The control performance of the smart passive system has been demonstrated mainly by numerical simulations. It was verified from the numerical results that the system could be effective in reducing the structural responses of civil engineering structures such as buildings and bridges. On the other hand, the experimental validation of the system has not yet been sufficiently conducted. In this paper, the feasibility of the smart passive system for real-scale structures is investigated. To do this, a large-scale smart passive system is designed, manufactured, and tested. The system consists of a large-capacity MR damper, which has a maximum force level of approximately +/-10,000 N, a maximum stroke level of +/-35 mm and a maximum current level of 3 A, and a large-scale EMI part, which is designed to generate sufficient induced current for the damper. The applicability of the smart passive system to large real-scale structures is examined through a series of shaking table tests. The magnitudes of the induced current of the EMI part under various sinusoidal excitation inputs are measured. According to the test results, the large-scale EMI part shows the possibility that it could generate sufficient current or power for changing the damping characteristics of the large-capacity MR damper.
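
    The physics behind the EMI part is the Faraday relation invoked above; for an N-turn coil in a changing magnetic flux, and assuming the simplest purely resistive circuit between coil and damper,

      \varepsilon(t) = -N \, \frac{d\Phi_B}{dt},
      \qquad i(t) = \frac{\varepsilon(t)}{R_{\mathrm{coil}} + R_{\mathrm{damper}}},

    so faster structural vibration produces a larger induced current, and hence a larger MR damping force, without any external power source.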

  3. Advanced Material Intelligent Processing Center: Next Generation Scalable Lean Manufacturing

    DTIC Science & Technology

    2012-09-04

    …machines and have made significant advances to automated tape laying (ATL) and automated fiber placement (AFP) technologies. Companies are moving beyond standard thermoplastic and thermoset prepregs and are looking at placing out-of-autoclave (OoA) prepregs as well as dry fabrics. Today, Automated Tape Laying (ATL)… References: [1] Michael N. Grimshaw, "Automated Tape Laying," in ASM Handbook Vol. 21: Composites. ASM International, 2001. [2] Obaid Younossi, Michael…

  4. Emerging technology: A key enabler for modernizing pharmaceutical manufacturing and advancing product quality.

    PubMed

    O'Connor, Thomas F; Yu, Lawrence X; Lee, Sau L

    2016-07-25

    Issues in product quality have produced recalls and caused drug shortages in the United States (U.S.) in the past few years. These quality issues were often due to outdated manufacturing technologies and equipment as well as the lack of an effective quality management system. To ensure a consistent supply of safe, effective and high-quality drug products to patients, the U.S. Food and Drug Administration (FDA) supports modernizing pharmaceutical manufacturing for improvements in product quality. Specifically, five new initiatives are proposed here to achieve this goal. They include: (i) advancing regulatory science for pharmaceutical manufacturing; (ii) establishing a public-private institute for pharmaceutical manufacturing innovation; (iii) creating incentives for investment in the technological upgrade of manufacturing processes and facilities; (iv) leveraging external expertise for regulatory quality assessment of emerging technologies; and (v) promoting the international harmonization of approaches for expediting the global adoption of emerging technologies.

  5. Mask manufacturing of advanced technology designs using multi-beam lithography (Part 1)

    NASA Astrophysics Data System (ADS)

    Green, Michael; Ham, Young; Dillon, Brian; Kasprowicz, Bryan; Hur, Ik Boum; Park, Joong Hee; Choi, Yohan; McMurran, Jeff; Kamberian, Henry; Chalom, Daniel; Klikovits, Jan; Jurkovic, Michal; Hudek, Peter

    2016-10-01

    As optical lithography is extended into 10nm and below nodes, advanced designs are becoming a key challenge for mask manufacturers. Techniques including advanced Optical Proximity Correction (OPC) and Inverse Lithography Technology (ILT) result in structures that pose a range of issues across the mask manufacturing process. Among the new challenges are continued shrinking Sub-Resolution Assist Features (SRAFs), curvilinear SRAFs, and other complex mask geometries that are counter-intuitive relative to the desired wafer pattern. Considerable capability improvements over current mask making methods are necessary to meet the new requirements particularly regarding minimum feature resolution and pattern fidelity. Advanced processes using the IMS Multi-beam Mask Writer (MBMW) are feasible solutions to these coming challenges. In this paper, we study one such process, characterizing mask manufacturing capability of 10nm and below structures with particular focus on minimum resolution and pattern fidelity.

  6. Foundational perspectives on causality in large-scale brain networks.

    PubMed

    Mannino, Michael; Bressler, Steven L

    2015-12-01

    …likelihood that a change in the activity of one neuronal population affects the activity in another. We argue that these measures access the inherently probabilistic nature of causal influences in the brain, and are thus better suited for large-scale brain network analysis than are DC-based measures. Our work is consistent with recent advances in the philosophical study of probabilistic causality, which originated from inherent conceptual problems with deterministic regularity theories. It also resonates with concepts of stochasticity that were involved in establishing modern physics. In summary, we argue that probabilistic causality is a conceptually appropriate foundation for describing neural causality in the brain.

  7. Foundational perspectives on causality in large-scale brain networks

    NASA Astrophysics Data System (ADS)

    Mannino, Michael; Bressler, Steven L.

    2015-12-01

    …likelihood that a change in the activity of one neuronal population affects the activity in another. We argue that these measures access the inherently probabilistic nature of causal influences in the brain, and are thus better suited for large-scale brain network analysis than are DC-based measures. Our work is consistent with recent advances in the philosophical study of probabilistic causality, which originated from inherent conceptual problems with deterministic regularity theories. It also resonates with concepts of stochasticity that were involved in establishing modern physics. In summary, we argue that probabilistic causality is a conceptually appropriate foundation for describing neural causality in the brain.

  8. Engineering management of large scale systems

    NASA Technical Reports Server (NTRS)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long range perspective. Long range planning has a great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of management of large scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  9. Primer design for large scale sequencing.

    PubMed

    Haas, S; Vingron, M; Poustka, A; Wiemann, S

    1998-06-15

    We have developed PRIDE, a primer design program that automatically designs primers in single contigs or whole sequencing projects to extend the already known sequence and to double strand single-stranded regions. The program is fully integrated into the Staden package (GAP4) and accessible with a graphical user interface. PRIDE uses a fuzzy logic-based system to calculate primer qualities. The computational performance of PRIDE is enhanced by using suffix trees to store the huge amount of data being produced. A test set of 110 sequencing primers and 11 PCR primer pairs has been designed on genomic templates, cDNAs and sequences containing repetitive elements to analyze PRIDE's success rate. The high performance of PRIDE, combined with its minimal requirement of user interaction and its fast algorithm, make this program useful for the large scale design of primers, especially in large sequencing projects.
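
    As a flavor of what a fuzzy primer-quality score looks like, the toy function below combines melting temperature (Wallace rule) and GC content through triangular membership functions. PRIDE's real scoring weighs many more criteria (self-complementarity, repeats, position); the targets and tolerances here are invented.

      # Toy primer-quality score; all thresholds are illustrative assumptions.
      def primer_quality(seq, tm_target=60.0, gc_target=0.5):
          seq = seq.upper()
          gc = (seq.count("G") + seq.count("C")) / len(seq)
          tm = 2 * (seq.count("A") + seq.count("T")) \
              + 4 * (seq.count("G") + seq.count("C"))   # Wallace rule, deg C
          # triangular "fuzzy" memberships around the target values
          tm_score = max(0.0, 1.0 - abs(tm - tm_target) / 10.0)
          gc_score = max(0.0, 1.0 - abs(gc - gc_target) / 0.2)
          return tm_score * gc_score                    # 1.0 = ideal primer

      print(primer_quality("ATGCGTACGTTAGCCAGGAA"))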

  10. Large scale preparation of pure phycobiliproteins.

    PubMed

    Padgett, M P; Krogmann, D W

    1987-01-01

    This paper describes simple procedures for the purification of large amounts of phycocyanin and allophycocyanin from the cyanobacterium Microcystis aeruginosa. A homogeneous natural bloom of this organism provided hundreds of kilograms of cells. Large samples of cells were broken by freezing and thawing. Repeated extraction of the broken cells with distilled water released phycocyanin first, then allophycocyanin, and provided supporting evidence for the current models of phycobilisome structure. The very low ionic strength of the aqueous extracts allowed allophycocyanin release in a particulate form so that this protein could be easily concentrated by centrifugation. Other proteins in the extract were enriched and concentrated by large scale membrane filtration. The biliproteins were purified to homogeneity by chromatography on DEAE cellulose. Purity was established by HPLC and by N-terminal amino acid sequence analysis. The proteins were examined for stability at various pHs and exposures to visible light.

  11. Large-Scale Organization of Glycosylation Networks

    NASA Astrophysics Data System (ADS)

    Kim, Pan-Jun; Lee, Dong-Yup; Jeong, Hawoong

    2009-03-01

    Glycosylation is a highly complex process that produces a diverse repertoire of cellular glycans, which are frequently attached to proteins and lipids. Glycans participate in fundamental biological processes including molecular trafficking and clearance, cell proliferation and apoptosis, developmental biology, immune response, and pathogenesis. N-linked glycans found on proteins are formed by sequential attachments of monosaccharides with the help of a relatively small number of enzymes. Many of these enzymes can accept multiple N-linked glycans as substrates, thus generating a large number of glycan intermediates and their intermingled pathways. Motivated by the quantitative methods developed in complex network research, we investigate the large-scale organization of such N-glycosylation pathways in a mammalian cell. The uncovered results give experimentally-testable predictions for the glycosylation process, and can be applied to the engineering of therapeutic glycoproteins.

  12. Efficient, large scale separation of coal macerals

    SciTech Connect

    Dyrkacz, G.R.; Bloomquist, C.A.A.

    1988-01-01

    The authors believe that the separation of macerals by continuous flow centrifugation offers a simple technique for the large scale separation of macerals. With relatively little cost (approximately $10K), it provides an opportunity for obtaining quite pure maceral fractions. Although they have not completely worked out all the nuances of this separation system, they believe that the problems they have indicated can be minimized to pose only minor inconvenience. It cannot be said that this system completely bypasses the disagreeable tedium or time involved in separating macerals, nor will it by itself overcome the mental inertia required to make maceral separation an accepted necessary fact in fundamental coal science. However, they find their particular brand of continuous flow centrifugation is considerably faster than sink/float separation, can provide a good quality product with even one separation cycle, and permits the handling of more material than a conventional sink/float centrifuge separation.

  13. Large-scale optimization of neuron arbors

    NASA Astrophysics Data System (ADS)

    Cherniak, Christopher; Changizi, Mark; Won Kang, Du

    1999-05-01

    At the global as well as local scales, some of the geometry of types of neuron arbors-both dendrites and axons-appears to be self-organizing: Their morphogenesis behaves like flowing water, that is, fluid dynamically; waterflow in branching networks in turn acts like a tree composed of cords under tension, that is, vector mechanically. Branch diameters and angles and junction sites conform significantly to this model. The result is that such neuron tree samples globally minimize their total volume-rather than, for example, surface area or branch length. In addition, the arbors perform well at generating the cheapest topology interconnecting their terminals: their large-scale layouts are among the best of all such possible connecting patterns, approaching 5% of optimum. This model also applies comparably to arterial and river networks.

  14. Grid sensitivity capability for large scale structures

    NASA Technical Reports Server (NTRS)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  15. Large Scale Quantum Simulations of Nuclear Pasta

    NASA Astrophysics Data System (ADS)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

    Complex and exotic nuclear geometries collectively referred to as "nuclear pasta" are expected to naturally exist in the crust of neutron stars and in supernovae matter. Using a set of self-consistent microscopic nuclear energy density functionals we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 < ρ < 0.10 fm^-3 and proton fractions 0.05 …

  16. Primer design for large scale sequencing.

    PubMed Central

    Haas, S; Vingron, M; Poustka, A; Wiemann, S

    1998-01-01

    We have developed PRIDE, a primer design program that automatically designs primers in single contigs or whole sequencing projects to extend the already known sequence and to double strand single-stranded regions. The program is fully integrated into the Staden package (GAP4) and accessible with a graphical user interface. PRIDE uses a fuzzy logic-based system to calculate primer qualities. The computational performance of PRIDE is enhanced by using suffix trees to store the huge amount of data being produced. A test set of 110 sequencing primers and 11 PCR primer pairs has been designed on genomic templates, cDNAs and sequences containing repetitive elements to analyze PRIDE's success rate. The high performance of PRIDE, combined with its minimal requirement of user interaction and its fast algorithm, make this program useful for the large scale design of primers, especially in large sequencing projects. PMID:9611248

  17. Large scale study of tooth enamel

    SciTech Connect

    Bodart, F.; Deconninck, G.; Martin, M.Th.

    1981-04-01

    Human tooth enamel contains traces of foreign elements. The presence of these elements is related to the history and the environment of the human body and can be considered as the signature of perturbations which occur during the growth of a tooth. A map of the distribution of these traces on a large scale sample of the population will constitute a reference for further investigations of environmental effects. One hundred eighty samples of teeth were first analysed using PIXE, backscattering and nuclear reaction techniques. The results were analysed using statistical methods. Correlations between O, F, Na, P, Ca, Mn, Fe, Cu, Zn, Pb and Sr were observed and cluster analysis was in progress. The techniques described in the present work have been developed in order to establish a method for the exploration of very large samples of the Belgian population.

  18. Modeling the Internet's large-scale topology

    PubMed Central

    Yook, Soon-Hyung; Jeong, Hawoong; Barabási, Albert-László

    2002-01-01

    Network generators that capture the Internet's large-scale topology are crucial for the development of efficient routing protocols and modeling Internet traffic. Our ability to design realistic generators is limited by the incomplete understanding of the fundamental driving forces that affect the Internet's evolution. By combining several independent databases capturing the time evolution, topology, and physical layout of the Internet, we identify the universal mechanisms that shape the Internet's router and autonomous system level topology. We find that the physical layout of nodes forms a fractal set, determined by population density patterns around the globe. The placement of links is driven by competition between preferential attachment and linear distance dependence, a marked departure from the currently used exponential laws. The universal parameters that we extract significantly restrict the class of potentially correct Internet models and indicate that the networks created by all available topology generators are fundamentally different from the current Internet. PMID:12368484
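
    A toy generator in the spirit of these findings is easy to write: place nodes with a nonuniform spatial density, then attach each new node to an existing one with probability proportional to degree divided by distance (preferential attachment combined with linear distance dependence). All parameters below are invented; this is not the authors' generator.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 200
      pos = rng.random((n, 2)) ** 2       # crude stand-in for uneven density
      degree = np.ones(n)
      edges = []
      for i in range(1, n):
          d = np.linalg.norm(pos[:i] - pos[i], axis=1) + 1e-3
          w = degree[:i] / d              # attachment weight ~ k_j / d_ij
          j = rng.choice(i, p=w / w.sum())
          edges.append((i, j))
          degree[i] += 1
          degree[j] += 1

      print("edges:", len(edges), "max degree:", int(degree.max()))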

  19. Voids in the Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    El-Ad, Hagai; Piran, Tsvi

    1997-12-01

    Voids are the most prominent feature of the large-scale structure of the universe. Still, their incorporation into quantitative analysis of it has been relatively recent, owing essentially to the lack of an objective tool to identify the voids and to quantify them. To overcome this, we present here the VOID FINDER algorithm, a novel tool for objectively quantifying voids in the galaxy distribution. The algorithm first classifies galaxies as either wall galaxies or field galaxies. Then, it identifies voids in the wall-galaxy distribution. Voids are defined as continuous volumes that do not contain any wall galaxies. The voids must be thicker than an adjustable limit, which is refined in successive iterations. In this way, we identify the same regions that would be recognized as voids by the eye. Small breaches in the walls are ignored, avoiding artificial connections between neighboring voids. We test the algorithm using Voronoi tessellations. By appropriate scaling of the parameters with the selection function, we apply it to two redshift surveys, the dense SSRS2 and the full-sky IRAS 1.2 Jy. Both surveys show similar properties: ~50% of the volume is filled by voids. The voids have a scale of at least 40 h^-1 Mpc and an average underdensity of -0.9. Faint galaxies do not fill the voids, but they do populate them more than bright ones. These results suggest that both optically and IRAS-selected galaxies delineate the same large-scale structure. Comparison with the recovered mass distribution further suggests that the observed voids in the galaxy distribution correspond well to underdense regions in the mass distribution. This confirms the gravitational origin of the voids.
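
    The two stages of the algorithm (wall/field classification, then empty-volume search) can be caricatured in a few lines of Python. Here a galaxy counts as a "wall" galaxy if its third-nearest neighbour is close, and grid cells whose largest empty sphere exceeds a limit count as void volume; every threshold is invented and the real algorithm's iterative refinement is omitted.

      import numpy as np

      rng = np.random.default_rng(2)
      gal = rng.random((500, 3)) * 100.0        # mock galaxy positions, Mpc

      def kth_neighbour_dist(points, k):
          d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
          d.sort(axis=1)
          return d[:, k]        # column 0 is the self-distance of zero

      wall = gal[kth_neighbour_dist(gal, 3) < 8.0]    # wall galaxies only

      grid = np.linspace(0.0, 100.0, 21)
      void_cells = 0
      for x in grid:
          for y in grid:
              for z in grid:
                  r = np.linalg.norm(wall - np.array([x, y, z]), axis=1).min()
                  if r > 10.0:   # empty sphere thicker than the limit
                      void_cells += 1

      print("void volume fraction ~", void_cells / 21 ** 3)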

  20. Supporting large-scale computational science

    SciTech Connect

    Musick, R

    1998-10-01

    A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: 1. Several commercial DBMS systems have demonstrated storage and ad-hoc query access to Terabyte data sets. 2. Several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme. 3. Several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data. 4. In some cases, performance is a moot issue. This is true in particular if the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.

  1. Improving Recent Large-Scale Pulsar Surveys

    NASA Astrophysics Data System (ADS)

    Cardoso, Rogerio Fernando; Ransom, S.

    2011-01-01

    Pulsars are unique in that they act as celestial laboratories for precise tests of gravity and other extreme physics (Kramer 2004). There are approximately 2000 known pulsars today, which is less than ten percent of the pulsars in the Milky Way according to theoretical models (Lorimer 2004). Of these 2000 known pulsars, approximately ten percent are millisecond pulsars, objects used for their period stability for detailed physics tests and searches for gravitational radiation (Lorimer 2008). As the field and instrumentation progress, pulsar astronomers attempt to overcome observational biases and detect new pulsars, consequently discovering new millisecond pulsars. We attempt to improve large scale pulsar surveys by examining three recent pulsar surveys. The first, the Green Bank Telescope 350 MHz Drift Scan, a low frequency isotropic survey of the northern sky, has yielded a large number of candidates that were visually inspected and identified, resulting in over 34,000 candidates viewed, dozens of detections of known pulsars, and the discovery of a new low-flux pulsar, PSR J1911+22. The second, the PALFA survey, is a high frequency survey of the galactic plane with the Arecibo telescope. We created a processing pipeline for the PALFA survey at the National Radio Astronomy Observatory in Charlottesville, VA, in addition to making needed modifications upon advice from the PALFA consortium. The third survey examined is a new GBT 820 MHz survey devoted to finding new millisecond pulsars by observing the target-rich environment of unidentified sources in the FERMI LAT catalogue. By approaching these three pulsar surveys at different stages, we seek to improve the success rates of large scale surveys, and hence the possibility for ground-breaking work in both basic physics and astrophysics.

  2. Supporting large-scale computational science

    SciTech Connect

    Musick, R., LLNL

    1998-02-19

    Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad-hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of the paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.
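
    One minimal pattern for the mismatch described here is to keep run metadata relational, store the arrays as blobs, and let SQL drive the ad-hoc selection while a numerical library does the science. The schema and sizes below are invented for illustration.

      import sqlite3
      import numpy as np

      con = sqlite3.connect(":memory:")
      con.execute("CREATE TABLE runs (id INTEGER PRIMARY KEY, step INT, field BLOB)")
      for step in range(5):
          field = np.random.rand(64, 64).astype(np.float32)   # mock simulation output
          con.execute("INSERT INTO runs (step, field) VALUES (?, ?)",
                      (step, field.tobytes()))

      # ad-hoc SQL picks the runs; numpy analyses the decoded arrays
      for step, blob in con.execute("SELECT step, field FROM runs WHERE step >= 3"):
          field = np.frombuffer(blob, dtype=np.float32).reshape(64, 64)
          print(step, float(field.mean()))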

  3. Introducing Large-Scale Innovation in Schools

    NASA Astrophysics Data System (ADS)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

    Education reform initiatives tend to promise higher effectiveness in classrooms, especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  4. Large-scale sequential quadratic programming algorithms

    SciTech Connect

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
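
    The problem class is easy to exhibit at toy scale with an off-the-shelf SQP-type solver; SciPy's SLSQP handles the dense, small version of exactly this formulation (nonlinear objective with mixed equality and inequality constraints). The algorithm described here differs precisely in how it scales such iterations to large sparse problems.

      from scipy.optimize import minimize

      objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
      constraints = [
          {"type": "eq",   "fun": lambda x: x[0] + x[1] - 3.0},  # x0 + x1 = 3
          {"type": "ineq", "fun": lambda x: x[0] - 0.5},         # x0 >= 0.5
      ]

      result = minimize(objective, x0=[2.0, 0.0], method="SLSQP",
                        constraints=constraints)
      print(result.x, result.fun)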

  5. Enhanced bio-manufacturing through advanced multivariate statistical technologies.

    PubMed

    Martin, E B; Morris, A J

    2002-11-13

    The paper describes the interrogation of data from a reaction vessel producing an active pharmaceutical ingredient (API) using advanced multivariate statistical techniques. Due to the limited number of batches available, data augmentation was used to increase the number of batches, thereby enabling the extraction of more subtle process behaviour from the data. A second methodology investigated was that of multi-group modelling. This allowed between-cluster variability to be removed, thus allowing attention to focus on within-process variability. The paper describes how the different approaches enabled a better understanding of the factors causing the onset of impurity formation, as well as demonstrating the power of multivariate statistical data analysis techniques to provide an enhanced understanding of the process.
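
    A bare-bones version of the multivariate idea is to project batch data onto a few principal components and score new batches with Hotelling's T^2 statistic; high values flag batches whose variable correlations depart from the reference set. The synthetic data and component count below are placeholders, and the paper's data augmentation and multi-group steps are omitted.

      import numpy as np

      rng = np.random.default_rng(3)
      X = rng.standard_normal((30, 8))     # 30 reference batches, 8 variables
      mean = X.mean(axis=0)
      U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
      k = 2                                # retained principal components
      scores = (X - mean) @ Vt[:k].T
      score_var = scores.var(axis=0, ddof=1)

      def hotelling_t2(batch):
          t = (batch - mean) @ Vt[:k].T    # project onto the PC subspace
          return float(np.sum(t ** 2 / score_var))

      print(hotelling_t2(rng.standard_normal(8)))   # score for a new batch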

  6. Advancing Measurement Science to Assess Monitoring, Diagnostics, and Prognostics for Manufacturing Robotics

    PubMed Central

    Qiao, Guixiu; Weiss, Brian A.

    2016-01-01

    Unexpected equipment downtime is a ‘pain point’ for manufacturers, especially in that this event usually translates to financial losses. To minimize this pain point, manufacturers are developing new health monitoring, diagnostic, prognostic, and maintenance (collectively known as prognostics and health management (PHM)) techniques to advance the state-of-the-art in their maintenance strategies. The manufacturing community has a wide-range of needs with respect to the advancement and integration of PHM technologies to enhance manufacturing robotic system capabilities. Numerous researchers, including personnel from the National Institute of Standards and Technology (NIST), have identified a broad landscape of barriers and challenges to advancing PHM technologies. One such challenge is the verification and validation of PHM technology through the development of performance metrics, test methods, reference datasets, and supporting tools. Besides documenting and presenting the research landscape, NIST personnel are actively researching PHM for robotics to promote the development of innovative sensing technology and prognostic decision algorithms and to produce a positional accuracy test method that emphasizes the identification of static and dynamic positional accuracy. The test method development will provide manufacturers with a methodology that will allow them to quickly assess the positional health of their robot systems along with supporting the verification and validation of PHM techniques for the robot system. PMID:28058172

  7. Advancing Measurement Science to Assess Monitoring, Diagnostics, and Prognostics for Manufacturing Robotics.

    PubMed

    Qiao, Guixiu; Weiss, Brian A

    2016-01-01

    Unexpected equipment downtime is a 'pain point' for manufacturers, especially in that this event usually translates to financial losses. To minimize this pain point, manufacturers are developing new health monitoring, diagnostic, prognostic, and maintenance (collectively known as prognostics and health management (PHM)) techniques to advance the state-of-the-art in their maintenance strategies. The manufacturing community has a wide-range of needs with respect to the advancement and integration of PHM technologies to enhance manufacturing robotic system capabilities. Numerous researchers, including personnel from the National Institute of Standards and Technology (NIST), have identified a broad landscape of barriers and challenges to advancing PHM technologies. One such challenge is the verification and validation of PHM technology through the development of performance metrics, test methods, reference datasets, and supporting tools. Besides documenting and presenting the research landscape, NIST personnel are actively researching PHM for robotics to promote the development of innovative sensing technology and prognostic decision algorithms and to produce a positional accuracy test method that emphasizes the identification of static and dynamic positional accuracy. The test method development will provide manufacturers with a methodology that will allow them to quickly assess the positional health of their robot systems along with supporting the verification and validation of PHM techniques for the robot system.

  8. Advanced carbon manufacturing for energy and biological applications

    NASA Astrophysics Data System (ADS)

    Turon Teixidor, Genis

    The science of miniaturization has experienced revolutionary advances during the last decades, witnessing the development of the Integrated Circuit and the emergence of MEMS and Nanotechnology. In particular, MEMS technology has pioneered the use of non-traditional materials in microfabrication by adding polymers, ceramics and composites to the well-known list of metals and semiconductors. One of the latest additions to this set of materials is carbon, which represents a very important inclusion given its significance in electrochemical energy conversion systems and in applications where it is used as a sensor probe material. For these applications, carbon is optimal on several counts: it has a wide electrochemical stability window, good electrical and thermal conductivity, high corrosion resistance and mechanical stability, and is available in high purity at low cost. Furthermore, carbon is biocompatible. This thesis presents several microfabricated devices that take advantage of these properties. The thesis has two clearly differentiated parts. In the first one, applications of micromachined carbon in the field of energy conversion and energy storage are presented. These applications include lithium ion micro batteries and the development of new carbon electrodes with fractal geometries. In the second part, the focus shifts to biological applications. First, the study of the interaction of living cells with micromachined carbon is presented, followed by the description of a sensor based on interdigitated nano-electrode arrays, and finally the development of the new instrumentation needed to address arrays of carbon electrodes, a multiplexed potentiostat. The underlying theme that connects all these seemingly different topics is the use of carbon microfabrication techniques in electrochemical systems.

  9. Isotope separation and advanced manufacturing technology. Volume 2, No. 2, Semiannual report, April--September 1993

    SciTech Connect

    Kan, Tehmanu; Carpenter, J.

    1993-12-31

    This is the second issue of a semiannual report for the Isotope Separation and Advanced Manufacturing (ISAM) Technology Program at Lawrence Livermore National Laboratory. Primary objectives of the ISAM Program include: the Uranium Atomic Vapor Laser Isotope Separation (U-AVLIS) process, and advanced manufacturing technologies which include industrial laser materials processing and new manufacturing technologies for uranium, plutonium, and other strategically important materials in support of DOE and other national applications. Topics included in this issue are: production plant product system conceptual design, development and operation of a solid-state switch for thyratron replacement, high-performance optical components for high average power laser systems, use of diode laser absorption spectroscopy for control of uranium vaporization rates, a two-dimensional time dependent hydrodynamical ion extraction model, and design of a formaldehyde photodissociation process for carbon and oxygen isotope separation.

  10. Advanced Manufacturing as an Online Case Study for Global Geography Education

    ERIC Educational Resources Information Center

    Glass, Michael R.; Kalafsky, Ronald V.; Drake, Dawn M.

    2013-01-01

    Advanced manufacturing continues to be an important sector for emerging and industrialized economies, therefore remaining an important topic for economic geography education. This article describes a case study created for the Association of American Geographers' Center for Global Geography Education and its implementation. The international…

  11. Opportunities for the Advancement of Home Economists in the Food Manufacturing Industry.

    ERIC Educational Resources Information Center

    Michael, Carol M.

    1999-01-01

    Responses from 133 home economists employed by food manufacturers showed that many have high aspirations but few have advanced to upper-level management. Factors influencing business success included years with company and in career and mentor/sponsor relationships. Many felt limited by lack of business background and the service orientation of…

  12. National Skill Standards for Advanced High Performance Manufacturing. Version 2.1.

    ERIC Educational Resources Information Center

    National Coalition for Advanced Manufacturing, Washington, DC.

    This document presents and discusses the national skill standards for advanced high-performance manufacturing that were developed during a project that was commissioned by the U.S. Department of Education. The introduction explains the need for national skill standards. Discussed in the next three sections are the following: benefits of national…

  13. Large-scale tides in general relativity

    NASA Astrophysics Data System (ADS)

    Ip, Hiu Yan; Schmidt, Fabian

    2017-02-01

    Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the "separate universe" paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation of Hui and Bertschinger [1]. We also show that this very simple set of equations matches the exact evolution of the density field at second order, but fails at third and higher order. This provides a useful, easy-to-use framework for computing the fully relativistic growth of structure at second order.
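
    For readers unfamiliar with the decomposition the abstract leans on: for an observer with four-velocity u^alpha, the tidal (electric) part of the Weyl tensor is the standard contraction

      E_{\mu\nu} = C_{\mu\alpha\nu\beta} \, u^{\alpha} u^{\beta},

    which is what carries the large-scale tidal field in the CFC frame, in contrast to the Ricci-sourced anisotropy of a Bianchi I spacetime.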

  14. Large scale digital atlases in neuroscience

    NASA Astrophysics Data System (ADS)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining this data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and in addition to atlases of the human includes high quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project, a genome-wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  15. Food appropriation through large scale land acquisitions

    NASA Astrophysics Data System (ADS)

    Rulli, Maria Cristina; D'Odorico, Paolo

    2014-05-01

    The increasing demand for agricultural products and the uncertainty of international food markets has recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300-550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security to the local populations.
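
    The accounting behind such headline numbers is simple enough to check on the back of an envelope: people fed equals area times yield times energy content divided by per-capita energy requirement. Every number below is an illustrative assumption, not a figure from the paper.

      # Order-of-magnitude check with invented inputs.
      area_ha = 40e6              # assumed acquired cropland, hectares
      yield_t_per_ha = 3.0        # assumed cereal yield with closed yield gap
      kcal_per_tonne = 3.2e6      # rough energy content of cereals
      kcal_per_person_yr = 2500 * 365

      people = area_ha * yield_t_per_ha * kcal_per_tonne / kcal_per_person_yr
      print(f"~{people / 1e6:.0f} million people")   # lands in the hundreds of millions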

  16. Large scale mechanical metamaterials as seismic shields

    NASA Astrophysics Data System (ADS)

    Miniaci, Marco; Krushynska, Anastasiia; Bosia, Federico; Pugno, Nicola M.

    2016-08-01

    Earthquakes represent one of the most catastrophic natural events affecting mankind. At present, a universally accepted risk mitigation strategy for seismic events remains to be proposed. Most approaches are based on vibration isolation of structures rather than on the remote shielding of incoming waves. In this work, we propose a novel approach to the problem and discuss the feasibility of a passive isolation strategy for seismic waves based on large-scale mechanical metamaterials, including, for the first time, numerical analysis of both surface and guided waves, soil dissipation effects, and full 3D simulations. The study focuses on realistic structures that can be effective in frequency ranges of interest for seismic waves, and optimal design criteria are provided by exploring different metamaterial configurations, combining phononic crystals and locally resonant structures, and considering different ranges of mechanical properties. Dispersion analysis and full-scale 3D transient wave transmission simulations are carried out on finite-size systems to assess the seismic wave amplitude attenuation in realistic conditions. Results reveal that both surface and bulk seismic waves can be considerably attenuated, making this strategy viable for the protection of civil structures against seismic risk. The proposed remote shielding approach could open up new perspectives in the field of seismology and in related areas of low-frequency vibration damping or blast protection.
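
    The band gaps that such metamaterials exploit are easiest to see in the textbook one-dimensional analogue. A minimal sketch of the dispersion relation of a diatomic mass-spring chain, with illustrative parameters rather than values from the paper:

        # Band structure of a 1D diatomic mass-spring chain -- a textbook
        # analogue of the band gaps that block wave propagation in metamaterials.
        # Masses, stiffness, and spacing are illustrative assumptions.
        import numpy as np

        m1, m2, k, a = 1.0, 3.0, 1.0, 1.0   # masses, spring constant, lattice spacing

        q = np.linspace(-np.pi / a, np.pi / a, 401)   # first Brillouin zone
        s = np.sqrt((m1 + m2) ** 2 - 4 * m1 * m2 * np.sin(q * a / 2) ** 2)
        w_acoustic = np.sqrt(k * (m1 + m2 - s) / (m1 * m2))
        w_optical = np.sqrt(k * (m1 + m2 + s) / (m1 * m2))

        # Waves with frequencies inside the gap cannot propagate through the chain.
        print(f"band gap: {w_acoustic.max():.3f} < omega < {w_optical.min():.3f}")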

  17. Large-scale carbon fiber tests

    NASA Technical Reports Server (NTRS)

    Pride, R. A.

    1980-01-01

    A realistic release of carbon fibers was established by burning a minimum of 45 kg of carbon fiber composite aircraft structural components in each of five large scale, outdoor aviation jet fuel fire tests. This release was quantified by several independent assessments with various instruments developed specifically for these tests. The most likely values for the mass of single carbon fibers released ranged from 0.2 percent of the initial mass of carbon fiber for the source tests (zero wind velocity) to a maximum of 0.6 percent of the initial carbon fiber mass for dissemination tests (5 to 6 m/s wind velocity). Mean fiber lengths for fibers greater than 1 mm in length ranged from 2.5 to 3.5 mm. Mean diameters ranged from 3.6 to 5.3 micrometers, which was indicative of significant oxidation. Footprints of downwind dissemination of the fire-released fibers were measured to 19.1 km from the fire.

  18. Evaluating Unmanned Aerial Platforms for Cultural Heritage Large Scale Mapping

    NASA Astrophysics Data System (ADS)

    Georgopoulos, A.; Oikonomou, C.; Adamopoulos, E.; Stathopoulou, E. K.

    2016-06-01

    Large scale mapping of limited areas, especially cultural heritage sites, poses particular challenges. Optical and non-optical sensors, such as LiDAR units, have been developed to sizes and weights that can be lifted by unmanned aerial platforms. At the same time, there is increasing emphasis on solutions that enable users to get access to 3D information faster and cheaper. Considering the multitude of platforms and cameras, and the advancement of algorithms in conjunction with the increase in available computing power, this challenge should be, and indeed is, further investigated. In this paper a short review of current UAS technologies is attempted. A discussion follows as to their applicability and advantages, depending on their specifications, which vary immensely. The on-board cameras available are also compared and evaluated for large scale mapping. Furthermore, a thorough analysis, review, and experimentation with different software implementations of Structure from Motion and Multiple View Stereo algorithms, able to process dense and mostly unordered sequences of digital images, is also conducted and presented. As a test data set, we use rich optical and thermal data from both fixed wing and multi-rotor platforms over an archaeological excavation with adverse height variations, acquired using different cameras. Dense 3D point clouds, digital terrain models and orthophotos have been produced and evaluated for their radiometric as well as metric qualities.
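
    At the core of every Structure from Motion implementation compared in such studies is the two-view step: match features between an image pair, then recover the relative camera pose. A minimal sketch with OpenCV, where the image paths and the camera intrinsics K are placeholders, not values from the paper:

        # Two-view core of an SfM pipeline: detect features, match, estimate the
        # essential matrix, recover relative pose. Paths and K are placeholders.
        import cv2
        import numpy as np

        K = np.array([[3000.0, 0, 2000], [0, 3000.0, 1500], [0, 0, 1]])  # assumed intrinsics

        img1 = cv2.imread("frame_0001.jpg", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("frame_0002.jpg", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # Lowe's ratio test to keep only distinctive matches.
        matcher = cv2.BFMatcher()
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                if m.distance < 0.75 * n.distance]

        pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        print("relative rotation:\n", R, "\ntranslation direction:", t.ravel())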

  19. Sensitivity technologies for large scale simulation.

    SciTech Connect

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms, and standard interfaces to facilitate the implementation of sensitivity type analysis into existing code; equally important, the work focused on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms, and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first
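
    The economy of the adjoint approach, one extra linear solve yielding the gradient with respect to every parameter at once, is easiest to see on a toy steady-state case. A minimal sketch with an assumed two-parameter linear system, not any of the report's applications:

        # Adjoint sensitivity for A(p) u = f with objective J = c @ u.
        # One adjoint solve gives dJ/dp for all parameters; toy 2-parameter case.
        import numpy as np

        def A(p):                            # system matrix depending on parameters p
            return np.array([[2 + p[0], -1.0], [-1.0, 2 + p[1]]])

        f = np.array([1.0, 0.0])
        c = np.array([0.0, 1.0])             # J(u) = c @ u, so dJ/du = c

        p = np.array([0.5, 0.25])
        u = np.linalg.solve(A(p), f)         # forward solve
        lam = np.linalg.solve(A(p).T, c)     # single adjoint solve

        # dJ/dp_i = -lam^T (dA/dp_i) u   (f does not depend on p here)
        dA = [np.array([[1.0, 0], [0, 0]]), np.array([[0, 0], [0, 1.0]])]
        grad = np.array([-lam @ (dAi @ u) for dAi in dA])
        print("adjoint gradient:", grad)

        # Finite-difference check of the first component
        eps = 1e-6
        J0 = c @ np.linalg.solve(A(p), f)
        J1 = c @ np.linalg.solve(A(p + np.array([eps, 0.0])), f)
        print("fd gradient:    ", (J1 - J0) / eps)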

  20. Synchronization of coupled large-scale Boolean networks

    NASA Astrophysics Data System (ADS)

    Li, Fangfei

    2014-03-01

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm for large-scale Boolean networks is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.
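
    Complete synchronization here means that, after a transient, a drive and a response Boolean network visit identical states at every step. A toy brute-force check of that property on a hypothetical two-node drive-response pair; the paper's aggregation algorithm is not reproduced:

        # Toy drive-response Boolean network pair, checked for complete
        # synchronization by simulating from every joint initial state.
        from itertools import product

        def step(x):
            """One synchronous update of the 2-node drive network."""
            x1, x2 = x
            return (x2, x1 and x2)

        def coupled_step(x, y):
            """Drive updates freely; the response is unidirectionally coupled."""
            xn = step(x)
            yn = (y[1], xn[1])    # response receives the drive's second node
            return xn, yn

        def synchronizes(steps=4):
            for x0 in product([0, 1], repeat=2):
                for y0 in product([0, 1], repeat=2):
                    x, y = x0, y0
                    for _ in range(steps):
                        x, y = coupled_step(x, y)
                    if x != y:
                        return False
            return True

        print("complete synchronization:", synchronizes())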

  1. Synchronization of coupled large-scale Boolean networks

    SciTech Connect

    Li, Fangfei

    2014-03-15

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm for large-scale Boolean networks is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.

  2. Large-Scale Sequencing: The Future of Genomic Sciences Colloquium

    SciTech Connect

    Margaret Riley; Merry Buckley

    2009-01-01

    Genetic sequencing and the various molecular techniques it has enabled have revolutionized the field of microbiology. Examining and comparing the genetic sequences borne by microbes - including bacteria, archaea, viruses, and microbial eukaryotes - provides researchers insights into the processes microbes carry out, their pathogenic traits, and new ways to use microorganisms in medicine and manufacturing. Until recently, sequencing entire microbial genomes has been laborious and expensive, and the decision to sequence the genome of an organism was made on a case-by-case basis by individual researchers and funding agencies. Now, thanks to new technologies, the cost and effort of sequencing is within reach for even the smallest facilities, and the ability to sequence the genomes of a significant fraction of microbial life may be possible. The availability of numerous microbial genomes will enable unprecedented insights into microbial evolution, function, and physiology. However, the current ad hoc approach to gathering sequence data has resulted in an unbalanced and highly biased sampling of microbial diversity. A well-coordinated, large-scale effort to target the breadth and depth of microbial diversity would result in the greatest impact. The American Academy of Microbiology convened a colloquium to discuss the scientific benefits of engaging in a large-scale, taxonomically-based sequencing project. A group of individuals with expertise in microbiology, genomics, informatics, ecology, and evolution deliberated on the issues inherent in such an effort and generated a set of specific recommendations for how best to proceed. The vast majority of microbes are presently uncultured and, thus, pose significant challenges to such a taxonomically-based approach to sampling genome diversity. However, we have yet to even scratch the surface of the genomic diversity among cultured microbes. A coordinated sequencing effort of cultured organisms is an appropriate place to begin

  3. Large scale structure of the globular cluster population in Coma

    NASA Astrophysics Data System (ADS)

    Gagliano, Alexander T.; O'Neill, Conor; Madrid, Juan P.

    2016-01-01

    A search for globular cluster candidates in the Coma Cluster was carried out using Hubble Space Telescope data taken with the Advanced Camera for Surveys. We combine different observing programs, including the Coma Treasury Survey, in order to obtain the large scale distribution of globular clusters in Coma. Globular cluster candidates were selected through careful morphological inspection and a detailed analysis of their magnitude and colors in the two available wavebands, F475W (Sloan g) and F814W (I). Color magnitude diagrams, radial density plots, and density maps were then created to characterize the globular cluster population in Coma. Preliminary results show the structure of the intergalactic globular cluster system throughout Coma, drawn from one of the largest globular cluster catalogues to date. The spatial distribution of globular clusters shows clear overdensities, or bridges, between Coma galaxies. It also becomes evident that galaxies of similar luminosity have vastly different numbers of associated globular clusters.

  4. Computational solutions to large-scale data management and analysis

    PubMed Central

    Schadt, Eric E.; Linderman, Michael D.; Sorenson, Jon; Lee, Lawrence; Nolan, Garry P.

    2011-01-01

    Today we can generate hundreds of gigabases of DNA and RNA sequencing data in a week for less than US$5,000. The astonishing rate of data generation by these low-cost, high-throughput technologies in genomics is being matched by that of other technologies, such as real-time imaging and mass spectrometry-based flow cytometry. Success in the life sciences will depend on our ability to properly interpret the large-scale, high-dimensional data sets that are generated by these technologies, which in turn requires us to adopt advances in informatics. Here we discuss how we can master the different types of computational environments that exist — such as cloud and heterogeneous computing — to successfully tackle our big data problems. PMID:20717155

  5. Large-Scale Quantitative Analysis of Painting Arts

    PubMed Central

    Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong

    2014-01-01

    Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paintings to build a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images – the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety in the medieval period. Interestingly, moreover, the increase of the roughness exponent as painting techniques such as chiaroscuro and sfumato advanced is consistent with historical circumstances. PMID:25501877
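
    Two of the three measures have simple digital-image analogues. A sketch computing a color-variety score (Shannon entropy of quantized RGB values) and a crude brightness-roughness proxy for an arbitrary image file; the paper's exact estimators, such as the roughness exponent, are not reproduced:

        # Illustrative image statistics in the spirit of the paper's measures.
        import numpy as np
        from PIL import Image

        def color_entropy(path, levels=16):
            """Shannon entropy (bits) of quantized RGB colors; higher = richer palette."""
            rgb = np.asarray(Image.open(path).convert("RGB")) // (256 // levels)
            codes = rgb[..., 0] * levels**2 + rgb[..., 1] * levels + rgb[..., 2]
            p = np.bincount(codes.ravel(), minlength=levels**3).astype(float)
            p = p[p > 0] / p.sum()
            return -(p * np.log2(p)).sum()

        def brightness_roughness(path):
            """RMS brightness difference between horizontally adjacent pixels."""
            gray = np.asarray(Image.open(path).convert("L"), dtype=float)
            return np.sqrt(np.mean(np.diff(gray, axis=1) ** 2))

        print(color_entropy("painting.jpg"), brightness_roughness("painting.jpg"))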

  6. Large-Scale Quantitative Analysis of Painting Arts

    NASA Astrophysics Data System (ADS)

    Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong

    2014-12-01

    Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paintings to build a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images - the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety in the medieval period. Interestingly, moreover, the increase of the roughness exponent as painting techniques such as chiaroscuro and sfumato advanced is consistent with historical circumstances.

  7. Towards large-scale, human-based, mesoscopic neurotechnologies.

    PubMed

    Chang, Edward F

    2015-04-08

    Direct human brain recordings have transformed the scope of neuroscience in the past decade. Progress has relied upon currently available neurophysiological approaches in the context of patients undergoing neurosurgical procedures for medical treatment. While this setting has provided precious opportunities for scientific research, it also has presented significant constraints on the development of new neurotechnologies. A major challenge now is how to achieve high-resolution spatiotemporal neural recordings at a large scale. By narrowing the gap between current approaches, new directions tailored to the mesoscopic (intermediate) scale of resolution may overcome the barriers towards safe and reliable human-based neurotechnology development, with major implications for advancing both basic research and clinical translation.

  8. Applications of large-scale density functional theory in biology

    NASA Astrophysics Data System (ADS)

    Cole, Daniel J.; Hine, Nicholas D. M.

    2016-10-01

    Density functional theory (DFT) has become a routine tool for the computation of electronic structure in the physics, materials and chemistry fields. Yet the application of traditional DFT to problems in the biological sciences is hindered, to a large extent, by the unfavourable scaling of the computational effort with system size. Here, we review some of the major software and functionality advances that enable insightful electronic structure calculations to be performed on systems comprising many thousands of atoms. We describe some of the early applications of large-scale DFT to the computation of the electronic properties and structure of biomolecules, as well as to paradigmatic problems in enzymology, metalloproteins, photosynthesis and computer-aided drug design. With this review, we hope to demonstrate that first principles modelling of biological structure-function relationships is approaching reality.

  9. Large scale rigidity-based flexibility analysis of biomolecules

    PubMed Central

    Streinu, Ileana

    2016-01-01

    KINematics And RIgidity (KINARI) is an ongoing project for in silico flexibility analysis of proteins. The new version of the software, Kinari-2, extends the functionality of our free web server KinariWeb, incorporates advanced web technologies, emphasizes the reproducibility of its experiments, and makes substantially improved tools available to the user. It is designed specifically for large scale experiments, in particular, for (a) very large molecules, including bioassemblies with a high degree of symmetry such as viruses and crystals, (b) large collections of related biomolecules, such as those obtained through simulated dilutions, mutations, or conformational changes from various types of dynamics simulations, and (c) working as seamlessly as possible on the large, idiosyncratic, publicly available repository of biomolecules, the Protein Data Bank. We describe the system design, along with the main data processing, computational, mathematical, and validation challenges underlying this phase of the KINARI project. PMID:26958583

  10. Battery technologies for large-scale stationary energy storage.

    PubMed

    Soloveichik, Grigorii L

    2011-01-01

    In recent years, with the deployment of renewable energy sources, advances in electrified transportation, and development in smart grids, the markets for large-scale stationary energy storage have grown rapidly. Electrochemical energy storage methods are strong candidate solutions due to their high energy density, flexibility, and scalability. This review provides an overview of mature and emerging technologies for secondary and redox flow batteries. New developments in the chemistry of secondary and flow batteries as well as regenerative fuel cells are also considered. Advantages and disadvantages of current and prospective electrochemical energy storage options are discussed. The most promising technologies in the short term are high-temperature sodium batteries with β″-alumina electrolyte, lithium-ion batteries, and flow batteries. Regenerative fuel cells and lithium metal batteries with high energy density require further research to become practical.

  11. Innovative and Efficient Manufacturing Technologies for Highly Advanced Composite Pressure Vessels

    NASA Astrophysics Data System (ADS)

    Hock, Birte; Regnet, Martin; Bickelaier, Stefan; Henne, Florian; Sause, Markus G. R.; Schmidt, Thomas; Geiss, Gunter

    2014-06-01

    The currently ongoing development project at MT Aerospace (MTA) deals with a cost efficient manufacturing process for space structures. Thermoplastic fibre placement, which was identified as one of the most forward-looking technologies, promises advantages such as shorter cycle times and a high level of automation. In addition to the manufacturing method, research activities on non-destructive inspection methods and on acoustic emission analysis are performed. The analysis of the components will also be improved using advanced modelling approaches. The capability of the processes and methods will be shown on the basis of a scaled solid rocket motor casing.

  12. Large-Scale Graphene Film Deposition for Monolithic Device Fabrication

    NASA Astrophysics Data System (ADS)

    Al-shurman, Khaled

    Since 1958, the concept of the integrated circuit (IC) has driven great technological developments and helped in shrinking electronic devices. Nowadays, an IC consists of more than a million densely packed transistors. The majority of current ICs use silicon as the semiconductor material. According to Moore's law, the number of transistors built onto a microchip doubles roughly every two years. However, silicon device manufacturing is reaching its physical limits: as circuitry shrinks toward seven nanometers, poorly understood quantum effects such as tunneling can no longer be controlled. Hence, there is an urgent need for a new platform material to replace Si. Graphene is considered a promising material with enormous potential applications in many electronic and optoelectronic devices due to its superior properties. There are several techniques to produce graphene films. Among these techniques, chemical vapor deposition (CVD) offers a very convenient method to fabricate large-scale graphene films. Though the CVD method is suitable for large area growth of graphene, transferring the graphene film to silicon-based substrates is still required. Furthermore, the graphene films thus achieved are, in fact, not single crystalline. Also, graphene fabrication utilizing Cu and Ni at high growth temperature contaminates both the substrate that holds the Si CMOS circuitry and the CVD chamber. So, lowering the deposition temperature is another technological milestone for the successful adoption of graphene in integrated circuit fabrication. In this research, direct large-scale graphene film fabrication on silicon-based platforms (i.e. SiO2 and Si3N4) at low temperature was achieved. With a focus on low-temperature graphene growth, hot-filament chemical vapor deposition (HF-CVD) was utilized to synthesize graphene film using a 200 nm thick nickel film. Raman spectroscopy was utilized to examine graphene formation on the bottom side of the Ni film

  13. Present Status and Future Growth of Advanced Maintenance Technology and Strategy in US Manufacturing

    PubMed Central

    Jin, Xiaoning; Weiss, Brian A.; Siegel, David; Lee, Jay

    2016-01-01

    The goals of this paper are to 1) examine the current practices of diagnostics, prognostics, and maintenance employed by United States (U.S.) manufacturers to achieve productivity and quality targets and 2) to understand the present level of maintenance technologies and strategies that are being incorporated into these practices. A study is performed to contrast the impact of various industry-specific factors on the effectiveness and profitability of the implementation of prognostics and health management technologies, and maintenance strategies using both surveys and case studies on a sample of U.S. manufacturing firms ranging from small to mid-sized enterprises (SMEs) to large-sized manufacturing enterprises in various industries. The results obtained provide important insights on the different impacts of specific factors on the successful adoption of these technologies between SMEs and large manufacturing enterprises. The varying degrees of success with respect to current maintenance programs highlight the opportunity for larger manufacturers to improve maintenance practices and consider the use of advanced prognostics and health management (PHM) technology. This paper also provides the existing gaps, barriers, future trends, and roadmaps for manufacturing PHM technology and maintenance strategy. PMID:28058173

  14. Present Status and Future Growth of Advanced Maintenance Technology and Strategy in US Manufacturing.

    PubMed

    Jin, Xiaoning; Weiss, Brian A; Siegel, David; Lee, Jay

    2016-01-01

    The goals of this paper are to 1) examine the current practices of diagnostics, prognostics, and maintenance employed by United States (U.S.) manufacturers to achieve productivity and quality targets and 2) to understand the present level of maintenance technologies and strategies that are being incorporated into these practices. A study is performed to contrast the impact of various industry-specific factors on the effectiveness and profitability of the implementation of prognostics and health management technologies, and maintenance strategies using both surveys and case studies on a sample of U.S. manufacturing firms ranging from small to mid-sized enterprises (SMEs) to large-sized manufacturing enterprises in various industries. The results obtained provide important insights on the different impacts of specific factors on the successful adoption of these technologies between SMEs and large manufacturing enterprises. The varying degrees of success with respect to current maintenance programs highlight the opportunity for larger manufacturers to improve maintenance practices and consider the use of advanced prognostics and health management (PHM) technology. This paper also provides the existing gaps, barriers, future trends, and roadmaps for manufacturing PHM technology and maintenance strategy.

  15. Design advanced for large-scale, economic, floating LNG plant

    SciTech Connect

    Naklie, M.M.

    1997-06-30

    A floating LNG plant design has been developed which is technically feasible, economical, safe, and reliable. This technology will allow monetization of small marginal fields and improve the economics of large fields. Mobil's world-scale plant design has a capacity of 6 million tons/year of LNG and up to 55,000 b/d condensate produced from 1 bcfd of feed gas. The plant would be located on a large, secure, concrete barge with a central moonpool. LNG storage is provided for 250,000 cu m and condensate storage for 650,000 bbl. Both products are off-loaded from the barge. Model tests have verified the stability of the barge structure: barge motions are low enough to permit the plant to continue operation in a 100-year storm in the Pacific Rim. Moreover, the barge is spread-moored, eliminating the need for a turret and swivel. Because the design is generic, the plant can process a wide variety of feed gases and operate in different environments, should the plant be relocated. This capability potentially gives the plant investment a much longer project life because its use is not limited to the life of only one producing area.

  16. Advanced I/O for large-scale scientific applications.

    SciTech Connect

    Klasky, Scott; Schwan, Karsten; Oldfield, Ron A.; Lofstead, Gerald F., II

    2010-01-01

    As scientific simulations scale to use petascale machines and beyond, the data volumes generated pose a dual problem. First, with increasing machine sizes, the careful tuning of IO routines becomes more and more important to keep the time spent in IO acceptable. It is not uncommon, for instance, to have 20% of an application's runtime spent performing IO in a 'tuned' system. Careful management of the IO routines can move that to 5% or even less in some cases. Second, the data volumes are so large, on the order of 10s to 100s of TB, that trying to discover the scientifically valid contributions requires assistance at runtime to both organize and annotate the data. Waiting for offline processing is not feasible due both to the impact on the IO system and the time required. To reduce this load and improve the ability of scientists to use the large amounts of data being produced, new techniques for data management are required. First, there is a need for techniques for efficient movement of data from the compute space to storage. These techniques should understand the underlying system infrastructure and adapt to changing system conditions. Technologies include aggregation networks, data staging nodes for closer parity with the IO subsystem, and autonomic IO routines that can detect system bottlenecks and choose different approaches, such as splitting the output into multiple targets or staggering output processes. Such methods must be end-to-end, meaning that even with properly managed asynchronous techniques, it is still essential to properly manage the later synchronous interaction with the storage system to maintain acceptable performance. Second, for the data being generated, annotations and other metadata must be incorporated to help the scientist understand the output data for the simulation run as a whole, and to select data and data features without concern for what files or other storage technologies were employed. All of these features should be attained while maintaining a simple deployment for the science code and eliminating the need for allocation of additional computational resources.
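
    The generic pattern behind such staging techniques is double-buffered asynchronous output: the simulation keeps computing while a background writer drains completed buffers. A toy sketch of that pattern, not the report's staging code:

        # Double-buffered asynchronous output with a bounded hand-off queue.
        import queue
        import threading

        import numpy as np

        outq = queue.Queue(maxsize=2)       # bounded queue provides back-pressure

        def writer(path):
            with open(path, "wb") as f:
                while True:
                    buf = outq.get()
                    if buf is None:         # sentinel: simulation finished
                        return
                    f.write(buf.tobytes())  # synchronous interaction with storage

        t = threading.Thread(target=writer, args=("checkpoint.bin",))
        t.start()

        state = np.zeros(100_000)
        for step in range(10):
            state = state + 1.0             # stand-in for a timestep of compute
            outq.put(state.copy())          # hand off a snapshot; compute continues

        outq.put(None)
        t.join()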

  17. Large-Scale Spacecraft Fire Safety Tests

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; Toth, Balazs; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Jomaas, Grunde

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests

  18. Large Scale CW ECRH Systems: Some considerations

    NASA Astrophysics Data System (ADS)

    Erckmann, V.; Kasparek, W.; Plaum, B.; Lechte, C.; Petelin, M. I.; Braune, H.; Gantenbein, G.; Laqua, H. P.; Lubiako, L.; Marushchenko, N. B.; Michel, G.; Turkin, Y.; Weissgerber, M.

    2012-09-01

    Electron Cyclotron Resonance Heating (ECRH) is a key component in the heating arsenal for next-step fusion devices like W7-X and ITER. These devices are equipped with superconducting coils and are designed to operate steady state. ECRH must thus operate in CW mode with large flexibility to comply with various physics demands such as plasma start-up, heating and current drive, as well as configuration and MHD control. The demand for many different sophisticated applications results in a growing complexity, which is in conflict with the demand for high availability, reliability, and maintainability. 'Advanced' ECRH systems must, therefore, comply with both the complex physics demands and operational robustness and reliability. The W7-X ECRH system is the first CW facility of an ITER-relevant size and is used as a test bed for advanced components. Proposals for future developments are presented together with improvements of gyrotrons, transmission components, and launchers.

  19. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2010-09-30

    advance anywhere on the globe. NAAPS and COAMPS are particularly useful for forecasts of dust storms in areas downwind of the large deserts of the world... dust source regions in NAAPS. The DSD has been crucial for high-resolution dust forecasting in SW Asia using COAMPS (Walker et al., 2009). [Figure 2: four-panel product used to compare multiple model forecasts of visibility in SW Asia dust storms.]

  20. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2007-09-30

    to six days in advance anywhere on the globe. NAAPS and COAMPS are particularly useful for forecasts of dust storms in areas downwind of the large... in FY08. NAAPS forecasts of CONUS dust storms and long-range dust transport to CONUS were further evaluated in collaboration with CSU. These... visibility. The regional model (COAMPS/Aerosol) became operational during OIF. The global model Navy Aerosol Analysis and Prediction System (NAAPS

  1. Population generation for large-scale simulation

    NASA Astrophysics Data System (ADS)

    Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul

    2005-05-01

    Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat [1-3]. Multi-agent simulations in particular are now commonplace in many fields [4, 5]. By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects where closed form solutions are difficult or impossible to derive [6]. To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application [7, 8]. Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large scale social simulations [9-11]. One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more
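
    The two modeling layers described above, attribute distributions per agent and a relationship graph, are simple to prototype. A toy sketch generating a synthetic population with a small-world social network; the attribute names, distributions, and proportions are illustrative, and this is not the Hats Simulator itself:

        # Synthetic agent population: independent attribute draws plus a
        # Watts-Strogatz small-world graph as a stand-in for the social network.
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(0)
        N = 1000

        agents = {
            "height_cm": rng.normal(170, 10, N),
            "morale": rng.beta(2, 2, N),
            "type": rng.choice(["benign", "known", "covert"], N, p=[0.95, 0.01, 0.04]),
        }

        society = nx.watts_strogatz_graph(N, k=6, p=0.1, seed=0)
        print(nx.average_clustering(society), nx.number_of_edges(society))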

  2. Large-scale assembly of colloidal particles

    NASA Astrophysics Data System (ADS)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear-align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes the invention of large area and low cost color reflective displays, inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to the brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  3. Timing signatures of large scale solar eruptions

    NASA Astrophysics Data System (ADS)

    Balasubramaniam, K. S.; Hock-Mysliwiec, Rachel; Henry, Timothy; Kirk, Michael S.

    2016-05-01

    We examine the timing signatures of large solar eruptions resulting in flares, CMEs, and solar energetic particle events. We probe solar active regions from the chromosphere through the corona, using data from space- and ground-based observations, including ISOON, SDO, GONG, and GOES. Our studies include a number of flares and CMEs, mostly of M- and X-class strength as categorized by GOES. We find that the chromospheric signatures of these large eruptions occur 5-30 minutes in advance of coronal high-temperature signatures. These timing measurements are then used as inputs to models to reconstruct the eruptive nature of these systems and to explore their utility in forecasts.

  4. State of the Art in Large-Scale Soil Moisture Monitoring

    NASA Technical Reports Server (NTRS)

    Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.; Zreda, Marek G.

    2013-01-01

    Soil moisture is an essential climate variable influencing land atmosphere interactions, an essential hydrologic variable impacting rainfall runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.

  5. Large scale, urban decontamination; developments, historical examples and lessons learned

    SciTech Connect

    Demmer, R.L.

    2007-07-01

    Recent terrorist threats and actions have led to a renewed interest in the technical field of large scale, urban environment decontamination. One of the driving forces for this interest is the prospect of the cleanup and removal of radioactive dispersal device (RDD or 'dirty bomb') residues. In response, the United States Government has spent many millions of dollars investigating RDD contamination and novel decontamination methodologies. The efficiency of RDD cleanup response will be improved with these new developments and a better understanding of the 'old reliable' methodologies. While an RDD is primarily an economic and psychological weapon, the need to clean up and return valuable or culturally significant resources to the public is nonetheless valid. Several private companies, universities, and National Laboratories are currently developing novel RDD cleanup technologies. Because of its longstanding association with radioactive facilities, the U.S. Department of Energy National Laboratories are at the forefront in developing and testing new RDD decontamination methods. However, such cleanup technologies are likely to be fairly task specific, while many different contamination mechanisms, substrates, and environmental conditions will make actual application more complicated. Some major efforts have also been made to model potential contamination, to evaluate both old and new decontamination techniques, and to assess their readiness for use. There are a number of significant lessons that can be gained from a look at previous large scale cleanup projects. Too often we are quick to apply a costly 'package and dispose' method when sound technological cleaning approaches are available. Understanding historical perspectives, advanced planning, and constant technology improvement are essential to successful decontamination. (authors)

  6. Large-scale quantum photonic circuits in silicon

    NASA Astrophysics Data System (ADS)

    Harris, Nicholas C.; Bunandar, Darius; Pant, Mihir; Steinbrecher, Greg R.; Mower, Jacob; Prabhu, Mihika; Baehr-Jones, Tom; Hochberg, Michael; Englund, Dirk

    2016-08-01

    Quantum information science offers inherently more powerful methods for communication, computation, and precision measurement that take advantage of quantum superposition and entanglement. In recent years, theoretical and experimental advances in quantum computing and simulation with photons have spurred great interest in developing large photonic entangled states that challenge today's classical computers. As experiments have increased in complexity, there has been an increasing need to transition bulk optics experiments to integrated photonics platforms to control more spatial modes with higher fidelity and phase stability. The silicon-on-insulator (SOI) nanophotonics platform offers new possibilities for quantum optics, including the integration of bright, nonclassical light sources, based on the large third-order nonlinearity (χ(3)) of silicon, alongside quantum state manipulation circuits with thousands of optical elements, all on a single phase-stable chip. How large do these photonic systems need to be? Recent theoretical work on Boson Sampling suggests that even the problem of sampling from ~30 identical photons, having passed through an interferometer of hundreds of modes, becomes challenging for classical computers. While experiments of this size are still challenging, the SOI platform has the required component density to enable low-loss and programmable interferometers for manipulating hundreds of spatial modes. Here, we discuss the SOI nanophotonics platform for quantum photonic circuits with hundreds-to-thousands of optical elements and the associated challenges. We compare SOI to competing technologies in terms of requirements for quantum optical systems. We review recent results on large-scale quantum state evolution circuits and strategies for realizing high-fidelity heralded gates with imperfect, practical systems. Next, we review recent results on silicon photonics-based photon-pair sources and device architectures, and we discuss a path towards
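
    The classical hardness referred to above stems from the fact that Boson Sampling output probabilities are squared moduli of matrix permanents, for which the best known exact algorithms scale exponentially. A generic sketch of Ryser's formula, not code from the paper:

        # Ryser's formula for the matrix permanent; this naive version costs
        # roughly O(2^n * n^2), illustrating why ~30 photons strain classical
        # computers. Generic textbook algorithm, not the paper's method.
        from itertools import combinations

        def permanent(A):
            n = len(A)
            total = 0.0
            for r in range(1, n + 1):
                for cols in combinations(range(n), r):
                    prod = 1.0
                    for row in A:
                        prod *= sum(row[c] for c in cols)
                    total += (-1) ** r * prod
            return (-1) ** n * total

        print(permanent([[1, 2], [3, 4]]))   # 1*4 + 2*3 = 10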

  7. Multitree Algorithms for Large-Scale Astrostatistics

    NASA Astrophysics Data System (ADS)

    March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.

    2012-03-01

    this number every week, resulting in billions of objects. At such scales, even linear-time analysis operations present challenges, particularly since statistical analyses are inherently interactive processes, requiring that computations complete within some reasonable human attention span. The quadratic (or worse) runtimes of straightforward implementations become quickly unbearable. Examples of applications. These analysis subroutines occur ubiquitously in astrostatistical work. We list just a few examples. The need to cross-match objects across different catalogs has led to various algorithms, which at some point perform an AllNN computation. 2-point and higher-order spatial correlations form the basis of spatial statistics, and are utilized in astronomy to compare the spatial structures of two datasets, such as an observed sample and a theoretical sample, forming the basis for two-sample hypothesis testing. Friends-of-friends clustering is often used to identify halos in data from astrophysical simulations. Minimum spanning tree properties have also been proposed as statistics of large-scale structure. Comparison of the distributions of different kinds of objects requires accurate density estimation, for which KDE is the overall statistical method of choice. The prediction of redshifts from optical data requires accurate regression, for which kernel regression is a powerful method. The identification of objects of various types in astronomy, such as stars versus galaxies, requires accurate classification, for which KDA is a powerful method. Overview. In this chapter, we will briefly sketch the main ideas behind recent fast algorithms which achieve, for example, linear runtimes for pairwise-distance problems, or similarly dramatic reductions in computational growth. In some cases, the runtime orders for these algorithms are mathematically provable statements, while in others we have only conjectures backed by experimental observations for the time being
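
    The AllNN computation mentioned above is the simplest member of this class of tree-accelerated pairwise problems. A sketch using SciPy's kd-tree on synthetic positions, as a stand-in for the multitree methods surveyed:

        # All-nearest-neighbors with a kd-tree: near-linear scaling in practice
        # versus the quadratic brute-force scan. Data are synthetic.
        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(42)
        catalog = rng.uniform(0, 100, size=(100_000, 3))  # mock 3D object positions

        tree = cKDTree(catalog)
        # k=2 because each point's nearest neighbor in its own catalog is itself.
        dist, idx = tree.query(catalog, k=2)
        nn_dist, nn_idx = dist[:, 1], idx[:, 1]
        print("median nearest-neighbor separation:", np.median(nn_dist))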

  8. Bioinspired large-scale aligned porous materials assembled with dual temperature gradients

    PubMed Central

    Bai, Hao; Chen, Yuan; Delattre, Benjamin; Tomsia, Antoni P.; Ritchie, Robert O.

    2015-01-01

    Natural materials, such as bone, teeth, shells, and wood, exhibit outstanding properties despite being porous and made of weak constituents. Frequently, they represent a source of inspiration to design strong, tough, and lightweight materials. Although many techniques have been introduced to create such structures, a long-range order of the porosity as well as a precise control of the final architecture remain difficult to achieve. These limitations severely hinder the scale-up fabrication of layered structures aimed for larger applications. We report on a bidirectional freezing technique to successfully assemble ceramic particles into scaffolds with large-scale aligned, lamellar, porous, nacre-like structure and long-range order at the centimeter scale. This is achieved by modifying the cold finger with a polydimethylsiloxane (PDMS) wedge to control the nucleation and growth of ice crystals under dual temperature gradients. Our approach could provide an effective way of manufacturing novel bioinspired structural materials, in particular advanced materials such as composites, where a higher level of control over the structure is required. PMID:26824062

  9. Bio-Inspired Wooden Actuators for Large Scale Applications

    PubMed Central

    Rüggeberg, Markus; Burgert, Ingo

    2015-01-01

    Implementing programmable actuation into materials and structures is a major topic in the field of smart materials. In particular the bilayer principle has been employed to develop actuators that respond to various kinds of stimuli. A multitude of small scale applications down to micrometer size have been developed, but up-scaling remains challenging due to either limitations in mechanical stiffness of the material or in the manufacturing processes. Here, we demonstrate the actuation of wooden bilayers in response to changes in relative humidity, making use of the high material stiffness and a good machinability to reach large scale actuation and application. Amplitude and response time of the actuation were measured and can be predicted and controlled by adapting the geometry and the constitution of the bilayers. Field tests in full weathering conditions revealed long-term stability of the actuation. The potential of the concept is shown by a first demonstrator. With the sensor and actuator intrinsically incorporated in the wooden bilayers, the daily change in relative humidity is exploited for an autonomous and solar powered movement of a tracker for solar modules. PMID:25835386
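
    The bilayer principle exploited here can be quantified with Timoshenko's classical bimetal-strip formula, with moisture-induced swelling supplying the differential strain in place of thermal expansion. A sketch with illustrative parameter values, not the paper's measurements:

        # Timoshenko bilayer curvature, with differential swelling strain d_strain.
        # All numeric values below are illustrative assumptions.
        def bilayer_curvature(t1, t2, E1, E2, d_strain):
            """Curvature (1/m) of a bilayer with differential free strain d_strain."""
            m = t1 / t2     # thickness ratio
            n = E1 / E2     # stiffness ratio
            h = t1 + t2
            return (6 * d_strain * (1 + m) ** 2 /
                    (h * (3 * (1 + m) ** 2 + (1 + m * n) * (m ** 2 + 1 / (m * n)))))

        # 5 mm active layer swelling 0.5% against a 5 mm passive layer of similar wood
        kappa = bilayer_curvature(t1=0.005, t2=0.005, E1=10e9, E2=10e9, d_strain=0.005)
        print(f"curvature: {kappa:.2f} 1/m -> radius {1 / kappa:.2f} m")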

  10. Bio-inspired wooden actuators for large scale applications.

    PubMed

    Rüggeberg, Markus; Burgert, Ingo

    2015-01-01

    Implementing programmable actuation into materials and structures is a major topic in the field of smart materials. In particular the bilayer principle has been employed to develop actuators that respond to various kinds of stimuli. A multitude of small scale applications down to micrometer size have been developed, but up-scaling remains challenging due to either limitations in mechanical stiffness of the material or in the manufacturing processes. Here, we demonstrate the actuation of wooden bilayers in response to changes in relative humidity, making use of the high material stiffness and a good machinability to reach large scale actuation and application. Amplitude and response time of the actuation were measured and can be predicted and controlled by adapting the geometry and the constitution of the bilayers. Field tests in full weathering conditions revealed long-term stability of the actuation. The potential of the concept is shown by a first demonstrator. With the sensor and actuator intrinsically incorporated in the wooden bilayers, the daily change in relative humidity is exploited for an autonomous and solar powered movement of a tracker for solar modules.

  11. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations

    PubMed Central

    Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali

    2015-01-01

    Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts. PMID:25993414

  12. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations.

    PubMed

    Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali

    2015-01-01

    Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts.

  13. The advanced manufacturing science and technology program. FY 95 Annual Report

    SciTech Connect

    Hill, J.

    1996-03-01

    This is the Fiscal Year 1995 Annual Report for the Advanced Manufacturing Science and Technology (AMST) sector of Los Alamos Tactical Goal 6, Industrial Partnering. During this past fiscal year, the AMST project leader formed a committee whose members represented the divisions and program offices with a manufacturing interest to examine the Laboratory's expertise and needs in manufacturing. From a list of about two hundred interest areas, the committee selected nineteen of the most pressing needs for weapon manufacturing. Based upon Los Alamos mission requirements and the needs of the weapon manufacturing (Advanced Design and Production Technologies (ADaPT)) program plan and the other tactical goals, the committee selected four of the nineteen areas for strategic planning and possible industrial partnering. The areas selected were Casting Technology, Constitutive Modeling, Non-Destructive Testing and Evaluation, and Polymer Aging and Lifetime Prediction. For each area, the AMST committee formed a team to write a roadmap and serve as a partnering technical consultant. To date, the roadmaps have been completed for each of the four areas. The Casting Technology and Polymer Aging teams are negotiating with specific potential partners now, at the close of the fiscal year. For each focus area we have created a list of existing collaborations and other ongoing partnering activities. In early Fiscal Year 1996, we will continue to develop partnerships in these four areas. Los Alamos National Laboratory instituted the tactical goals for industrial partnering to focus our institutional resources on partnerships that enhance core competencies and capabilities required to meet our national security mission of reducing the nuclear danger. The second industry sector targeted by Tactical Goal 6 was the chemical industry. Tactical Goal 6 is championed by the Industrial Partnership Office.

  14. Global Wildfire Forecasts Using Large Scale Climate Indices

    NASA Astrophysics Data System (ADS)

    Shen, Huizhong; Tao, Shu

    2016-04-01

    Using weather readings, fire early warning systems can provide forecasts 4-6 hours in advance to minimize fire losses. The benefit would be dramatically enhanced if relatively accurate long-term projections could also be provided. Here we present a novel method for predicting global fire season severity (FSS) at least three months in advance using multiple large-scale climate indices (CIs). The predictive ability is proven effective for various geographic locations and resolutions. Globally, as well as on most continents, the El Niño Southern Oscillation (ENSO) is the dominant driving force controlling interannual FSS variability, whereas other CIs also play indispensable roles. We found that a moderate El Niño event is responsible for 465 (272-658, interquartile range) Tg of carbon release and an annual increase of 29,500 (24,500-34,800) deaths from inhalation exposure to air pollutants. Southeast Asia accounts for half of the deaths. Both the intercorrelation and the interaction of WPs and CIs are revealed, suggesting possible climate-induced modification of fire responses to weather conditions. Our models can benefit fire management in response to climate change.
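
    The core of such a forecast is a lagged statistical model: fire season severity regressed on climate indices observed months earlier. A toy sketch with synthetic data; the index names and coefficients are illustrative, and the paper's indices and fire records are not used:

        # Lagged regression of fire season severity (FSS) on climate indices.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        years = 30
        nino34 = rng.normal(0, 1, years)    # mock ENSO index, three-month lead
        nao = rng.normal(0, 1, years)       # mock North Atlantic Oscillation index
        fss = 2.0 * nino34 + 0.5 * nao + rng.normal(0, 0.5, years)  # synthetic truth

        X = np.column_stack([nino34, nao])
        model = LinearRegression().fit(X[:-5], fss[:-5])   # train on early years
        print("R^2 on held-out years:", model.score(X[-5:], fss[-5:]))
        print("coefficients:", model.coef_)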

  15. Large Scale Geologic Controls on Hydraulic Stimulation

    NASA Astrophysics Data System (ADS)

    McLennan, J. D.; Bhide, R.

    2014-12-01

    When simulating hydraulic fracturing, the analyst has historically prescribed a single planar fracture. Originally (in the 1950s through the 1970s) this was necessitated by computational restrictions. In the latter part of the twentieth century, hydraulic fracture simulation evolved to incorporate vertical propagation controlled by modulus, fluid loss, and the minimum principal stress. With improvements in software, computational capacity, and recognition that in-situ discontinuities are relevant, fully three-dimensional hydraulic fracture simulation is now becoming possible. Advances in simulation capabilities enable coupling structural geologic data (three-dimensional representations of stresses, natural fractures, and stratigraphy) with decision-making processes for stimulation - volumes, rates, fluid types, completion zones. Without this interaction between simulation capabilities and geological information, low-permeability formation exploitation may linger on the fringes of real economic viability. Comparative simulations have been undertaken in varying structural environments where the stress contrast and the frequency of natural discontinuities cause varying patterns of multiple, hydraulically generated or reactivated flow paths. Stress conditions and the nature of the discontinuities are selected as variables and are used to simulate how fracturing can vary in different structural regimes. The basis of the simulations is commercial distinct element software (Itasca Corporation's 3DEC).

  16. Nonlinear large-scale optimization with WORHP

    NASA Astrophysics Data System (ADS)

    Nikolayzik, Tim; Büskens, Christof; Gerdts, Matthias

    Nonlinear optimization has grown into a key technology in many areas of the aerospace industry, e.g. satellite control, shape optimization, aerodynamics, trajectory planning, reentry problems, and interplanetary flights. One of the most extensive areas is the optimization of trajectories for aerospace applications. These problems are typically discretized optimal control problems, which lead to large sparse nonlinear optimization problems. In the end, all these different problems from different areas can be described in one general formulation as a nonlinear optimization problem. WORHP is designed to solve nonlinear optimization problems with more than one million variables and one million constraints. WORHP uses many different advanced techniques, e.g. reverse communication, to make the optimization process as efficient and as controllable by the user as possible. The solver has nine different interfaces, e.g. to MATLAB/SIMULINK and AMPL. Tests have shown that WORHP is a very robust and promising solver. Several examples from space applications will be presented.
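
    WORHP's own interfaces are not shown here; as a minimal sketch of the problem class it targets (a nonlinear objective with nonlinear inequality constraints and bounds), the following uses SciPy's SLSQP solver as a stand-in:

        import numpy as np
        from scipy.optimize import minimize

        def f(x):
            # Nonlinear objective: distance from the point (1, 2.5).
            return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

        constraints = [
            {"type": "ineq", "fun": lambda x: x[0] - 2 * x[1] + 2},    # g1(x) >= 0
            {"type": "ineq", "fun": lambda x: -x[0] - 2 * x[1] + 6},   # g2(x) >= 0
        ]
        bounds = [(0, None), (0, None)]   # simple variable bounds

        result = minimize(f, x0=np.array([2.0, 0.0]), method="SLSQP",
                          bounds=bounds, constraints=constraints)
        print(result.x, result.fun)

    Trajectory optimization differs mainly in scale: discretizing an optimal control problem produces the same structure with millions of variables and sparse constraint Jacobians, which is the regime solvers like WORHP are built for.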

  17. Advanced manufacturing rules check (MRC) for fully automated assessment of complex reticle designs: Part II

    NASA Astrophysics Data System (ADS)

    Straub, J. A.; Aguilar, D.; Buck, P. D.; Dawkins, D.; Gladhill, R.; Nolke, S.; Riddick, J.

    2006-10-01

    Advanced electronic design automation (EDA) tools, with their simulation, modeling, design rule checking, and optical proximity correction capabilities, have facilitated the improvement of first pass wafer yields. While the data produced by these tools may have been processed for optimal wafer manufacturing, it is possible for the same data to be far from ideal for photomask manufacturing, particularly at lithography and inspection stages, resulting in production delays and increased costs. The same EDA tools used to produce the data can be used to detect potential problems for photomask manufacturing in the data. In the previous paper, it was shown how photomask MRC is used to uncover data related problems prior to automated defect inspection. It was demonstrated how jobs which are likely to have problems at inspection could be identified and separated from those which are not. The use of photomask MRC in production was shown to reduce time lost to aborted runs and troubleshooting due to data issues. In this paper, the effectiveness of this photomask MRC program in a high volume photomask factory over the course of a year as applied to more than ten thousand jobs will be shown. Statistics on the results of the MRC runs will be presented along with the associated impact to the automated defect inspection process. Common design problems will be shown as well as their impact to mask manufacturing throughput and productivity. Finally, solutions to the most common and most severe problems will be offered and discussed.

  18. New Paradigms in International University/Industry/Government Cooperation. Canada-China Collaboration in Advanced Manufacturing Technologies.

    ERIC Educational Resources Information Center

    Bulgak, Akif Asil; Liquan, He

    1996-01-01

    A Chinese university and a Canadian university collaborated on an advanced manufacturing technologies project designed to address human resource development needs in China. The project featured university/industry/government partnership and attention to environmental issues. (SK)

  19. Analysis of labor productivity using large-scale data of firm's financial statements

    NASA Astrophysics Data System (ADS)

    Ikeda, Y.; Souma, W.; Aoyama, H.; Fujiwara, Y.; Iyetomi, H.

    2010-08-01

    We investigated the labor productivity distribution by analyzing large-scale financial statement data covering listed and unlisted Japanese firms to clarify the characteristics of the Japanese labor market. Both the high- and low-productivity sides of the labor productivity distribution follow power-law distributions. Large inequality on the low-productivity side was observed only for the manufacturing sectors in Japan fiscal year (JFY) 1999, and for both the manufacturing and non-manufacturing sectors in JFY 2002. The declines in Japanese GDP in JFY 1999 and JFY 2002 coincided with the large inequality on the low-productivity side of the distribution. A lower peak was found for all non-manufacturing sectors. This might be the origin of the low productivity of the non-manufacturing sectors reported in recent economic studies.
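
    As an illustration of fitting a power-law tail of the kind reported above, a maximum-likelihood (Hill-type) estimate can be applied to the high-productivity side; the sample here is synthetic, and `xmin`, the assumed start of the tail, is a placeholder choice:

        import numpy as np

        rng = np.random.default_rng(1)
        # Synthetic productivity sample with a Pareto (power-law) tail.
        productivity = rng.pareto(a=2.0, size=50_000) + 1.0

        xmin = 5.0
        tail = productivity[productivity >= xmin]
        # Hill estimator of the density exponent alpha for x >= xmin.
        alpha_hat = 1.0 + len(tail) / np.sum(np.log(tail / xmin))
        print(f"estimated tail exponent: {alpha_hat:.2f} (n_tail = {len(tail)})")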

  20. Advanced manufacturing technology effectiveness: A review of literature and some issues

    NASA Astrophysics Data System (ADS)

    Goyal, Sanjeev; Grover, Sandeep

    2012-09-01

    Advanced manufacturing technology (AMT) provides advantages to manufacturing managers in terms of flexibility, quality, reduced delivery times, and global competitiveness. Although a large number of publications have presented the importance of this technology, only a few have delved into a review of the related literature. Considering the importance of this technology and the recent contributions by various authors, the present paper conducts a more comprehensive review. Literature was reviewed in a way that will help researchers, academicians, and practitioners take a closer look at the implementation, evaluation, and justification of AMT. The authors reviewed various papers, proposed a different classification scheme, and identified certain gaps that will provide hints for further research in AMT management.

  1. An experiment in remote manufacturing using the advanced communications technology satellite

    NASA Technical Reports Server (NTRS)

    Tsatsoulis, Costas; Frost, Victor

    1991-01-01

    The goal of the completed project was to develop an experiment in remote manufacturing that would use the capabilities of the ACTS satellite. A set of possible experiments that could be performed using the Advanced Communications Technology Satellite (ACTS), and which would perform remote manufacturing using a laser cutter and an integrated circuit testing machine are described in detail. The proposed design is shown to be a feasible solution to the offered problem and it takes into consideration the constraints that were placed on the experiment. In addition, we have developed two more experiments that are included in this report: backup of rural telecommunication networks, and remote use of Synthetic Aperture Radar (SAR) data analysis for on-site collection of glacier scattering data in the Antarctic.

  2. Information Tailoring Enhancements for Large-Scale Social Data

    DTIC Science & Technology

    2016-06-15

    Intelligent Automation Incorporated, Progress Report No. 3: Information Tailoring Enhancements for Large-Scale Social Data. Submitted in accordance with... The system also gathers information about entities from all news articles and displays it on over one million entity pages [5][6], and the information is made

  3. Large scale remote sensing for environmental monitoring of infrastructure.

    PubMed

    Whelan, Matthew J; Fuchs, Michael P; Janoyan, Kerop D

    2008-07-01

    Recent developments in wireless sensor technology afford the opportunity to rapidly and easily deploy large-scale, low-cost, and low-power sensor networks across relatively sizeable environmental regions. Furthermore, the advancement of increasingly smaller and less expensive wireless hardware is further complemented by the rapid development of open-source software components. These software protocols allow for interfacing with the hardware to program and configure the onboard processing and communication settings. In general, a wireless sensor network topology consists of an array of microprocessor boards, referred to as motes, which can engage in two-way communication among each other as well as with a base station that relays the mote data to a host computer. The information can then be either logged and displayed on the local host or directed to an http server for network monitoring remote from the site. A number of wireless sensor products are available that offer off-the-shelf network hardware as well as sensor solutions for environmental monitoring that are compatible with the TinyOS open-source software platform. This paper presents an introduction to wireless sensing and to the use of external antennas for increasing the antenna radiation intensity and shaping signal directivity for monitoring applications requiring larger mote-to-mote communication distances.
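
    A back-of-the-envelope link budget shows why higher-gain external antennas extend mote-to-mote range; this is the standard Friis free-space model with hypothetical radio parameters, not measurements from the paper:

        import math

        f_hz = 2.4e9                 # carrier frequency (e.g., 802.15.4 band)
        lam = 3.0e8 / f_hz           # wavelength in metres

        p_tx_dbm = 0.0               # transmit power (hypothetical)
        g_tx_dbi = 5.0               # external antenna gain, transmitter
        g_rx_dbi = 5.0               # external antenna gain, receiver
        sensitivity_dbm = -95.0      # receiver sensitivity (hypothetical)

        def friis_rx_dbm(d_m):
            """Received power in dBm at distance d_m, free space."""
            path_loss_db = 20 * math.log10(4 * math.pi * d_m / lam)
            return p_tx_dbm + g_tx_dbi + g_rx_dbi - path_loss_db

        # Maximum free-space range: solve friis_rx_dbm(d) = sensitivity.
        link_budget_db = p_tx_dbm + g_tx_dbi + g_rx_dbi - sensitivity_dbm
        d_max = lam / (4 * math.pi) * 10 ** (link_budget_db / 20)
        print(f"approximate free-space range: {d_max:.0f} m")
        print(f"received power at that range: {friis_rx_dbm(d_max):.1f} dBm")

    Every extra 6 dB of combined antenna gain doubles the free-space range, which is the design lever the paper exploits.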

  4. Planck intermediate results. XLII. Large-scale Galactic magnetic fields

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Adam, R.; Ade, P. A. R.; Alves, M. I. R.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Chiang, H. C.; Christensen, P. R.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dolag, K.; Doré, O.; Ducout, A.; Dupac, X.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Ferrière, K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Galeotta, S.; Ganga, K.; Ghosh, T.; Giard, M.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Harrison, D. L.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hobson, M.; Hornstrup, A.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leahy, J. P.; Leonardi, R.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; Melchiorri, A.; Mennella, A.; Migliaccio, M.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Nørgaard-Nielsen, H. U.; Oppermann, N.; Orlando, E.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Pasian, F.; Perotto, L.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Pratt, G. W.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Scott, D.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Strong, A. W.; Sudiwala, R.; Sunyaev, R.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Valenziano, L.; Valiviita, J.; Van Tent, F.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-12-01

    Recent models for the large-scale Galactic magnetic fields in the literature have been largely constrained by synchrotron emission and Faraday rotation measures. We use three different but representative models to compare their predicted polarized synchrotron and dust emission with that measured by the Planck satellite. We first update these models to match the Planck synchrotron products using a common model for the cosmic-ray leptons. We discuss the impact on this analysis of the ongoing problems of component separation in the Planck microwave bands and of the uncertain cosmic-ray spectrum. In particular, the inferred degree of ordering in the magnetic fields is sensitive to these systematic uncertainties, and we further show the importance of considering the expected variations in the observables in addition to their mean morphology. We then compare the resulting simulated emission to the observed dust polarization and find that the dust predictions do not match the morphology in the Planck data but underpredict the dust polarization away from the plane. We modify one of the models to roughly match both observables at high latitudes by increasing the field ordering in the thin disc near the observer. Though this specific analysis is dependent on the component separation issues, we present the improved model as a proof of concept for how these studies can be advanced in future using complementary information from ongoing and planned observational projects.

  5. Process control of large-scale finite element simulation software

    SciTech Connect

    Spence, P.A.; Weingarten, L.I.; Schroder, K.; Tung, D.M.; Sheaffer, D.A.

    1996-02-01

    We have developed a methodology for coupling large-scale numerical codes with process control algorithms. Closed-loop simulations were demonstrated using the Sandia-developed finite element thermal code TACO and the commercially available finite element thermal-mechanical code ABAQUS. This new capability enables us to use computational simulations for designing and prototyping advanced process-control systems. By testing control algorithms on simulators before building and testing hardware, enormous time and cost savings can be realized. The need for a closed-loop simulation capability was demonstrated in a detailed design study of a rapid-thermal-processing reactor under development by CVC Products Inc. Using a thermal model of the RTP system as a surrogate for the actual hardware, we were able to generate response data needed for controller design. We then evaluated the performance of both the controller design and the hardware design by using the controller to drive the finite element model. The controlled simulations provided data on wafer temperature uniformity as a function of ramp rate, temperature sensor locations, and controller gain. This information, which is critical to reactor design, cannot be obtained from typical open-loop simulations.
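
    A minimal sketch of the closed-loop idea, with a toy first-order thermal model standing in for the finite element code and a PI control law driving it; all gains and plant constants are invented for illustration:

        dt, t_end = 0.1, 60.0
        tau, gain = 8.0, 2.0            # plant time constant and steady-state gain
        kp, ki = 1.5, 0.4               # PI controller gains (hypothetical)
        setpoint = 900.0                # target wafer temperature (arbitrary units)

        temp, integral = 300.0, 0.0
        for _ in range(int(t_end / dt)):
            error = setpoint - temp
            integral += error * dt
            heater_power = kp * error + ki * integral      # PI control law
            # Toy first-order plant: dT/dt = (gain * u - (T - 300)) / tau.
            temp += dt * (gain * heater_power - (temp - 300.0)) / tau

        print(f"final temperature: {temp:.1f} (setpoint {setpoint})")

    The same pattern scales up: replacing the one-line plant update with a call into a finite element thermal code turns the loop into the controller-design testbed described above.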

  6. Computational Methods and Challenges for Large-Scale Circuit Mapping

    PubMed Central

    Helmstaedter, Moritz; Mitra, Partha

    2012-01-01

    The connectivity architecture of neuronal circuits is essential to understand how brains work, yet our knowledge about the neuronal wiring diagrams remains limited and partial. Technical breakthroughs in labeling and imaging methods starting more than a century ago have advanced knowledge in the field. However, the volume of data associated with imaging a whole brain or a significant fraction thereof, with electron or light microscopy, has only recently become amenable to digital storage and analysis. A mouse brain imaged at light microscopic resolution is about a terabyte of data, and 1 mm3 of the brain at EM resolution is about half a petabyte. This has given rise to a new field of research, computational analysis of large scale neuroanatomical data sets, with goals that include reconstructions of the morphology of individual neurons as well as entire circuits. The problems encountered include large data management, segmentation and 3D reconstruction, computational geometry and workflow management allowing for hybrid approaches combining manual and algorithmic processing. Here we review this growing field of neuronal data analysis with emphasis on reconstructing neurons from EM data cubes. PMID:22221862

  7. ANALYSIS OF TURBULENT MIXING JETS IN LARGE SCALE TANK

    SciTech Connect

    Lee, S.; Dimenna, R.; Leishear, R.; Stefanko, D.

    2007-03-28

    Flow evolution models were developed to evaluate the performance of the new advanced design mixer pump for sludge mixing and removal operations with high-velocity liquid jets in one of the large-scale Savannah River Site waste tanks, Tank 18. This paper describes the computational model, the flow measurements used to provide validation data in the region far from the jet nozzle, the extension of the computational results to real tank conditions through the use of existing sludge suspension data, and finally, the sludge removal results from actual Tank 18 operations. A computational fluid dynamics approach was used to simulate the sludge removal operations. The models employed a three-dimensional representation of the tank with a two-equation turbulence model. Both the computational approach and the models were validated with onsite test data reported here and literature data. The model was then extended to actual conditions in Tank 18 through a velocity criterion to predict the ability of the new pump design to suspend settled sludge. A qualitative comparison with sludge removal operations in Tank 18 showed a reasonably good comparison with final results subject to significant uncertainties in actual sludge properties.
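
    As a rough illustration of a jet velocity criterion of the sort described (a swapped-in textbook approximation, not the Tank 18 analysis): in the far field, a round turbulent jet's centerline velocity decays roughly as u(x) = K * u0 * d0 / x, so requiring u(x) >= u_crit yields an effective cleaning radius. All constants below are placeholders:

        u0, d0 = 18.0, 0.1    # nozzle exit velocity (m/s) and diameter (m), hypothetical
        K = 6.0               # far-field decay constant, typical order for round jets
        u_crit = 0.5          # velocity assumed needed to suspend sludge (hypothetical)

        # Distance at which the centerline velocity falls to u_crit.
        cleaning_radius = K * u0 * d0 / u_crit
        print(f"effective cleaning radius ~ {cleaning_radius:.1f} m")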

  8. Food security through large scale investments in agriculture

    NASA Astrophysics Data System (ADS)

    Rulli, M.; D'Odorico, P.

    2013-12-01

    Most of the human appropriation of freshwater resources is for food production. There is some concern that in the near future the finite freshwater resources available on Earth might not be sufficient to meet the increasing human demand for agricultural products. In the late 1700s Malthus argued that in the long run humanity would not have enough resources to feed itself. Malthus' analysis, however, did not account for the emergence of technological innovations that could increase the rate of food production. Modern and contemporary history has seen at least three major technological advances that have increased humans' access to food, namely, the industrial revolution, the green revolution, and the intensification of global trade. Here we argue that a fourth revolution has just started to happen. It involves foreign direct investments in agriculture, which intensify the crop yields of potentially highly productive agricultural lands by introducing more modern technologies. The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of a lack of modern technology. It is expected that in the long run large-scale land acquisitions for commercial farming will bring the technology required to close the existing yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of verified land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with large scale land acquisitions. We

  9. Distribution probability of large-scale landslides in central Nepal

    NASA Astrophysics Data System (ADS)

    Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi

    2014-12-01

    Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related to such large-scale landslides are 1) difficulties in their identification and delineation, 2) their role as sources of small-scale failures, and 3) their reactivation. Only a few scientific publications have addressed large-scale landslides in Nepal. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters, and logistic regression, an equation for large-scale landslide distribution is also derived. The equation is validated by applying it to another area: the area under the receiver operating characteristic curve for the landslide distribution probability in the new area is 0.699, and the distribution probability value could explain > 65% of existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
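
    Schematically, the approach is a logistic regression of landslide presence on geomorphological/geological predictors, validated by the ROC area under the curve in a separate area. The sketch below uses scikit-learn with invented predictors and synthetic labels, not the paper's data:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(42)
        n = 2000
        slope = rng.uniform(5, 45, n)          # slope angle in degrees (hypothetical)
        relief_km = rng.uniform(0.1, 2.0, n)   # local relief in km (hypothetical)

        # Synthetic ground truth from an assumed logistic relationship.
        logit = -6.0 + 0.08 * slope + 2.0 * relief_km
        landslide = rng.random(n) < 1 / (1 + np.exp(-logit))

        X = np.column_stack([slope, relief_km])
        model = LogisticRegression().fit(X[:1500], landslide[:1500])
        # Validate on held-out "new area" points via ROC AUC.
        auc = roc_auc_score(landslide[1500:], model.predict_proba(X[1500:])[:, 1])
        print(f"validation AUC: {auc:.3f}")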

  10. Large Scale Chemical Cross-linking Mass Spectrometry Perspectives

    PubMed Central

    Zybailov, Boris L.; Glazko, Galina V.; Jaiswal, Mihir; Raney, Kevin D.

    2014-01-01

    The spectacular heterogeneity of a complex protein mixture from biological samples becomes even more difficult to tackle when one's attention is shifted towards different protein complex topologies, transient interactions, or localization of PPIs. Meticulous protein-by-protein affinity pull-downs and yeast two-hybrid screens are the two approaches currently used to decipher proteome-wide interaction networks. Another method is to employ chemical cross-linking, which gives not only the identities of interactors, but can also provide information on the sites of interactions and interaction interfaces. Despite significant advances in mass spectrometry instrumentation over the last decade, mapping protein-protein interactions (PPIs) using chemical cross-linking remains time consuming and requires substantial expertise, even in the simplest of systems. While robust methodologies and software exist for the analysis of binary PPIs and for single-protein structure refinement using cross-linking-derived constraints, undertaking a proteome-wide cross-linking study is highly complex. Difficulties include i) identifying cross-linkers of the right length and selectivity that can capture interactions of interest; ii) enrichment of the cross-linked species; and iii) identification and validation of the cross-linked peptides and cross-linked sites. In this review we examine the existing literature on large-scale protein cross-linking and discuss possible paths for improvement. We also discuss short-length cross-linkers of broad specificity such as formaldehyde and diazirine-based photo-cross-linkers. These cross-linkers could potentially capture many types of interactions, without a strict requirement for a particular amino acid to be present at a given protein-protein interface. How can these short-length, broad-specificity cross-linkers be applied to proteome-wide studies? We will suggest specific advances in methodology, instrumentation and software that are needed to

  11. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    SciTech Connect

    Gerber, Richard A.; Wasserman, Harvey J.

    2012-03-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources, NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE’s Office of Advanced Scientific Computing Research (ASCR) and DOE’s Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC’s continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called “case studies,” of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, “multi-core” environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings and all of the action items are aligned with NERSC strategic plans.

  12. Large-scale atmospheric state and cloud/precipitation characteristics during MC3E

    NASA Astrophysics Data System (ADS)

    Jensen, M. P.; Kollias, P.; Giangrande, S. E.; Collis, S. M.; Petersen, W. A.; Xie, S.; Zhang, Y.

    2012-12-01

    The Midlatitude Continental Convective Clouds Experiment (MC3E), a joint field campaign of the DOE Atmospheric Radiation Measurement (ARM) Climate Research Facility and NASA's Global Precipitation Measurement (GPM) Mission, took place at the ARM Southern Great Plains site between April 22 and June 6, 2011. The major objective of this campaign was the collection of a comprehensive data set for the study of a variety of convective cloud/storm conditions targeting processes important for the parameterization of convection in large-scale models and the retrieval of precipitation from space-borne sensors over land. This poster presents analysis of representations of the large-scale atmospheric state derived from the MC3E sounding network, including both integrated convective indices and large-scale forcing fields (e.g., vertical velocity and advective tendency) derived using an advanced objective analysis approach, in conjunction with cloud and precipitation observations from MC3E radar systems and in-situ measurements.

  13. Chronic, Wireless Recordings of Large Scale Brain Activity in Freely Moving Rhesus Monkeys

    PubMed Central

    Schwarz, David A.; Lebedev, Mikhail A.; Hanson, Timothy L.; Dimitrov, Dragan F.; Lehew, Gary; Meloy, Jim; Rajangam, Sankaranarayani; Subramanian, Vivek; Ifft, Peter J.; Li, Zheng; Ramakrishnan, Arjun; Tate, Andrew; Zhuang, Katie; Nicolelis, Miguel A.L.

    2014-01-01

    Advances in techniques for recording large-scale brain activity contribute to both the elucidation of neurophysiological principles and the development of brain-machine interfaces (BMIs). Here we describe a neurophysiological paradigm for performing tethered and wireless large-scale recordings based on movable volumetric three-dimensional (3D) multielectrode implants. This approach allowed us to isolate up to 1,800 units per animal and simultaneously record the extracellular activity of close to 500 cortical neurons, distributed across multiple cortical areas, in freely behaving rhesus monkeys. The method is expandable, in principle, to thousands of simultaneously recorded channels. It also allows increased recording longevity (5 consecutive years), and recording of a broad range of behaviors, e.g. social interactions, and BMI paradigms in freely moving primates. We propose that wireless large-scale recordings could have a profound impact on basic primate neurophysiology research, while providing a framework for the development and testing of clinically relevant neuroprostheses. PMID:24776634

  14. Advanced manufacturing development of a composite empennage component for L-1011 aircraft

    NASA Technical Reports Server (NTRS)

    Alva, T.; Henkel, J.; Johnson, R.; Carll, B.; Jackson, A.; Mosesian, B.; Brozovic, R.; Obrien, R.; Eudaily, R.

    1982-01-01

    This is the final report of technical work conducted during the fourth phase of a multiphase program having the objective of the design, development and flight evaluation of an advanced composite empennage component manufactured in a production environment at a cost competitive with that of its metal counterpart, and at a weight savings of at least 20 percent. The empennage component selected for this program is the vertical fin box of the L-1011 aircraft. The box structure extends from the fuselage production joint to the tip rib and includes front and rear spars. During Phase 4 of the program, production quality tooling was designed and manufactured to produce three sets of covers, ribs, spars, miscellaneous parts, and subassemblies to assemble three complete ACVF units. Recurring and nonrecurring cost data were compiled and documented in the updated producibility/design-to-cost plan. Nondestructive inspections, quality control tests, and quality acceptance tests were performed in accordance with the quality assurance plan and the structural integrity control plan. Records were maintained to provide traceability of material and parts throughout the manufacturing development phase. It was also determined that additional tooling would not be required to support the current and projected L-1011 production rate.

  15. Large-scale simulations of layered double hydroxide nanocomposite materials

    NASA Astrophysics Data System (ADS)

    Thyveetil, Mary-Ann

    Layered double hydroxides (LDHs) have the ability to intercalate a multitude of anionic species. Atomistic simulation techniques such as molecular dynamics have provided considerable insight into the behaviour of these materials. We review these techniques and recent algorithmic advances which considerably improve the performance of MD applications. In particular, we discuss how the advent of high performance computing and computational grids has allowed us to explore large scale models with considerable ease. Our simulations have been heavily reliant on computational resources on the UK's NGS (National Grid Service), the US TeraGrid and the Distributed European Infrastructure for Supercomputing Applications (DEISA). In order to utilise computational grids we rely on grid middleware to launch, computationally steer and visualise our simulations. We have integrated the RealityGrid steering library into the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) [1], which has enabled us to perform remote computational steering and visualisation of molecular dynamics simulations on grid infrastructures. We also use the Application Hosting Environment (AHE) [2] in order to launch simulations on remote supercomputing resources, and we show that data transfer rates between local clusters and supercomputing resources can be considerably enhanced by using optically switched networks. We perform large scale molecular dynamics simulations of Mg-Al LDHs intercalated with either chloride ions or a mixture of DNA and chloride ions. The systems exhibit undulatory modes, which are suppressed in smaller scale simulations, caused by the collective thermal motion of atoms in the LDH layers. Thermal undulations provide elastic properties of the system including the bending modulus, Young's moduli and Poisson's ratios. To explore the interaction between LDHs and DNA, we use molecular dynamics techniques to perform simulations of double stranded, linear and plasmid DNA up
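
    The undulation analysis mentioned above can be illustrated with the continuum (Helfrich) result <|h_q|^2> = kB*T / (A * kappa * q^4): fitting the height-fluctuation spectrum recovers the bending modulus kappa. The spectrum below is synthetic and all constants are placeholders, not values from the thesis:

        import numpy as np

        kB_T = 4.11e-21          # J, thermal energy at ~298 K
        area = 1.0e-15           # projected layer area in m^2 (hypothetical)
        kappa_true = 1.0e-19     # J, used only to fabricate the synthetic spectrum

        q = np.linspace(1e8, 5e8, 20)                    # wavevectors, 1/m
        spectrum = kB_T / (area * kappa_true * q**4)     # synthetic <|h_q|^2>

        # Fit: log <|h_q|^2> = log(kB_T / (A * kappa)) - 4 * log(q).
        intercept = np.mean(np.log(spectrum) + 4.0 * np.log(q))
        kappa_fit = kB_T / (area * np.exp(intercept))
        print(f"recovered bending modulus: {kappa_fit:.2e} J")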

  16. Mechanisation of large-scale agricultural fields in developing countries - a review.

    PubMed

    Onwude, Daniel I; Abdulstter, Rafia; Gomes, Chandima; Hashim, Norhashila

    2016-09-01

    Mechanisation of large-scale agricultural fields often requires the application of modern technologies such as mechanical power, automation, control and robotics. These technologies are generally associated with relatively well developed economies. The application of these technologies in some developing countries in Africa and Asia is limited by factors such as technology compatibility with the environment, availability of resources to facilitate the technology adoption, cost of technology purchase, government policies, adequacy of technology and appropriateness in addressing the needs of the population. As a result, many of the available resources have been used inadequately by farmers, who continue to rely mostly on conventional means of agricultural production, using traditional tools and equipment in most cases. This has led to low productivity and high cost of production, among others. Therefore this paper attempts to evaluate the application of present day technology and its limitations to the advancement of large-scale mechanisation in developing countries of Africa and Asia. Particular emphasis is given to a general understanding of the various levels of mechanisation, present day technology, its management and application to large-scale agricultural fields. This review also emphasizes a future outlook that will enable a gradual, evolutionary and sustainable technological change. The study concludes that large-scale agricultural farm mechanisation for sustainable food production in Africa and Asia must be anchored on a coherent strategy based on the actual needs and priorities of the large-scale farmers. © 2016 Society of Chemical Industry.

  17. Reduced toxicity polyester resins and microvascular pre-preg tapes for advanced composites manufacturing

    NASA Astrophysics Data System (ADS)

    Poillucci, Richard

    Advanced composites manufacturing broadly encapsulates topics ranging from matrix chemistries to automated machines that lay up fiber-reinforced materials. Environmental regulations are stimulating research to reduce matrix resin formulation toxicity. At present, composites fabricated with polyester resins expose workers to the risk of contact with and inhalation of styrene monomer, which is a potential carcinogen, neurotoxin, and respiratory irritant. The first primary goal of this thesis is to reduce the toxicity associated with polyester resins by: (1) identification of potential monomers to replace styrene, (2) determination of monomer solubility within the polyester, and (3) investigation of approaches to rapidly screen a large resin composition parameter space. Monomers are identified based on their ability to react with polyester and their toxicity as determined by the Globally Harmonized System (GHS) and a green screen method. Solubilities were determined by the Hoftyzer-Van Krevelen method, the Hansen solubility parameter database, and experimental mixing of monomers. A combinatorial microfluidic mixing device is designed and tested to obtain distinct resin compositions from two input chemistries. The push for safer materials is complemented by a thrust for multifunctional composites. The second primary goal of this thesis is to design and implement the manufacture of sacrificial fiber materials suitable for use in automated fiber placement of microvascular multifunctional composites. Two key advancements are required to achieve this goal: (1) development of a roll-to-roll method to place sacrificial fibers onto carbon fiber pre-preg tape; and (2) demonstration of feasible manufacture of microvascular carbon fiber plates with automated fiber placement. An automated method for placing sacrificial fibers onto carbon fiber tapes is designed and a prototype implemented. Carbon fiber tows with manual placement of sacrificial fibers is implemented within an
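
    The solubility screening step can be illustrated with the Hansen solubility distance Ra, from the Hoftyzer-Van Krevelen/Hansen framework the thesis cites; a smaller Ra suggests better monomer-polyester compatibility. The parameter values here are hypothetical, not measured data:

        import math

        def hansen_distance(monomer, polymer):
            """Ra^2 = 4*(dD1-dD2)^2 + (dP1-dP2)^2 + (dH1-dH2)^2."""
            dD1, dP1, dH1 = monomer
            dD2, dP2, dH2 = polymer
            return math.sqrt(4 * (dD1 - dD2) ** 2 + (dP1 - dP2) ** 2
                             + (dH1 - dH2) ** 2)

        # (dispersive, polar, H-bonding) parameters in MPa^0.5, hypothetical.
        polyester = (18.0, 10.0, 8.0)
        candidates = {"monomer_A": (17.5, 9.0, 7.0), "monomer_B": (15.0, 4.0, 3.0)}

        for name, params in candidates.items():
            print(name, f"Ra = {hansen_distance(params, polyester):.2f} MPa^0.5")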

  18. Learning networks for sustainable, large-scale improvement.

    PubMed

    McCannon, C Joseph; Perla, Rocco J

    2009-05-01

    Large-scale improvement efforts known as improvement networks offer structured opportunities for exchange of information and insights into the adaptation of clinical protocols to a variety of settings.

  19. Modified gravity and large scale flows, a review

    NASA Astrophysics Data System (ADS)

    Mould, Jeremy

    2017-02-01

    Large scale flows have been a challenging feature of cosmography ever since galaxy scaling relations came on the scene 40 years ago. The next generation of surveys will offer a serious test of the standard cosmology.

  20. Bayesian hierarchical model for large-scale covariance matrix estimation.

    PubMed

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
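
    As a toy illustration of Bayesian covariance estimation (a conjugate inverse-Wishart sketch, not the authors' exact hierarchical model), the posterior mean shrinks the noisy sample covariance toward a prior, taming the overfitting mentioned above:

        import numpy as np

        rng = np.random.default_rng(7)
        p, n = 50, 30                  # more variables than samples: "large-scale"
        true_cov = np.eye(p)
        X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

        S = X.T @ X                    # scatter matrix (known zero mean)
        nu = p + 4                     # prior degrees of freedom (hypothetical)
        Psi = np.eye(p) * (nu - p - 1) # prior scale, centred on the identity

        # Posterior mean of Sigma under an inverse-Wishart IW(Psi, nu) prior.
        sigma_post = (Psi + S) / (nu + n - p - 1)

        sample_cov = S / n
        print("error, sample cov :", np.linalg.norm(sample_cov - true_cov))
        print("error, posterior  :", np.linalg.norm(sigma_post - true_cov))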

  1. A study of MLFMA for large-scale scattering problems

    NASA Astrophysics Data System (ADS)

    Hastriter, Michael Larkin

    This research is centered on computational electromagnetics with a focus on solving large-scale problems accurately and in a timely fashion using first-principles physics. Error control of the translation operator in 3-D is shown. A parallel implementation of the multilevel fast multipole algorithm (MLFMA) was studied with respect to parallel efficiency and scaling. The large-scale scattering program (LSSP), based on the ScaleME library, was used to solve ultra-large-scale problems, including a 200λ sphere with 20 million unknowns. As these large-scale problems were solved, techniques were developed to accurately estimate the memory requirements. Careful memory management is needed in order to solve these massive problems. The study of MLFMA in large-scale problems revealed significant errors that stemmed from inconsistencies in constants used by different parts of the algorithm. These were fixed to produce the most accurate data possible for large-scale surface scattering problems. Data were calculated on a missile-like target using both high-frequency methods and MLFMA. These data were compared and analyzed to determine possible strategies to increase data acquisition speed and accuracy through hybridization of multiple computation methods.
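
    The memory-estimation arithmetic alluded to can be sketched as follows; the discretization density and the O(N log N) storage proxy are illustrative assumptions, not the LSSP estimator:

        import math

        radius_lam = 100.0                        # 200-lambda-diameter sphere
        area_lam2 = 4 * math.pi * radius_lam**2   # surface area in square wavelengths
        unknowns_per_lam2 = 160                   # assumed surface-mesh density

        N = area_lam2 * unknowns_per_lam2
        bytes_per_entry = 8                       # single-precision complex
        mem_bytes = bytes_per_entry * N * math.log2(N)   # crude O(N log N) proxy

        print(f"N ~ {N:.2e} unknowns")            # ~2e7, matching the 20 million above
        print(f"memory ~ {mem_bytes / 2**30:.0f} GiB (order of magnitude only)")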

  2. Satellite measurements of large-scale air pollution: Methods

    SciTech Connect

    Kaufman, Y.J.; Fraser, R.S.; Ferrare, R.A.

    1990-06-20

    A method is presented for simultaneous determination of the aerosol optical thickness (τ_a), particle size (r_m, geometric mean mass radius for a lognormal distribution) and the single scattering albedo (ω_0, the ratio between scattering and scattering plus absorption) from satellite imagery. The method is based on satellite images of the surface (land and water) in the visible and near-IR bands and is applied here to the first two channels of the Advanced Very High Resolution Radiometer (AVHRR) sensor. The aerosol characteristics are obtained from the difference in the upward radiances, detected by the satellite, between a clear and a hazy day. Therefore the method is mainly useful for remote sensing of large-scale air pollution (e.g., smoke from a large fire or concentrated anthropogenic pollution), which introduces dense aerosol into the atmosphere (aerosol optical thickness ≥ 0.4) on top of an existing aerosol. The method is very sensitive to the stability of the surface reflectance between the clear day and the hazy day. It also requires accurate satellite calibration (preferably not more than 5% error) and stable calibration with good relative values between the two bands used in the analysis. With these requirements, the aerosol optical thickness can be derived with an error of Δτ_a = 0.08-0.15. For an assumed lognormal size distribution, the particle geometric mean mass radius r_m can be derived (if good calibration is available) with an error of Δr_m = ±(0.10-0.20) μm, and ω_0 with Δω_0 = ±0.03 for ω_0 close to 1 and Δω_0 = ±(0.03-0.07) for ω_0 about 0.8. The method was applied to AVHRR images of forest fire smoke.
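
    For orientation, a schematic single-scattering relation (an illustrative simplification, not the paper's full radiative-transfer retrieval) connects the hazy-minus-clear reflectance difference to the quantities above:

        \rho_{\mathrm{hazy}} - \rho_{\mathrm{clear}} \;\approx\; \frac{\omega_0 \, \tau_a \, P(\Theta)}{4 \, \mu \, \mu_0}

    Here ρ is the top-of-atmosphere reflectance, P(Θ) is the aerosol scattering phase function (which carries the particle-size dependence), and μ and μ_0 are the cosines of the view and solar zenith angles. Measuring the difference in two spectral bands is what allows τ_a, r_m, and ω_0 to be separated.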

  3. ELECTRON PARAMAGNETIC RESONANCE DOSIMETRY FOR A LARGE-SCALE RADIATION INCIDENT

    PubMed Central

    Swartz, Harold M.; Flood, Ann Barry; Williams, Benjamin B.; Dong, Ruhong; Swarts, Steven G.; He, Xiaoming; Grinberg, Oleg; Sidabras, Jason; Demidenko, Eugene; Gui, Jiang; Gladstone, David J.; Jarvis, Lesley A.; Kmiec, Maciej M.; Kobayashi, Kyo; Lesniewski, Piotr N.; Marsh, Stephen D.P.; Matthews, Thomas P.; Nicolalde, Roberto J.; Pennington, Patrick M.; Raynolds, Timothy; Salikhov, Ildar; Wilcox, Dean E.; Zaki, Bassem I.

    2013-01-01

    With possibilities for radiation terrorism and intensified concerns about nuclear accidents since the recent Fukushima Daiichi event, the potential exposure of large numbers of individuals to radiation that could lead to acute clinical effects has become a major concern. For the medical community to cope with such an event and avoid overwhelming the medical care system, it is essential to identify not only individuals who have received clinically significant exposures and need medical intervention but also those who do not need treatment. The ability of electron paramagnetic resonance to measure radiation-induced paramagnetic species, which persist in certain tissues (e.g., teeth, fingernails, toenails, bone, and hair), has led this technique to become a prominent method for screening significantly exposed individuals. Although the technical requirements needed to develop this method for effective application in a radiation event are daunting, remarkable progress has been made. In collaboration with General Electric, and through funding committed by the Biomedical Advanced Research and Development Authority, electron paramagnetic resonance tooth dosimetry of the upper incisors is being developed to become a Food and Drug Administration-approved and manufacturable device designed to carry out triage for a threshold dose of 2 Gy. Significant progress has also been made in the development of electron paramagnetic resonance nail dosimetry based on measurements of nails in situ under point-of-care conditions, and in the near future this may become a second field-ready technique. Based on recent progress in measurements of nail clippings, we anticipate that this technique may be implementable at remotely located laboratories to provide additional information when the measurements of dose on site need to be supplemented. We conclude that electron paramagnetic resonance dosimetry is likely to be a useful part of triage for a large-scale radiation incident. PMID:22850230

  4. Scalable techniques for the analysis of large-scale materials data

    NASA Astrophysics Data System (ADS)

    Samudrala, Sai Kiranmayee

    Many physical systems of fundamental and industrial importance are significantly affected by the development of new materials. By establishing process-structure-property relationships, one can design new, tailor-made materials that possess desired properties. Conventional experimental and analytical techniques like first-principles calculations, though accurate, are extremely tedious and resource-intensive, resulting in a significant gap between the time of discovery of a new material and the time it is put to engineering practice. Furthermore, the huge amount of data produced by these techniques poses a tough challenge in terms of analysis. This thesis addresses the challenges in analyzing huge datasets by leveraging advanced mathematical and computational techniques in order to establish process-structure-property relationships of materials. The first of the three parts of this thesis describes the application of dimensionality reduction (DR) techniques to analyze a dataset of apatites described in a structural descriptor space. This data reveals interesting correlations between structural descriptors like ionic radius and covalence and characteristic properties like apatite stability, information crucial to promote the use of apatites as an antidote in lead poisoning. The second part of the thesis describes a parallel spectral DR framework that can process thousands of points lying in a million-dimensional space, which is beyond the reach of currently available tools. To further demonstrate the applicability of our framework, we perform dimensionality reduction of 75,000 images representing morphology evolution during the manufacturing of organic solar cells in order to identify the optimal processing parameters. The third approach discussed in this thesis applies well-studied graph-theoretic methods to analyze large datasets produced by Atom Probe Tomography (APT) to quantify the morphology of precipitates in a solvent material. The above three mathematical models
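
    A generic illustration of the dimensionality-reduction step, using scikit-learn's PCA as a simple stand-in for the thesis's parallel spectral DR framework; the descriptor matrix is synthetic, built to lie near a 3-dimensional latent subspace:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(3)
        # 1,000 hypothetical samples described by 10,000 structural descriptors.
        latent = rng.standard_normal((1000, 3))
        mixing = rng.standard_normal((3, 10_000))
        X = latent @ mixing + 0.01 * rng.standard_normal((1000, 10_000))

        # Embed into 3 dimensions; the variance ratios confirm the latent rank.
        embedding = PCA(n_components=3).fit(X)
        print("explained variance ratios:", embedding.explained_variance_ratio_)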

  5. Isotope Separation and Advanced Manufacturing Technology. ISAM semiannual report, Volume 3, Number 1, October 1993--March 1994

    SciTech Connect

    Carpenter, J.; Kan, T.

    1994-10-01

    This is the fourth issue of a semiannual report for the Isotope Separation and Advanced Materials Manufacturing (ISAM) Technology Program at Lawrence Livermore National Laboratory. Primary objectives include: (I) the Uranium Atomic Vapor Laser Isotope Separation (UAVLIS) process, which is being developed and prepared for deployment as an advanced uranium enrichment capability; (II) Advanced manufacturing technologies, which include industrial laser and E-beam material processing and new manufacturing technologies for uranium, plutonium, and other strategically important materials in support of DOE and other national applications. This report features progress in the ISAM Program from October 1993 through March 1994. Selected papers were indexed separately for inclusion in the Energy Science and Technology Database.

  6. Advanced surface chemical analysis of continuously manufactured drug loaded composite pellets.

    PubMed

    Hossain, Akter; Nandi, Uttom; Fule, Ritesh; Nokhodchi, Ali; Maniruzzaman, Mohammed

    2017-04-15

    The aim of the present study was to develop and characterise polymeric composite pellets by means of continuous melt extrusion techniques. Powder blends of a steroid hormone (SH) as a model drug and either ethyl cellulose (EC N10 and EC P7 grades) or hydroxypropyl methylcellulose (HPMC AS grade) as the polymeric carrier were extruded using a Pharma 11 mm twin-screw extruder in a continuous mode of operation to manufacture extruded composite pellets of 1 mm length. A molecular modelling study using the commercial Gaussian 09 software outlined a possible drug-polymer interaction at the molecular level in the solid dispersions of the drug developed in the pellets. Solid-state analysis conducted via differential scanning calorimetry (DSC), hot stage microscopy (HSM) and X-ray powder diffraction (XRPD) revealed the amorphous state of the drug in the polymer matrices. Surface analysis using SEM/energy dispersive X-ray (EDX) of the produced pellets showed an apparently homogenous distribution of the C and O atoms in the pellet matrices. Moreover, advanced chemical surface analysis conducted via atomic force microscopy (AFM) showed a homogenous single-phase system with the drug molecules dispersed in the amorphous matrices, while Raman mapping confirmed the homogenous single-phase drug distribution in the manufactured composite pellets. Such composite pellets are expected to deliver multidisciplinary applications in drug delivery and medical sciences by, e.g., modifying drug solubility/dissolution or stabilizing unstable drugs (e.g. hormones, proteins) in the composite network.

  7. Analysis of advanced vapor source for cadmium telluride solar cell manufacturing

    NASA Astrophysics Data System (ADS)

    Khetani, Tejas Harshadkumar

    A thin film CdS/CdTe solar cell manufacturing line has been developed in the Materials Engineering Laboratory at Colorado State University. The original design incorporated infrared lamps for heating the vapor source. This system has been redesigned to improve the energy efficiency of the system, allow co-sublimation, and allow longer run times before the sources have to be replenished. The advanced vapor source incorporates conduction heating with heating elements embedded in graphite. The advanced vapor source was modeled by computational fluid dynamics (CFD). From these models, the required maximum operating temperature of the element was determined to be 720 °C for the processing of CdS/CdTe solar cells. Nichrome and Kanthal A1 were primarily selected for this application at a temperature of 720 °C in vacuum with an oxygen partial pressure. Research on oxidation effects, life due to oxidation, and creep deformation was conducted, and Nichrome was found to be more suitable for this application. A study of the life of the Nichrome heating elements in this application was conducted, and the estimated life is approximately 1900 years for repeated on-off application. This is many orders of magnitude longer than the life of infrared heat lamps. Ceramic cement based on aluminum oxide (Resbond 920) is used for bonding the elements to the graphite. Thermodynamic calculations showed that this cement is inert to the heating element. An earlier design of the advanced source encountered failure of the element. The failed element was studied by scanning electron microscopy, and the failure was attributed to loss of adhesion between the graphite and the ceramic element. The design has been modified and the advanced vapor source is currently in operation.

  8. Recursive architecture for large-scale adaptive system

    NASA Astrophysics Data System (ADS)

    Hanahara, Kazuyuki; Sugiyama, Yoshihiko

    1994-09-01

    'Large scale' is one of the major trends in the research and development of recent engineering, especially in the field of aerospace structural systems. The term expresses the large scale of an artifact in general; however, it also usually implies the large number of components which make up the artifact. Considering a large scale system intended for use in remote space or the deep sea, such a system should be adaptive as well as robust by itself, because control and maintenance by human operators are not easy due to the remoteness. An approach to realizing such a large scale, adaptive and robust system is to build the system as an assemblage of components which are each adaptive by themselves. In this case, the robustness of the system can be achieved by using a large number of such components together with suitable adaptation and maintenance strategies. Such systems have gathered much research interest, and studies on topics such as decentralized motion control, configuration algorithms and the characteristics of structural elements have been reported. In this article, a recursive architecture concept is developed and discussed towards the realization of a large scale system which consists of a number of uniform adaptive components. We propose an adaptation strategy based on the architecture and its implementation by means of hierarchically connected processing units. The robustness and the restoration from degeneration of the processing unit are also discussed. Two- and three-dimensional adaptive truss structures are conceptually designed based on the recursive architecture.

  9. EINSTEIN'S SIGNATURE IN COSMOLOGICAL LARGE-SCALE STRUCTURE

    SciTech Connect

    Bruni, Marco; Hidalgo, Juan Carlos; Wands, David

    2014-10-10

    We show how the nonlinearity of general relativity generates a characteristic non-Gaussian signal in cosmological large-scale structure that we calculate at all perturbative orders in a large-scale limit. Newtonian gravity and general relativity provide complementary theoretical frameworks for modeling large-scale structure in ΛCDM cosmology; a relativistic approach is essential to determine initial conditions, which can then be used in Newtonian simulations studying the nonlinear evolution of the matter density. Most inflationary models in the very early universe predict an almost Gaussian distribution for the primordial metric perturbation, ζ. However, we argue that it is the Ricci curvature of comoving-orthogonal spatial hypersurfaces, R, that drives structure formation at large scales. We show how the nonlinear relation between the spatial curvature, R, and the metric perturbation, ζ, translates into a specific non-Gaussian contribution to the initial comoving matter density that we calculate for the simple case of an initially Gaussian ζ. Our analysis shows the nonlinear signature of Einstein's gravity in large-scale structure.

  10. Modulation analysis of large-scale discrete vortices.

    PubMed

    Cisneros, Luis A; Minzoni, Antonmaria A; Panayotaros, Panayotis; Smyth, Noel F

    2008-09-01

    The behavior of large-scale vortices governed by the discrete nonlinear Schrödinger equation is studied. Using a discrete version of modulation theory, it is shown how vortices are trapped and stabilized by the self-consistent Peierls-Nabarro potential that they generate in the lattice. Large-scale circular and polygonal vortices are studied away from the anticontinuum limit, which is the limit considered in previous studies. In addition numerical studies are performed on large-scale, straight structures, and it is found that they are stabilized by a nonconstant mean level produced by standing waves generated at the ends of the structure. Finally, numerical evidence is produced for long-lived, localized, quasiperiodic structures.
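
    A minimal sketch of the discrete nonlinear Schrödinger (DNLS) lattice studied above, i dpsi_n/dt + (psi_{n+1} + psi_{n-1} - 2 psi_n) + |psi_n|^2 psi_n = 0, integrated with a plain RK4 step on a 1D periodic chain; the paper's vortices live on 2D lattices, so this is illustrative only:

        import numpy as np

        def rhs(psi):
            # Discrete Laplacian on a periodic 1D chain plus cubic nonlinearity.
            lap = np.roll(psi, 1) + np.roll(psi, -1) - 2 * psi
            return 1j * (lap + np.abs(psi) ** 2 * psi)

        def rk4_step(psi, dt):
            k1 = rhs(psi)
            k2 = rhs(psi + 0.5 * dt * k1)
            k3 = rhs(psi + 0.5 * dt * k2)
            k4 = rhs(psi + dt * k3)
            return psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        n, dt = 64, 0.01
        # Localized initial pulse (arbitrary choice for the demonstration).
        psi = np.exp(-0.5 * (np.arange(n) - n / 2) ** 2).astype(complex)
        norm0 = np.sum(np.abs(psi) ** 2)
        for _ in range(1000):
            psi = rk4_step(psi, dt)
        # The lattice power is conserved by DNLS; RK4 drift is O(dt^4).
        print("norm drift:", np.sum(np.abs(psi) ** 2) - norm0)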

  11. Large-scale simulations of complex physical systems

    NASA Astrophysics Data System (ADS)

    Belić, A.

    2007-04-01

    Scientific computing has become a tool as vital as experimentation and theory for dealing with scientific challenges of the twenty-first century. Large scale simulations and modelling serve as heuristic tools in a broad problem-solving process. High-performance computing facilities make possible the first step in this process - a view of new and previously inaccessible domains in science and the building up of intuition regarding the new phenomenology. The final goal of this process is to translate this newly found intuition into better algorithms and new analytical results. In this presentation we give an outline of the research themes pursued at the Scientific Computing Laboratory of the Institute of Physics in Belgrade regarding large-scale simulations of complex classical and quantum physical systems, and present recent results obtained in the large-scale simulations of granular materials and path integrals.

  12. Large-scale velocity structures in turbulent thermal convection.

    PubMed

    Qiu, X L; Tong, P

    2001-09-01

    A systematic study of large-scale velocity structures in turbulent thermal convection is carried out in three different aspect-ratio cells filled with water. Laser Doppler velocimetry is used to measure the velocity profiles and statistics over varying Rayleigh numbers Ra and at various spatial positions across the whole convection cell. Large velocity fluctuations are found both in the central region and near the cell boundary. Despite the large velocity fluctuations, the flow field still maintains a large-scale quasi-two-dimensional structure, which rotates in a coherent manner. This coherent single-roll structure scales with Ra and can be divided into three regions in the rotation plane: (1) a thin viscous boundary layer, (2) a fully mixed central core region with a constant mean velocity gradient, and (3) an intermediate plume-dominated buffer region. The experiment reveals a unique driving mechanism for the large-scale coherent rotation in turbulent convection.

  13. Acoustic Studies of the Large Scale Ocean Circulation

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris

    1999-01-01

    Detailed knowledge of ocean circulation and its transport properties is prerequisite to an understanding of the earth's climate and of important biological and chemical cycles. Results from two recent experiments, THETIS-2 in the Western Mediterranean and ATOC in the North Pacific, illustrate the use of ocean acoustic tomography for studies of the large scale circulation. The attraction of acoustic tomography is its ability to sample and average the large-scale oceanic thermal structure, synoptically, along several sections, and at regular intervals. In both studies, the acoustic data are compared to, and then combined with, general circulation models, meteorological analyses, satellite altimetry, and direct measurements from ships. Both studies provide complete regional descriptions of the time-evolving, three-dimensional, large scale circulation, albeit with large uncertainties. The studies raise serious issues about existing ocean observing capability and provide guidelines for future efforts.

  14. A relativistic signature in large-scale structure

    NASA Astrophysics Data System (ADS)

    Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David

    2016-09-01

    In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.

  15. Toward Improved Support for Loosely Coupled Large Scale Simulation Workflows

    SciTech Connect

    Boehm, Swen; Elwasif, Wael R; Naughton, III, Thomas J; Vallee, Geoffroy R

    2014-01-01

    High-performance computing (HPC) workloads are increasingly leveraging loosely coupled large scale simulations. Unfortunately, most large-scale HPC platforms, including Cray/ALPS environments, are designed for the execution of long-running jobs based on coarse-grained launch capabilities (e.g., one MPI rank per core on all allocated compute nodes). This assumption limits capability-class workload campaigns that require large numbers of discrete or loosely coupled simulations and for which time-to-solution is a critical pacing issue. This paper describes the challenges related to the support of fine-grained launch capabilities that are necessary for the execution of loosely coupled large scale simulations on Cray/ALPS platforms. More precisely, we present the details of an enhanced runtime system to support this use case, and report on initial results from early testing on systems at Oak Ridge National Laboratory.
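
    The ORNL runtime itself is not described in enough detail here to reproduce, but the workload pattern it targets, many short independent tasks sharing one coarse-grained allocation, can be illustrated with a hypothetical mpi4py task farm (mpi4py is a third-party package; the sketch assumes at least two MPI ranks):

      # Hypothetical task farm: rank 0 dispatches independent simulation tasks;
      # workers run them and report back, so many short jobs share one allocation.
      from mpi4py import MPI

      def run_simulation(task_id):
          return task_id ** 2  # stand-in for one loosely coupled simulation

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      TASKS = list(range(100))  # made-up campaign of 100 discrete runs

      if rank == 0:
          status = MPI.Status()
          next_task, done = 0, 0
          for worker in range(1, size):  # prime each worker, or stop it if no work
              if next_task < len(TASKS):
                  comm.send(TASKS[next_task], dest=worker); next_task += 1
              else:
                  comm.send(None, dest=worker)
          while done < len(TASKS):
              result = comm.recv(source=MPI.ANY_SOURCE, status=status)
              done += 1
              worker = status.Get_source()
              if next_task < len(TASKS):
                  comm.send(TASKS[next_task], dest=worker); next_task += 1
              else:
                  comm.send(None, dest=worker)  # no work left: tell worker to stop
          print("completed", done, "tasks")
      else:
          while True:
              task = comm.recv(source=0)
              if task is None:
                  break
              comm.send(run_simulation(task), dest=0)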

  16. PKI security in large-scale healthcare networks.

    PubMed

    Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos

    2012-06-01

    During the past few years, many PKIs (Public Key Infrastructures) have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, these healthcare PKIs face a plethora of challenges, particularly when deployed over large-scale healthcare networks. In this paper, we propose a PKI infrastructure to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI infrastructure addresses the trust issues that arise in a large-scale healthcare network, including multi-domain PKI infrastructures.
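
    The proposed infrastructure is not specified in detail in this record, but the basic trust operation any such PKI must support, checking that a healthcare unit's certificate was issued by a trusted CA, can be sketched with the third-party Python 'cryptography' package (file names are hypothetical, and an RSA/PKCS#1 v1.5 CA key is assumed):

      # Verify that a healthcare unit's certificate was signed by a trusted CA.
      from cryptography import x509
      from cryptography.hazmat.primitives.asymmetric import padding

      with open("regional_ca.pem", "rb") as f:      # trusted CA certificate
          ca_cert = x509.load_pem_x509_certificate(f.read())
      with open("clinic_unit.pem", "rb") as f:      # end-entity certificate
          unit_cert = x509.load_pem_x509_certificate(f.read())

      # Check the CA's signature over the unit certificate's TBS bytes; raises
      # InvalidSignature on failure. (ECDSA CA keys would verify differently.)
      ca_cert.public_key().verify(
          unit_cert.signature,
          unit_cert.tbs_certificate_bytes,
          padding.PKCS1v15(),
          unit_cert.signature_hash_algorithm,
      )
      print("chain verified for:", unit_cert.subject.rfc4514_string())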

  17. Operating experience and multi-fuel capability of large-scale CFB boilers

    SciTech Connect

    Cleve, K.; Smith, T.V.

    1997-12-31

    Large-scale (250 MWe) circulating fluidized bed (CFB) boilers capable of effectively utilising a wide range of low-grade fuels in an environmentally acceptable manner are now a well-proven and reliable technology. Development of this technology and its innovative design features continues, and three plants, each representing a significant advance in its own way, are discussed. Key technical features and operating experience, including availability, are reviewed. Fuel variability and test data are also presented. 9 figs.

  18. Manufacturing Methods and Technology (MANTECH) Program Manufacturing Techniques for a Composite Tail Section for the Advanced Attack Helicopter.

    DTIC Science & Technology

    1981-10-01

    (Abstract largely illegible in the source scan; recoverable fragments follow.) ... sized to accommodate the local load requirements. ... thickness, including laminae and honeycomb core, is illustrated ... occur as a result of manufacturing defects, storage, environmental conditions, in-service conditions, and production techniques. Specific areas of concern ... Antenna impedance, pattern, and gain were measured. ... the study was to determine if the existing ... vertical ... trailing edge

  19. Latest advances in the manufacturing of 3D rechargeable lithium microbatteries

    NASA Astrophysics Data System (ADS)

    Ferrari, Stefania; Loveridge, Melanie; Beattie, Shane D.; Jahn, Marcus; Dashwood, Richard J.; Bhagat, Rohit

    2015-07-01

    Recent advances in micro- and nano-electromechanical systems (MEMS/NEMS) technology have led to a niche industry of diverse small-scale devices that include microsensors, micromachines and drug-delivery systems. For these devices, there is an urgent need to develop Micro Lithium Ion Batteries (MLIBs) with dimensions on the scale of 1-10 mm³, enabling on-board power delivery. Unfortunately, power limitations are inherent in planar 2D cells, and only the advent of 3D designs and microarchitectures will lead to a real breakthrough in microbattery technology. During the last few years, many efforts to optimise MLIBs have been discussed in the literature, in both the planar and 3D configurations. This review highlights the importance of 3D microarchitectured electrodes for fabricating device-integrated batteries with exceptionally high specific power density coupled with exquisite miniaturisation. A wide literature overview is provided, and recent advances in manufacturing routes to 3D-MLIBs, comprising materials synthesis, device formulation, and device testing, are herein discussed. The advent of simple, economic and easily scalable fabrication processes such as 3D printing will have a decisive role in the growing field of micropower sources and microdevices.

  20. Magnetic Helicity and Large Scale Magnetic Fields: A Primer

    NASA Astrophysics Data System (ADS)

    Blackman, Eric G.

    2015-05-01

    Magnetic fields of laboratory, planetary, stellar, and galactic plasmas commonly exhibit significant order on large temporal or spatial scales compared to the otherwise random motions within the hosting system. Such ordered fields can be measured in the case of planets, stars, and galaxies, or inferred indirectly by the action of their dynamical influence, such as jets. Whether large scale fields are amplified in situ or are a remnant from previous stages of an object's history is often debated for objects without a definitive magnetic activity cycle. Magnetic helicity, a measure of twist and linkage of magnetic field lines, is a unifying tool for understanding large scale field evolution for both mechanisms of origin. Its importance stems from its two basic properties: (1) magnetic helicity is typically better conserved than magnetic energy; and (2) the magnetic energy associated with a fixed amount of magnetic helicity is minimized when the system relaxes this helical structure to the largest scale available. Here I discuss how magnetic helicity has come to help us understand the saturation and sustenance of large scale dynamos, the need for either local or global helicity fluxes to avoid dynamo quenching, and the associated observational consequences. I also discuss how magnetic helicity acts as a hindrance to turbulent diffusion of large scale fields, and thus as an aid to fossil-remnant models of large-scale field origin in some contexts. I briefly discuss the connection between large scale fields and accretion disk theory as well. The goal here is to provide a conceptual primer to help the reader efficiently penetrate the literature.

  1. Improvement of process control using wafer geometry for enhanced manufacturability of advanced semiconductor devices

    NASA Astrophysics Data System (ADS)

    Lee, Honggoo; Lee, Jongsu; Kim, Sang Min; Lee, Changhwan; Han, Sangjun; Kim, Myoungsoo; Kwon, Wontaik; Park, Sung-Ki; Vukkadala, Pradeep; Awasthi, Amartya; Kim, J. H.; Veeraraghavan, Sathish; Choi, DongSub; Huang, Kevin; Dighe, Prasanna; Lee, Cheouljung; Byeon, Jungho; Dey, Soham; Sinha, Jaydeep

    2015-03-01

    Aggressive advancements in semiconductor technology have resulted in integrated chip (IC) manufacturing capability at sub-20nm half-pitch nodes. With this, lithography overlay error budgets are becoming increasingly stringent. The delay in EUV lithography readiness for high volume manufacturing (HVM) and the need for multiple-patterning lithography with 193i technology have further amplified the overlay issue. Thus there exists a need for technologies that can reduce overlay errors in HVM. The traditional method for reducing overlay errors predominantly focused on improving lithography scanner printability performance. However, error contributions from processes outside the lithography sector, known as process-induced overlay errors, can contribute significantly to the total overlay at the current requirements. Monitoring and characterizing process-induced overlay has become critical for advanced node patterning. Recently a relatively new technique for overlay control that uses high-resolution wafer geometry measurements has gained significance. In this work we present the implementation of this technique in an IC fabrication environment to monitor wafer geometry changes induced across several points in the process flow, on multiple product layers with critical overlay performance requirements. Several production wafer lots were measured and analyzed on a patterned wafer geometry tool. Changes induced in wafer geometry as a result of wafer processing were related to down-stream overlay error contribution using the analytical in-plane distortion (IPD) calculation model. Through this segmentation, process steps that are major contributors to down-stream overlay were identified. Subsequent process optimization was then isolated to those process steps where maximum benefit might be realized. Root causes for the within-wafer, wafer-to-wafer, tool-to-tool, and station-to-station variations observed were further investigated using local shape curvature changes, which are directly related to
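
    A toy version of the in-plane distortion (IPD) idea makes the geometry-to-overlay link concrete. The sketch below is hypothetical and greatly simplified relative to the analytical model referenced above: it assumes a thin-plate relation in which a wafer-shape change w(x, y) between two process steps displaces the surface in-plane by roughly -(t/2)∇w, so measured slope changes map to overlay-relevant distortion.

      # Toy in-plane-distortion estimate from a wafer-shape change (illustrative only).
      import numpy as np

      t = 775e-6                                   # assumed wafer thickness [m]
      x = y = np.linspace(-0.15, 0.15, 301)        # 300 mm wafer grid [m]
      X, Y = np.meshgrid(x, y)

      shape_pre = 2e-6 * np.exp(-(X**2 + Y**2) / 0.01)   # shape before process [m]
      shape_post = shape_pre + 0.5e-6 * X**2 / 0.0225    # process adds a bow term

      dw = shape_post - shape_pre                  # shape change [m]
      dwdy, dwdx = np.gradient(dw, y, x)           # slopes of the shape change
      ipd_x, ipd_y = -0.5 * t * dwdx, -0.5 * t * dwdy    # thin-plate estimate [m]

      print("max |IPD| ~ %.1f nm" % (np.hypot(ipd_x, ipd_y).max() * 1e9))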

  2. HiQuant: Rapid Postquantification Analysis of Large-Scale MS-Generated Proteomics Data.

    PubMed

    Bryan, Kenneth; Jarboui, Mohamed-Ali; Raso, Cinzia; Bernal-Llinares, Manuel; McCann, Brendan; Rauch, Jens; Boldt, Karsten; Lynn, David J

    2016-06-03

    Recent advances in mass-spectrometry-based proteomics are now facilitating ambitious large-scale investigations of the spatial and temporal dynamics of the proteome; however, the increasing size and complexity of these data sets is overwhelming current downstream computational methods, specifically those that support the postquantification analysis pipeline. Here we present HiQuant, a novel application that enables the design and execution of a postquantification workflow, including common data-processing steps, such as assay normalization and grouping, and experimental replicate quality control and statistical analysis. HiQuant also enables the interpretation of results generated from large-scale data sets by supporting interactive heatmap analysis and also the direct export to Cytoscape and Gephi, two leading network analysis platforms. HiQuant may be run via a user-friendly graphical interface and also supports complete one-touch automation via a command-line mode. We evaluate HiQuant's performance by analyzing a large-scale, complex interactome mapping data set and demonstrate a 200-fold improvement in the execution time over current methods. We also demonstrate HiQuant's general utility by analyzing proteome-wide quantification data generated from both a large-scale public tyrosine kinase siRNA knock-down study and an in-house investigation into the temporal dynamics of the KSR1 and KSR2 interactomes. Download HiQuant, sample data sets, and supporting documentation at http://hiquant.primesdb.eu .
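
    HiQuant's own pipeline is not reproduced here, but two of the steps the abstract names, assay normalization and replicate quality control, can be illustrated with a minimal pandas sketch (the column names, median normalization, and 20% CV threshold are all illustrative assumptions):

      # Median-normalize each assay column, then flag proteins whose replicate
      # coefficient of variation (CV) exceeds a tolerance.
      import pandas as pd

      df = pd.DataFrame(
          {"rep1": [4.0, 9.1, 2.2], "rep2": [4.4, 8.7, 2.0], "rep3": [3.9, 9.5, 2.6]},
          index=["protA", "protB", "protC"],       # made-up example intensities
      )

      normalized = df / df.median()                          # per-assay normalization
      cv = normalized.std(axis=1) / normalized.mean(axis=1)  # replicate CV per protein
      passed_qc = normalized[cv < 0.20]                      # keep CV under 20%
      print(passed_qc)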

  3. Large scale purification of RNA nanoparticles by preparative ultracentrifugation.

    PubMed

    Jasinski, Daniel L; Schwartz, Chad T; Haque, Farzin; Guo, Peixuan

    2015-01-01

    Purification of large quantities of supramolecular RNA complexes is of paramount importance due to the large quantities of RNA needed and the purity requirements for in vitro and in vivo assays. Purification is generally carried out by liquid chromatography (HPLC), polyacrylamide gel electrophoresis (PAGE), or agarose gel electrophoresis (AGE). Here, we describe an efficient method for the large-scale purification of RNA prepared by in vitro transcription with T7 RNA polymerase, using cesium chloride (CsCl) equilibrium density gradient ultracentrifugation, and for the large-scale purification of RNA nanoparticles by sucrose gradient rate-zonal ultracentrifugation or cushioned sucrose gradient rate-zonal ultracentrifugation.

  4. The Evolution of Baryons in Cosmic Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Snedden, Ali; Arielle Phillips, Lara; Mathews, Grant James; Coughlin, Jared; Suh, In-Saeng; Bhattacharya, Aparna

    2015-01-01

    The environments of galaxies play a critical role in their formation and evolution. We study these environments using cosmological simulations with star formation and supernova feedback included. From these simulations, we parse the large scale structure into clusters, filaments and voids using a segmentation algorithm adapted from medical imaging. We trace the star formation history, gas phase and metal evolution of the baryons in the intergalactic medium as a function of structure. We find that our algorithm reproduces the baryon fraction in the intracluster medium and that the majority of star formation occurs in cold, dense filaments. We present the consequences this large scale environment has for galactic halos and galaxy evolution.
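
    The segmentation algorithm adapted from medical imaging is not reproduced here; as a crude stand-in, the sketch below classifies a toy density field into voids, filaments/sheets, and clusters by simple density thresholds (the field and thresholds are made up):

      # Crude density-threshold segmentation of a toy 3D overdensity field.
      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(0)
      delta = ndimage.gaussian_filter(rng.normal(size=(64, 64, 64)), sigma=3)

      labels = np.zeros(delta.shape, dtype=np.uint8)
      labels[delta < -0.02] = 0                       # voids (assumed threshold)
      labels[(delta >= -0.02) & (delta < 0.05)] = 1   # filaments/sheets
      labels[delta >= 0.05] = 2                       # clusters

      clusters, n_clusters = ndimage.label(labels == 2)  # connected cluster regions
      print("cluster regions found:", n_clusters)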

  5. [Issues of large scale tissue culture of medicinal plant].

    PubMed

    Lv, Dong-Mei; Yuan, Yuan; Zhan, Zhi-Lai

    2014-09-01

    In order to increase the yield and quality of medicinal plants and enhance the competitiveness of the medicinal plant industry in our country, this paper analyzed the status, problems, and countermeasures of large-scale tissue culture of medicinal plants. Although biotechnology is one of the most efficient and promising means of medicinal plant production, problems remain, such as the stability of the material, the safety of transgenic medicinal plants, and the optimization of culture conditions. Establishing a perfect evaluation system according to the characteristics of the medicinal plant is the key measure to ensure the sustainable development of large-scale tissue culture of medicinal plants.

  6. Large-Scale Graph Processing Analysis using Supercomputer Cluster

    NASA Astrophysics Data System (ADS)

    Vildario, Alfrido; Fitriyani; Nugraha Nurkahfi, Galih

    2017-01-01

    Graph implementations are widely used in various sectors such as automotive, traffic, image processing and many more. These applications produce large-scale graphs, so processing them requires long computational times and high-specification resources. This research addressed the analysis of large-scale graph processing using a supercomputer cluster. We implemented graph processing using the Breadth-First Search (BFS) algorithm on a single-destination shortest-path problem. The parallel BFS implementation with Message Passing Interface (MPI) used the supercomputer cluster at the High Performance Computing Laboratory for Computational Science, Telkom University, together with the Stanford Large Network Dataset Collection. The results showed that the implementation achieves an average speedup of more than 30 times with an efficiency of almost 90%.
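
    The serial kernel underneath such an experiment is a level-synchronous BFS; a minimal single-source version in plain Python (the paper's MPI domain decomposition is not shown) looks like this:

      # Level-synchronous BFS over an adjacency list; the returned hop counts
      # solve the unweighted single-source shortest-path problem.
      from collections import deque

      def bfs(adj, source):
          dist = {source: 0}
          frontier = deque([source])
          while frontier:
              u = frontier.popleft()
              for v in adj[u]:
                  if v not in dist:          # first visit gives the hop count
                      dist[v] = dist[u] + 1
                      frontier.append(v)
          return dist

      adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}  # toy graph
      print(bfs(adj, 0))                     # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}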

  7. Corridors Increase Plant Species Richness at Large Scales

    SciTech Connect

    Damschen, Ellen I.; Haddad, Nick M.; Orrock,John L.; Tewksbury, Joshua J.; Levey, Douglas J.

    2006-09-01

    Habitat fragmentation is one of the largest threats to biodiversity. Landscape corridors, which are hypothesized to reduce the negative consequences of fragmentation, have become common features of ecological management plans worldwide. Despite their popularity, there is little evidence documenting the effectiveness of corridors in preserving biodiversity at large scales. Using a large-scale replicated experiment, we showed that habitat patches connected by corridors retain more native plant species than do isolated patches, that this difference increases over time, and that corridors do not promote invasion by exotic species. Our results support the use of corridors in biodiversity conservation.

  8. Clearing and Labeling Techniques for Large-Scale Biological Tissues

    PubMed Central

    Seo, Jinyoung; Choe, Minjin; Kim, Sung-Yon

    2016-01-01

    Clearing and labeling techniques for large-scale biological tissues enable simultaneous extraction of molecular and structural information with minimal disassembly of the sample, facilitating the integration of molecular, cellular and systems biology across different scales. Recent years have witnessed an explosive increase in the number of such methods and their applications, reflecting heightened interest in organ-wide clearing and labeling across many fields of biology and medicine. In this review, we provide an overview and comparison of existing clearing and labeling techniques and discuss challenges and opportunities in the investigations of large-scale biological systems. PMID:27239813

  9. Lightweighting Automotive Materials for Increased Fuel Efficiency and Delivering Advanced Modeling and Simulation Capabilities to U.S. Manufacturers

    SciTech Connect

    Hale, Steve

    2013-09-11

    The National Center for Manufacturing Sciences (NCMS) worked with the U.S. Department of Energy (DOE), National Energy Technology Laboratory (NETL), to bring together research and development (R&D) collaborations to develop and accelerate the knowledge base and infrastructure for lightweighting materials and manufacturing processes for their use in structural applications in the automotive sector. The purpose/importance of this DOE program: • 2016 CAFE standards. • Automotive industry adoption of lightweighting material concepts in the manufacture of production vehicles. • Development and manufacture of advanced research tools for modeling and simulation (M&S) applications to reduce manufacturing and material costs. • U.S. competitiveness that will help drive the development and manufacture of the next generation of materials. NCMS established a focused portfolio of applied R&D projects utilizing lightweighting materials for manufacture into automotive structures and components. Areas that were targeted in this program: • Functionality of new lightweighting materials to meet present safety requirements. • Manufacturability using new lightweighting materials. • Cost reduction for the development and use of new lightweighting materials. The automotive industry’s future continuously evolves through innovation, and lightweight materials are key in achieving a new era of lighter, more efficient vehicles. Lightweight materials are among the technical advances needed to achieve fuel/energy efficiency and reduce carbon dioxide (CO2) emissions: • Establish design criteria methodology to identify the best materials for lightweighting. • Employ state-of-the-art design tools for optimum material development for their specific applications. • Match new manufacturing technology to production volume. • Address new process variability with new production-ready processes.

  10. Large Scale Computing and Storage Requirements for High Energy Physics

    SciTech Connect

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes

  11. Symmetry-guided large-scale shell-model theory

    NASA Astrophysics Data System (ADS)

    Launey, Kristina D.; Dytrych, Tomas; Draayer, Jerry P.

    2016-07-01

    In this review, we present a symmetry-guided strategy that utilizes exact as well as partial symmetries for enabling a deeper understanding of and advancing ab initio studies for determining the microscopic structure of atomic nuclei. These symmetries expose physically relevant degrees of freedom that, for large-scale calculations with QCD-inspired interactions, allow the model space size to be reduced through a very structured selection of the basis states to physically relevant subspaces. This can guide explorations of simple patterns in nuclei and how they emerge from first principles, as well as extensions of the theory beyond current limitations toward heavier nuclei and larger model spaces. This is illustrated for the ab initio symmetry-adapted no-core shell model (SA-NCSM) and two significant underlying symmetries, the symplectic Sp(3, R) group and its deformation-related SU(3) subgroup. We review the broad scope of nuclei where these symmetries have been found to play a key role: from the light p-shell systems, such as 6Li, 8B, 8Be, 12C, and 16O, and sd-shell nuclei exemplified by 20Ne, based on first-principle explorations; through the Hoyle state in 12C and enhanced collectivity in intermediate-mass nuclei, within a no-core shell-model perspective; up to strongly deformed species of the rare-earth and actinide regions, as investigated in earlier studies. A complementary picture, driven by symmetries dual to Sp(3, R), is also discussed. We briefly review symmetry-guided techniques that prove useful in various nuclear-theory models, such as the Elliott model, ab initio SA-NCSM, symplectic model, pseudo-SU(3) and pseudo-symplectic models, ab initio hyperspherical harmonics method, ab initio lattice effective field theory, exact pairing-plus-shell model approaches, and cluster models, including the resonating-group method. Important implications of these approaches that have deepened our understanding of emergent phenomena in nuclei, such as enhanced

  12. Large-scale search for dark-matter axions

    SciTech Connect

    Kinion, D; van Bibber, K

    2000-08-30

    We review the status of two ongoing large-scale searches for axions which may constitute the dark matter of our Milky Way halo. The experiments are based on the microwave cavity technique proposed by Sikivie, and mark a "second generation" relative to the original experiments performed by the Rochester-Brookhaven-Fermilab collaboration and the University of Florida group.

  13. Measurement, Sampling, and Equating Errors in Large-Scale Assessments

    ERIC Educational Resources Information Center

    Wu, Margaret

    2010-01-01

    In large-scale assessments, such as state-wide testing programs, national sample-based assessments, and international comparative studies, there are many steps involved in the measurement and reporting of student achievement. There are always sources of inaccuracies in each of the steps. It is of interest to identify the source and magnitude of…

  14. Resilience of Florida Keys coral communities following large scale disturbances

    EPA Science Inventory

    The decline of coral reefs in the Caribbean over the last 40 years has been attributed to multiple chronic stressors and episodic large-scale disturbances. This study assessed the resilience of coral communities in two different regions of the Florida Keys reef system between 199...

  15. Large-Scale Machine Learning for Classification and Search

    ERIC Educational Resources Information Center

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  16. Efficient On-Demand Operations in Large-Scale Infrastructures

    ERIC Educational Resources Information Center

    Ko, Steven Y.

    2009-01-01

    In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…

  17. Assuring Quality in Large-Scale Online Course Development

    ERIC Educational Resources Information Center

    Parscal, Tina; Riemer, Deborah

    2010-01-01

    Student demand for online education requires colleges and universities to rapidly expand the number of courses and programs offered online while maintaining high quality. This paper outlines two universities respective processes to assure quality in large-scale online programs that integrate instructional design, eBook custom publishing, Quality…

  18. Large-Scale Assessments and Educational Policies in Italy

    ERIC Educational Resources Information Center

    Damiani, Valeria

    2016-01-01

    Despite Italy's extensive participation in most large-scale assessments, their actual influence on Italian educational policies is less easy to identify. The present contribution aims at highlighting and explaining reasons for the weak and often inconsistent relationship between international surveys and policy-making processes in Italy.…

  19. Improving the Utility of Large-Scale Assessments in Canada

    ERIC Educational Resources Information Center

    Rogers, W. Todd

    2014-01-01

    Principals and teachers do not use large-scale assessment results because the lack of distinct and reliable subtests prevents identifying strengths and weaknesses of students and instruction, the results arrive too late to be used, and principals and teachers need assistance to use the results to improve instruction so as to improve student…

  20. Current Scientific Issues in Large Scale Atmospheric Dynamics

    NASA Technical Reports Server (NTRS)

    Miller, T. L. (Compiler)

    1986-01-01

    Topics in large scale atmospheric dynamics are discussed. Aspects of atmospheric blocking, the influence of transient baroclinic eddies on planetary-scale waves, cyclogenesis, the effects of orography on planetary scale flow, small scale frontal structure, and simulations of gravity waves in frontal zones are discussed.

  1. Large-Scale Innovation and Change in UK Higher Education

    ERIC Educational Resources Information Center

    Brown, Stephen

    2013-01-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ…

  2. Mixing Metaphors: Building Infrastructure for Large Scale School Turnaround

    ERIC Educational Resources Information Center

    Peurach, Donald J.; Neumerski, Christine M.

    2015-01-01

    The purpose of this analysis is to increase understanding of the possibilities and challenges of building educational infrastructure--the basic, foundational structures, systems, and resources--to support large-scale school turnaround. Building educational infrastructure often exceeds the capacity of schools, districts, and state education…

  3. Large-Scale Environmental Influences on Aquatic Animal Health

    EPA Science Inventory

    In the latter portion of the 20th century, North America experienced numerous large-scale mortality events affecting a broad diversity of aquatic animals. Short-term forensic investigations of these events have sometimes characterized a causative agent or condition, but have rare...

  4. A bibliographical survey of large-scale systems

    NASA Technical Reports Server (NTRS)

    Corliss, W. R.

    1970-01-01

    A limited, partly annotated bibliography was prepared on the subject of large-scale system control. Approximately 400 references are divided into thirteen application areas, such as large societal systems and large communication systems. A first-author index is provided.

  5. Probabilistic Cuing in Large-Scale Environmental Search

    ERIC Educational Resources Information Center

    Smith, Alastair D.; Hood, Bruce M.; Gilchrist, Iain D.

    2010-01-01

    Finding an object in our environment is an important human ability that also represents a critical component of human foraging behavior. One type of information that aids efficient large-scale search is the likelihood of the object being in one location over another. In this study we investigated the conditions under which individuals respond to…

  6. Newton Methods for Large Scale Problems in Machine Learning

    ERIC Educational Resources Information Center

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…
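
    As a generic illustration of this class of method (not the thesis's specific algorithms), a Newton iteration for minimizing a regularized sum of logistic losses can be sketched as follows; the data and all parameter choices are made up:

      # Newton's method for L2-regularized logistic regression: each step solves
      # H d = g for the Newton direction, with g and H the gradient and Hessian.
      import numpy as np

      def newton_logreg(X, y, lam=1e-3, iters=20):
          n, p = X.shape
          w = np.zeros(p)
          for _ in range(iters):
              mu = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
              g = X.T @ (mu - y) / n + lam * w         # gradient of the mean loss
              S = mu * (1.0 - mu)                      # per-sample curvature
              H = (X.T * S) @ X / n + lam * np.eye(p)  # Hessian: X^T diag(S) X / n
              w -= np.linalg.solve(H, g)               # full Newton step
          return w

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 3))
      y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)  # toy labels
      print(newton_logreg(X, y))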

  7. Large-Scale Physical Separation of Depleted Uranium from Soil

    DTIC Science & Technology

    2012-09-01

    ERDC/EL TR-12-25, Army Range Technology Program: Large-Scale Physical Separation of Depleted Uranium from Soil, Environmental Laboratory. Only front-matter fragments are legible in the source: Separation; Project Background; Materials and Methods.

  8. Computational Complexity, Efficiency and Accountability in Large Scale Teleprocessing Systems.

    DTIC Science & Technology

    1980-12-01

    Computational Complexity, Efficiency and Accountability in Large Scale Teleprocessing Systems. DAAG29-78-C-0036, Stanford University, John T. Gill, Martin E. Hellman. ... solve but easy to check. We have also suggested how such random tapes can be simulated by deterministically generating "pseudorandom" numbers by a

  9. Large-scale silicon optical switches for optical interconnection

    NASA Astrophysics Data System (ADS)

    Qiao, Lei; Tang, Weijie; Chu, Tao

    2016-11-01

    Large-scale optical switches are in great demand for building optical interconnections in data centers and high-performance computers (HPCs). Silicon optical switches have the advantages of being compact and CMOS-process compatible, so they can easily be monolithically integrated. However, constructing silicon optical switches with large port counts is difficult. One obstacle is the non-uniformity of the switch units in large-scale silicon optical switches, which arises from fabrication error and complicates finding each unit's optimum operating point. In this paper, we propose a method to detect the optimum operating point in a large-scale switch with a limited number of built-in power monitors. We also propose methods for improving the unbalanced crosstalk between the cross and bar states in silicon electro-optical MZI switches and for reducing insertion losses. Our recent progress in large-scale silicon optical switches, including 64 × 64 thermo-optical and 32 × 32 electro-optical switches, will be introduced. To the best of our knowledge, both are the largest-scale silicon optical switches in their respective categories. The switches were fabricated on 340-nm SOI substrates with CMOS 180-nm processes. The crosstalk of the 32 × 32 electro-optic switch was -19.2 dB to -25.1 dB, while that of the 64 × 64 thermo-optic switch was -30 dB to -48.3 dB.

  10. The large scale microwave background anisotropy in decaying particle cosmology

    SciTech Connect

    Panek, M.

    1987-06-01

    We investigate the large-scale anisotropy of the microwave background radiation in cosmological models with decaying particles. The observed value of the quadrupole moment combined with other constraints gives an upper limit on the redshift of the decay, z_d < 3-5. 12 refs., 2 figs.

  11. DESIGN OF LARGE-SCALE AIR MONITORING NETWORKS

    EPA Science Inventory

    The potential effects of air pollution on human health have received much attention in recent years. In the U.S. and other countries, there are extensive large-scale monitoring networks designed to collect data to inform the public of exposure risks to air pollution. A major crit...

  12. Large Scale Survey Data in Career Development Research

    ERIC Educational Resources Information Center

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  13. The Large-Scale Structure of Scientific Method

    ERIC Educational Resources Information Center

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  14. Ecosystem resilience despite large-scale altered hydro climatic conditions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Climate change is predicted to increase both drought frequency and duration, and when coupled with substantial warming, will establish a new hydroclimatological paradigm for many regions. Large-scale, warm droughts have recently impacted North America, Africa, Europe, Amazonia, and Australia result...

  15. US National Large-scale City Orthoimage Standard Initiative

    USGS Publications Warehouse

    Zhou, G.; Song, C.; Benjamin, S.; Schickler, W.

    2003-01-01

    The early procedures and algorithms for national digital orthophoto generation in the National Digital Orthophoto Program (NDOP) were based on earlier USGS mapping operations, such as field control, aerotriangulation (derived in the early 1920s), quarter-quadrangle-centered (3.75 minutes of longitude and latitude in geographic extent) 1:40,000 aerial photographs, and 2.5D digital elevation models. However, large-scale city orthophotos produced with these early procedures have disclosed many shortcomings, e.g., ghost images, occlusion, and shadow. Thus, providing the technical basis (algorithms, procedures) and experience needed for large-scale city digital orthophoto creation is essential for the forthcoming national deployment of large-scale digital orthophotos and for the revision of the Standards for National Large-scale City Digital Orthophoto in the National Digital Orthophoto Program (NDOP). This paper reports our initial research results as follows: (1) high-precision 3D city DSM generation through LIDAR data processing; (2) spatial object/feature extraction using surface material information and high-accuracy 3D DSM data; (3) 3D city model development; (4) algorithm development for generation of DTM-based and DBM-based orthophotos; (5) true orthophoto generation by merging DBM-based and DTM-based orthophotos; and (6) automatic mosaicking by optimizing and combining imagery from many perspectives.

  16. Developing and Understanding Methods for Large-Scale Nonlinear Optimization

    DTIC Science & Technology

    2006-07-24

    algorithms for large-scale unconstrained and constrained optimization problems, including limited-memory methods for problems with many thousands ... Published in peer-reviewed journals: E. Eskow, B. Bader, R. Byrd, S. Crivelli, T. Head-Gordon, V. Lamberti and R. Schnabel, "An optimization approach to the

  17. International Large-Scale Assessments: What Uses, What Consequences?

    ERIC Educational Resources Information Center

    Johansson, Stefan

    2016-01-01

    Background: International large-scale assessments (ILSAs) are a much-debated phenomenon in education. Increasingly, their outcomes attract considerable media attention and influence educational policies in many jurisdictions worldwide. The relevance, uses and consequences of these assessments are often the focus of research scrutiny. Whilst some…

  18. Extracting Useful Semantic Information from Large Scale Corpora of Text

    ERIC Educational Resources Information Center

    Mendoza, Ray Padilla, Jr.

    2012-01-01

    Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…

  19. Large scale structure of the sun's radio corona

    NASA Technical Reports Server (NTRS)

    Kundu, M. R.

    1986-01-01

    Results of studies of large scale structures of the corona at long radio wavelengths are presented, using data obtained with the multifrequency radioheliograph of the Clark Lake Radio Observatory. It is shown that features corresponding to coronal streamers and coronal holes are readily apparent in the Clark Lake maps.

  20. Moon-based Earth Observation for Large Scale Geoscience Phenomena

    NASA Astrophysics Data System (ADS)

    Guo, Huadong; Liu, Guang; Ding, Yixing

    2016-07-01

    The capability of Earth observation for large, global-scale natural phenomena needs to be improved, and new observing platforms are expected. We have studied the concept of the Moon as an Earth-observation platform in recent years. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, and it offers the following advantages: a large observation range, variable view angles, long-term continuous observation, and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability and uniqueness. Moon-based Earth observation is suitable for monitoring large-scale geoscience phenomena, including large-scale atmospheric change, ocean change, land-surface dynamic change, and solid-Earth dynamic change. For the purpose of establishing a Moon-based Earth observation platform, we plan to study the following five aspects: mechanisms and models of Moon-based observation of macroscopic Earth-science phenomena; optimization of sensor parameters and methods for Moon-based Earth observation; site selection and the environment of Moon-based Earth observation; the Moon-based Earth observation platform itself; and a fundamental scientific framework for Moon-based Earth observation.

  1. Large-scale screening by the automated Wassermann reaction

    PubMed Central

    Wagstaff, W.; Firth, R.; Booth, J. R.; Bowley, C. C.

    1969-01-01

    In view of the drawbacks in the use of the Kahn test for large-scale screening of blood donors, mainly those of human error through work overload and fatiguability, an attempt was made to adapt an existing automated complement-fixation technique for this purpose. This paper reports the successful results of that adaptation. PMID:5776559

  2. Large-scale societal changes and intentionality - an uneasy marriage.

    PubMed

    Bodor, Péter; Fokas, Nikos

    2014-08-01

    Our commentary focuses on juxtaposing the proposed science of intentional change with facts and concepts pertaining to the level of large populations or changes on a worldwide scale. Although we find a unified evolutionary theory promising, we think that long-term and large-scale, scientifically guided - that is, intentional - social change is not only impossible, but also undesirable.

  3. Chrysler Partners with North Lake High School in an Advanced Manufacturing Technology Program for Special Needs Students.

    ERIC Educational Resources Information Center

    Karbon, Patrick J.; Kuhn, Cynthia

    1996-01-01

    Chrysler Corporation and North Lake High School cooperated to develop and deploy Advanced Manufacturing Technology for high school students identified as at risk or hard to serve. Chrysler provided curriculum that was delivered by training center instructors; teachers ensured student competence in academic areas. (JOW)

  4. ROBOTICALLY ENHANCED ADVANCED MANUFACTURING CONCEPTS TO OPTIMIZE ENERGY, PRODUCTIVITY, AND ENVIRONMENTAL PERFORMANCE

    SciTech Connect

    Larry L. Keller; Joseph M. Pack; Robert V. Kolarik II

    2007-11-05

    In the first phase of the REML project, major assets were acquired for a manufacturing line for follow-on installation, capability studies and optimization. That activity has been documented in the DE-FC36-99ID13819 final report. In this, the second phase of the REML project, most of the major assets have been installed in a manufacturing line arrangement featuring a green cell, a thermal treatment cell and a finishing cell. Most of the secondary and support assets have been acquired and installed. Assets have been integrated with a commercial, machine-tending gantry robot in the thermal treatment cell and with a low-mass, high-speed gantry robot in the finish cell. Capabilities for masterless gauging of product’s dimensional and form characteristics were advanced. Trial production runs across the entire REML line have been undertaken. Discrete event simulation modeling has aided in line balancing and reduction of flow time. Energy, productivity and cost, and environmental comparisons to baselines have been made.

    Energy: The REML line in its current state of development has been measured to be about 22% (338,000 kVA-hrs) less energy intensive than the baseline conventional low volume line, assuming an equivalent annual production volume of approximately 51,000 races. The reduction in energy consumption is largely attributable to the REML thermal treatment cell, where the heating devices are energized on demand and are appropriately sized to the heating load of a near-single-piece-flow line. If additional steps such as power factor correction and use of high-efficiency motors were implemented to further reduce energy consumption, it is estimated, but not yet demonstrated, that the REML line would be about 30% less energy intensive than the baseline conventional low volume line at equivalent annual production volume.

    Productivity: The capital cost of an REML line would be roughly equivalent to the capital cost of a new conventional line. The

  5. Development of advanced manufacturing technologies for low cost hydrogen storage vessels

    SciTech Connect

    Leavitt, Mark; Lam, Patrick

    2014-12-29

    The U.S. Department of Energy (DOE) defined a need for low-cost gaseous hydrogen storage vessels at 700 bar to support cost goals aimed at 500,000 units per year. Existing filament winding processes produce a pressure vessel that is structurally inefficient, requiring more carbon fiber, for manufacturing reasons, than would otherwise be necessary. Carbon fiber is the greatest cost driver in building a hydrogen pressure vessel. The objective of this project is to develop new methods for manufacturing Type IV pressure vessels for hydrogen storage with the purpose of lowering the overall product cost through an innovative hybrid process that optimizes composite usage by combining traditional filament winding (FW) and advanced fiber placement (AFP) techniques. A number of vessels were manufactured in this project. The latest vessel design passed all the critical tests on the hybrid design per the European Commission (EC) 79-2009 standard except the extreme temperature cycle test. The tests passed include the burst test, cycle test, accelerated stress rupture test and drop test. It was discovered that the location where AFP and FW overlap for load transfer could be weakened during hydraulic cycling at 85°C. To design a vessel that passed these tests, the in-house modeling software was updated to add the capability to start and stop fiber layers to simulate the AFP process. The original in-house software was developed for filament winding only. Alternative fiber was also investigated in this project, but the added mass impacted the vessel cost negatively due to the lower performance of the alternative fiber. Overall, the project succeeded in showing that the hybrid design is a viable solution for reducing fiber usage, thus driving down the cost of fuel storage vessels. Based on DOE’s baseline vessel size of 147.3L and 91kg, the 129L vessel (scaled to DOE baseline) in this project shows a 32% composite savings and 20% cost savings when comparing Vessel 15 hybrid design and the Quantum

  6. Final Report - Advanced MEA's for Enhanced Operating Conditions, Amenable to High Volume Manufacture

    SciTech Connect

    Debe, Mark K.

    2007-09-30

    This report summarizes the work completed under a 3M/DOE contract directed at advancing the key fuel cell (FC) components most critical for overcoming the polymer electrolyte membrane fuel cell (PEMFC) performance, durability & cost barriers. This contract focused on the development of advanced ion exchange membranes & electrocatalysts for PEMFCs that will enable operation under ever more demanding automotive operating conditions & the use of high-volume-compatible processes for their manufacture. Higher performing & more durable electrocatalysts must be developed for PEMFCs to meet the power density & lifetime hours required for FC vehicles. At the same time the amount of expensive Pt catalyst must be reduced to lower the MEA costs. While these two properties are met, the catalyst must be made resistant to multiple degradation mechanisms to reach necessary operating lifetimes. In this report, we present the work focused on the development of a completely new approach to PEMFC electrocatalysts, called nanostructured thin film (NSTF) catalysts. The carbon black supports are eliminated with this new approach, which eliminates the carbon corrosion issue. The thin film nature of the catalyst significantly improves its robustness against dissolution & grain growth, preserving the surface area. Also, the activity of the NSTF for oxygen reduction is improved by over 500% compared to dispersed Pt catalysts. Finally, the process for fabricating the NSTF catalysts is consistent with high volume roll-good manufacturing & extremely flexible towards the introduction of new catalyst compositions & structures. This report documents the work done to develop new multi-element NSTF catalysts with properties that exceed pure Pt, that are optimized for use with the membranes discussed below, & advance the state-of-the-art towards meeting the DOE 2010 targets for PEMFC electrocatalysts. The work completed advances the understanding of the NSTF catalyst technology, identifies new NSTF

  7. Report to the President on Ensuring American Leadership in Advanced Manufacturing

    ERIC Educational Resources Information Center

    Anderson, Alan

    2011-01-01

    The United States has long thrived as a result of its ability to manufacture goods and sell them to global markets. Manufacturing activity has supported its economic growth, leading the Nation's exports and employing millions of Americans. The manufacturing sector has also driven knowledge production and innovation in the United States, by…

  8. Software development for the evaluation of the ergonomic compatibility on the selection of advanced manufacturing technology.

    PubMed

    Maldonado-Macías, A; Reyes, R; Guillen, L; García, J

    2012-01-01

    Advanced Manufacturing Technology (AMT) is one of the most relevant resources that companies have to achieve competitiveness and best performance. The selection of AMT is a complex problem that involves a significant amount of information and uncertainty when multiple aspects must be taken into consideration. Existing models for the selection of AMT largely lack the Human Factors and Ergonomics perspective, which can lead to a more complete and reliable decision. This paper presents the development of software that enhances the application of an Ergonomic Compatibility Evaluation Model supporting decision-making processes that take into consideration the ergonomic attributes of designs. Ergonomic compatibility is a construct used in this model, based mainly on the concept of human-artifact compatibility in human-compatible systems. An Axiomatic Design approach using the Information Axiom was also developed under a fuzzy environment to obtain the Ergonomic Incompatibility Content. The extension of this axiom to the evaluation of ergonomic compatibility requirements formed the theoretical framework of this research. An incremental methodology of four stages was used to design and develop the software, which enables comparison of AMT alternatives through the evaluation of ergonomic compatibility attributes.
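
    The paper's fuzzy formulation is not reproduced here, but the Information Axiom it builds on can be sketched: an alternative's information content is I = log2(system area / common area), where the common area is the overlap between the design range and the alternative's (here triangular fuzzy) performance range, and larger I means greater ergonomic incompatibility. A minimal numeric sketch with made-up ratings:

      # Information content of a triangular fuzzy rating against a crisp design
      # range: I = log2(system area / common area); all numbers are invented.
      import numpy as np

      def information_content(tri, design_lo, design_hi, n=10001):
          a, b, c = tri                                 # triangular fuzzy number
          x = np.linspace(a, c, n)
          mu = np.where(x <= b, (x - a) / (b - a), (c - x) / (c - b))
          dx = x[1] - x[0]
          system_area = mu.sum() * dx
          inside = (x >= design_lo) & (x <= design_hi)  # overlap with design range
          common_area = mu[inside].sum() * dx
          if common_area <= 0:
              return float("inf")                       # totally incompatible
          return float(np.log2(system_area / common_area))

      # Two hypothetical AMT alternatives rated on one ergonomic attribute
      # (0-10 scale); the design range demands a rating of at least 6.
      print(information_content((4, 6, 8), 6, 10))      # alternative A
      print(information_content((2, 4, 6), 6, 10))      # alternative B (worse)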

  9. Using advanced manufacturing to produce unmanned aerial vehicles: a feasibility study

    NASA Astrophysics Data System (ADS)

    Easter, Steven; Turman, Jonathan; Sheffler, David; Balazs, Michael; Rotner, Jonathan

    2013-05-01

    This paper reports on a feasibility study to explore the impact of advanced manufacturing on the production and maintenance of a 3D printed, unmanned aerial vehicle (UAV) in theatre. Specifically, this report focuses on fused deposition modeling (FDM), the selective deposition of a molten thermoplastic. FDM is already a forward deployed technology, primarily used for printing custom tools and replacement parts. The authors ask if it is feasible to expand the printers' capacity to produce aerial platforms; the reduction in logistics and labor could significantly decrease costs per unit and enable far more platform customization and specialized deployment scenarios than are available in existing aircraft. The University of Virginia and The MITRE Corporation designed and built a prototype, 3D printed UAV for use as an aerial sensor platform. This report • Discusses the printed aerial platform, summarizes the design process, and compares printing methods • Describes the benefits and limitations to selecting FDM printers as the technology both for deployment as well as UAV design • Concludes with the current state and future expectations for FDM printing technologies relevant to UAV production. Our findings suggest that although 3D printing is not yet entirely field-ready, many of its advantages can already be realized.

  10. Technology-design-manufacturing co-optimization for advanced mobile SoCs

    NASA Astrophysics Data System (ADS)

    Yang, Da; Gan, Chock; Chidambaram, P. R.; Nallapadi, Giri; Zhu, John; Song, S. C.; Xu, Jeff; Yeap, Geoffrey

    2014-03-01

    How to maintain Moore's Law scaling beyond the 193nm-immersion resolution limit is the key question the semiconductor industry needs to answer in the near future. Process complexity will undoubtedly increase at the 14nm node and beyond, which brings both challenges and opportunities for technology development. A vertically integrated design-technology-manufacturing co-optimization flow is desired to better address the complicated issues new process changes bring. In recent years, smart mobile wireless devices have been the fastest-growing consumer electronics market. Advanced mobile devices such as smartphones are complex systems with the overriding objective of providing the best user-experience value by harnessing all the technology innovations. The most critical system drivers are better system performance/power efficiency, cost effectiveness, and smaller form factors, which, in turn, drive the need for system design and solutions with More-than-Moore innovations. Mobile systems-on-chip (SoCs) have become the leading driver for semiconductor technology definition and manufacturing. Here we highlight how the co-optimization strategy has influenced architecture, device/circuit, process technology and packaging, in the face of growing process cost/complexity and variability as well as design rule restrictions.

  11. Advances in the Manufacture of Omega-scale Double-shell Targets

    NASA Astrophysics Data System (ADS)

    Bono, M.

    2005-10-01

    The double-shell ignition target design consists of a low-Z outer shell that absorbs hohlraum-generated x-rays, implodes, and collides with a high-Z inner shell containing DT fuel. Efforts are continuing to field scaled ignition-like double shells on the Omega laser facility over a range of inner-shell Z. Previous ignition-like double-shell implosions on Omega used a low-Z CH inner shell [1]. The current target contains a higher-Z glass inner shell of diameter 216 microns, which is supported by SiO2 aerogel inside a Br-doped CH ablator shell of diameter 550 microns. Fielding double-shell targets has historically been limited by the ability to successfully fabricate them, but several technological advances have recently been made in the manufacturing process. The inner capsule will be cast in SiO2 aerogel of density 50 mg/cc, whose outer contour will be machined concentric to the inner capsule. This piece will then be assembled between two hemispherical ablator shells that mate at a step-joint with an adhesive-filled gap of thickness 100 nm. Three-dimensional tomographs made of each target using an x-ray micro-tomography system will allow precise characterization of the targets. [1] P. Amendt et al., Phys. Rev. Lett. 94, 065004 (2005).

  12. Large-scale, heterogeneous integration of nanowire arrays for image sensor circuitry.

    PubMed

    Fan, Zhiyong; Ho, Johnny C; Jacobson, Zachery A; Razavi, Haleh; Javey, Ali

    2008-08-12

    We report large-scale integration of nanowires for heterogeneous, multifunctional circuitry that utilizes both the sensory and electronic functionalities of single-crystalline nanomaterials. Highly ordered and parallel arrays of optically active CdSe nanowires and high-mobility Ge/Si nanowires are deterministically positioned on substrates and configured as photodiodes and transistors, respectively. The nanowire sensors and electronic devices are then interfaced to enable all-nanowire circuitry with on-chip integration, capable of detecting and amplifying an optical signal with high sensitivity and precision. Notably, the process is highly reproducible and scalable, with a yield of approximately 80% functional circuits, thereby enabling the fabrication of large arrays (i.e., 13 x 20) of nanowire photosensor circuitry with image-sensing functionality. The ability to interface nanowire sensors with integrated electronics on large scales and with high uniformity presents an important advance toward the integration of nanomaterials for sensor applications.

  13. Large-scale image-based screening and profiling of cellular phenotypes.

    PubMed

    Bougen-Zhukov, Nicola; Loh, Sheng Yang; Lee, Hwee Kuan; Loo, Lit-Hsin

    2017-02-01

    Cellular phenotypes are observable characteristics of cells resulting from the interactions of intrinsic and extrinsic chemical or biochemical factors. Image-based phenotypic screens under large numbers of basal or perturbed conditions can be used to study the influences of these factors on cellular phenotypes. Hundreds to thousands of phenotypic descriptors can also be quantified from the images of cells under each of these experimental conditions. Therefore, huge amounts of data can be generated, and the analysis of these data has become a major bottleneck in large-scale phenotypic screens. Here, we review current experimental and computational methods for large-scale image-based phenotypic screens. Our focus is on phenotypic profiling, a computational procedure for constructing quantitative and compact representations of cellular phenotypes based on the images collected in these screens.

  14. The Large Scale Synthesis of Aligned Plate Nanostructures

    NASA Astrophysics Data System (ADS)

    Zhou, Yang; Nash, Philip; Liu, Tian; Zhao, Naiqin; Zhu, Shengli

    2016-07-01

    We propose a novel technique for the large-scale synthesis of aligned-plate nanostructures that are self-assembled and self-supporting. The synthesis technique involves developing nanoscale two-phase microstructures through discontinuous precipitation followed by selective etching to remove one of the phases. The method may be applied to any alloy system in which the discontinuous precipitation transformation goes to completion. The resulting structure may have many applications in catalysis, filtering and thermal management depending on the phase selection and added functionality through chemical reaction with the retained phase. The synthesis technique is demonstrated using the discontinuous precipitation of a γ′ phase, (Ni, Co)3Al, followed by selective dissolution of the γ matrix phase. The production of the nanostructure requires heat treatments on the order of minutes and can be performed on a large scale making this synthesis technique of great economic potential.

  15. Large-scale linear nonparallel support vector machine solver.

    PubMed

    Tian, Yingjie; Ping, Yuan

    2014-02-01

    Twin support vector machines (TWSVMs), as the representative nonparallel hyperplane classifiers, have shown effectiveness over standard SVMs in some respects. However, they still have some serious defects restricting their further study and real applications: (1) they have to compute and store the inverse matrices before training, which is intractable for many applications where data appear with a huge number of instances as well as features; (2) TWSVMs lose sparseness by using a quadratic loss function that makes the proximal hyperplane close enough to the class itself. This paper proposes a sparse linear nonparallel support vector machine, termed L1-NPSVM, to deal with large-scale data, based on an efficient solver, the dual coordinate descent (DCD) method. Both theoretical analysis and experiments indicate that our method is not only suitable for large-scale problems but also performs as well as TWSVMs and SVMs.
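    For concreteness, the dual coordinate descent solver on which L1-NPSVM builds can be sketched for a plain L1-loss linear SVM (a generic DCD sketch in the style of Hsieh et al.; it is not the authors' exact nonparallel-hyperplane formulation):

    import numpy as np

    def dcd_linear_svm(X, y, C=1.0, epochs=20, seed=0):
        """Dual coordinate descent for an L1-loss linear SVM (no bias term).

        X: (n, d) data matrix; y: labels in {-1, +1}. Each pass updates one
        dual variable at a time while maintaining the primal vector w.
        """
        rng = np.random.default_rng(seed)
        n, d = X.shape
        alpha = np.zeros(n)
        w = np.zeros(d)
        Qii = np.einsum("ij,ij->i", X, X)         # per-example squared norms
        for _ in range(epochs):
            for i in rng.permutation(n):
                G = y[i] * (X[i] @ w) - 1.0       # gradient of the dual at i
                new = min(max(alpha[i] - G / Qii[i], 0.0), C)
                w += (new - alpha[i]) * y[i] * X[i]
                alpha[i] = new
        return w

    # Toy usage: two separable Gaussian blobs.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
    y = np.hstack([-np.ones(50), np.ones(50)])
    w = dcd_linear_svm(X, y)
    print((np.sign(X @ w) == y).mean())           # training accuracy

    Because w is updated incrementally, no kernel or inverse matrix is ever stored, which is what makes this family of solvers attractive at large scale.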

  16. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    SciTech Connect

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
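    The prototype idea, approximating the kernel matrix through a small set of representative points, is closely related to the Nyström method; the following is a minimal sketch under that interpretation (random prototype selection stands in for the paper's own selection criteria):

    import numpy as np

    def rbf(A, B, gamma=0.5):
        """RBF kernel matrix between the rows of A and the rows of B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def nystrom_approx(X, prototypes, gamma=0.5):
        """Low-rank kernel approximation K ~ K_nm K_mm^+ K_mn via prototypes."""
        Knm = rbf(X, prototypes, gamma)            # n x m cross-kernel
        Kmm = rbf(prototypes, prototypes, gamma)   # m x m prototype kernel
        return Knm @ np.linalg.pinv(Kmm) @ Knm.T

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    protos = X[rng.choice(500, 20, replace=False)]   # 20 random prototypes
    err = (np.linalg.norm(rbf(X, X) - nystrom_approx(X, protos))
           / np.linalg.norm(rbf(X, X)))
    print(err)                                       # relative approximation error

    With m prototypes and n data points, only the n x m and m x m blocks are ever formed, which is the source of the scaling advantage.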

  17. Electron drift in a large scale solid xenon

    SciTech Connect

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

    A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. Furthermore, an electron drift speed a factor of two faster is demonstrated in solid xenon than in liquid xenon at this large scale.

  18. Instrumentation Development for Large Scale Hypersonic Inflatable Aerodynamic Decelerator Characterization

    NASA Technical Reports Server (NTRS)

    Swanson, Gregory T.; Cassell, Alan M.

    2011-01-01

    Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology is currently being considered for multiple atmospheric entry applications as the limitations of traditional entry vehicles have been reached. The Inflatable Re-entry Vehicle Experiment (IRVE) has successfully demonstrated this technology as a viable candidate with the sub-orbital flight of a 3.0 m diameter vehicle. To further this technology, large scale HIADs (6.0-8.5 m) must be developed and tested. To characterize the performance of large scale HIAD technology, new instrumentation concepts must be developed to accommodate the flexible nature of the inflatable aeroshell. Many of the concepts under consideration for the HIAD FY12 subsonic wind tunnel test series are discussed below.

  19. The workshop on iterative methods for large scale nonlinear problems

    SciTech Connect

    Walker, H.F.; Pernice, M.

    1995-12-01

    The aim of the workshop was to bring together researchers working on large scale applications with numerical specialists of various kinds. Applications that were addressed included reactive flows (combustion and other chemically reacting flows, tokamak modeling), porous media flows, cardiac modeling, chemical vapor deposition, image restoration, macromolecular modeling, and population dynamics. Numerical areas included Newton iterative (truncated Newton) methods, Krylov subspace methods, domain decomposition and other preconditioning methods, large scale optimization and optimal control, and parallel implementations and software. This report offers a brief summary of workshop activities and information about the participants. Interested readers are encouraged to look into an online proceedings available at http://www.usi.utah.edu/logan.proceedings. There, the material offered here is augmented with hypertext abstracts that include links to locations such as speakers' home pages, PostScript copies of talks and papers, cross-references to related talks, and other information about topics addressed at the workshop.

  20. GAIA: A WINDOW TO LARGE-SCALE MOTIONS

    SciTech Connect

    Nusser, Adi; Branchini, Enzo; Davis, Marc E-mail: branchin@fis.uniroma3.it

    2012-08-10

    Using redshifts as a proxy for galaxy distances, estimates of the two-dimensional (2D) transverse peculiar velocities of distant galaxies could be obtained from future measurements of proper motions. We provide the mathematical framework for analyzing 2D transverse motions and show that they offer several advantages over traditional probes of large-scale motions. They are completely independent of any intrinsic relations between galaxy properties; hence, they are essentially free of selection biases. They are free from the homogeneous and inhomogeneous Malmquist biases that typically plague distance indicator catalogs. They provide information additional to that of traditional probes, which yield line-of-sight peculiar velocities only. Further, because of their 2D nature, fundamental questions regarding the vorticity of large-scale flows can be addressed. Gaia, for example, is expected to provide proper motions of at least bright galaxies with high central surface brightness, making proper motions a likely competitor to traditional probes based on current and future distance indicator measurements.

  1. The Large Scale Synthesis of Aligned Plate Nanostructures

    PubMed Central

    Zhou, Yang; Nash, Philip; Liu, Tian; Zhao, Naiqin; Zhu, Shengli

    2016-01-01

    We propose a novel technique for the large-scale synthesis of aligned-plate nanostructures that are self-assembled and self-supporting. The synthesis technique involves developing nanoscale two-phase microstructures through discontinuous precipitation followed by selective etching to remove one of the phases. The method may be applied to any alloy system in which the discontinuous precipitation transformation goes to completion. The resulting structure may have many applications in catalysis, filtering and thermal management depending on the phase selection and added functionality through chemical reaction with the retained phase. The synthesis technique is demonstrated using the discontinuous precipitation of a γ′ phase, (Ni, Co)3Al, followed by selective dissolution of the γ matrix phase. The production of the nanostructure requires heat treatments on the order of minutes and can be performed on a large scale making this synthesis technique of great economic potential. PMID:27439672

  2. Long gradient mode and large-scale structure observables

    NASA Astrophysics Data System (ADS)

    Allahyari, Alireza; Firouzjaee, Javad T.

    2017-03-01

    We extend the study of long-mode perturbations to other large-scale observables such as cosmic rulers, galaxy-number counts, and halo bias. The long mode is a pure gradient mode that is still outside an observer's horizon. We insist that gradient-mode effects on observables vanish. It is also crucial that the expressions for observables are relativistic. This allows us to show that the effects of a gradient mode on the large-scale observables vanish identically in a relativistic framework. To study the potential modulation effect of the gradient mode on halo bias, we derive a consistency condition to first order in the gradient expansion. We find that the matter variance at a fixed physical scale is not modulated by long gradient mode perturbations when the consistency condition holds. This shows that the contribution of long gradient modes to the bias vanishes in this framework.

  3. Lagrangian space consistency relation for large scale structure

    SciTech Connect

    Horn, Bart; Hui, Lam; Xiao, Xiao E-mail: lh399@columbia.edu

    2015-09-01

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.
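    For reference, the Eulerian-space squeezed-limit relation discussed here is commonly written schematically as follows (this follows the literature's conventions rather than quoting the paper; D is the linear growth factor, and primes denote correlators with the momentum-conserving delta function stripped):

    \lim_{\mathbf{q} \to 0}
      \frac{\big\langle \delta_{\mathbf{q}}(\eta)\, \delta_{\mathbf{k}_1}(\eta_1) \cdots \delta_{\mathbf{k}_N}(\eta_N) \big\rangle'}{P_\delta(q,\eta)}
      = -\sum_{i=1}^{N} \frac{D(\eta_i)}{D(\eta)}\,
        \frac{\mathbf{k}_i \cdot \mathbf{q}}{q^{2}}\,
        \big\langle \delta_{\mathbf{k}_1}(\eta_1) \cdots \delta_{\mathbf{k}_N}(\eta_N) \big\rangle'

    The Lagrangian-space statement of the abstract is that the suitably normalized left-hand side vanishes when correlations are tracked along fluid trajectories rather than at fixed Eulerian positions.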

  4. Large Scale Deformation of the Western US Cordillera

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.

    2001-01-01

    Destructive earthquakes occur throughout the western US Cordillera (WUSC), not just within the San Andreas fault zone. However, because we do not understand the present-day large-scale deformations of the crust throughout the WUSC, our ability to assess the potential for seismic hazards in this region remains severely limited. To address this problem, we are using a large collection of Global Positioning System (GPS) networks that spans the WUSC to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our work can roughly be divided into an analysis of the GPS observations to infer the deformation field across and within the entire plate boundary zone and an investigation of the implications of this deformation field regarding plate boundary dynamics.

  5. Electron drift in a large scale solid xenon

    DOE PAGES

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

    A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. Furthermore, an electron drift speed a factor of two faster is demonstrated in solid xenon than in liquid xenon at this large scale.

  6. LARGE-SCALE MOTIONS IN THE PERSEUS GALAXY CLUSTER

    SciTech Connect

    Simionescu, A.; Werner, N.; Urban, O.; Allen, S. W.; Fabian, A. C.; Sanders, J. S.; Mantz, A.; Nulsen, P. E. J.; Takei, Y.

    2012-10-01

    By combining large-scale mosaics of ROSAT PSPC, XMM-Newton, and Suzaku X-ray observations, we present evidence for large-scale motions in the intracluster medium of the nearby, X-ray bright Perseus Cluster. These motions are suggested by several alternating and interleaved X-ray bright, low-temperature, low-entropy arcs located along the east-west axis, at radii ranging from ~10 kpc to over a Mpc. Thermodynamic features qualitatively similar to these have previously been observed in the centers of cool-core clusters, and were successfully modeled as a consequence of the gas sloshing/swirling motions induced by minor mergers. Our observations indicate that such sloshing/swirling can extend out to larger radii than previously thought, on scales approaching the virial radius.

  7. The CLASSgal code for relativistic cosmological large scale structure

    SciTech Connect

    Dio, Enea Di; Montanari, Francesco; Durrer, Ruth; Lesgourgues, Julien E-mail: Francesco.Montanari@unige.ch E-mail: Ruth.Durrer@unige.ch

    2013-11-01

    We present accurate and efficient computations of large scale structure observables, obtained with a modified version of the CLASS code which is made publicly available. This code includes all relativistic corrections and computes both the power spectrum C_ℓ(z_1, z_2) and the corresponding correlation function ξ(θ, z_1, z_2) of the matter density and the galaxy number fluctuations in linear perturbation theory. For Gaussian initial perturbations, these quantities contain the full information encoded in the large scale matter distribution at the level of linear perturbation theory. We illustrate the usefulness of our code for cosmological parameter estimation through a few simple examples.
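    As a usage illustration, the public CLASS code is scriptable through its Python wrapper classy; a minimal sketch for galaxy number-count spectra follows (the parameter names follow recent public CLASS releases and may differ in the CLASSgal branch, so treat them as assumptions):

    # Sketch: number-count angular spectra C_l(z_i, z_j) via classy.
    from classy import Class

    cosmo = Class()
    cosmo.set({
        "output": "nCl",                      # galaxy number-count spectra
        "selection": "gaussian",              # redshift window shape
        "selection_mean": "0.5, 1.0",         # two redshift bins
        "selection_width": "0.1",
        "number count contributions": "density, rsd, lensing, gr",  # relativistic terms
        "l_max_lss": 500,
    })
    cosmo.compute()
    cl = cosmo.density_cl(500)                # dict with 'ell' and bin-pair spectra
    print(cl["ell"][:5], cl["dd"][0][:5])     # auto-spectrum of the first bin
    cosmo.struct_cleanup()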

  8. A Cloud Computing Platform for Large-Scale Forensic Computing

    NASA Astrophysics Data System (ADS)

    Roussev, Vassil; Wang, Liqiang; Richard, Golden; Marziale, Lodovico

    The timely processing of massive digital forensic collections demands the use of large-scale distributed computing resources and the flexibility to customize the processing performed on the collections. This paper describes MPI MapReduce (MMR), an open implementation of the MapReduce processing model that outperforms traditional forensic computing techniques. MMR provides linear scaling for CPU-intensive processing and super-linear scaling for indexing-related workloads.
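    To make the processing model concrete, here is a toy, single-process rendering of the MapReduce pattern that MMR distributes over MPI ranks (a word-count illustration, not MMR's actual API):

    from collections import defaultdict

    def map_phase(documents):
        """Map step: emit (word, 1) pairs from each document."""
        for doc in documents:
            for word in doc.split():
                yield word, 1

    def reduce_phase(pairs):
        """Reduce step: sum the counts per key; MMR spreads this over ranks."""
        grouped = defaultdict(int)
        for key, value in pairs:
            grouped[key] += value
        return dict(grouped)

    docs = ["evidence file one", "file two evidence evidence"]
    print(reduce_phase(map_phase(docs)))   # {'evidence': 3, 'file': 2, ...}

    In a distributed setting the map outputs are partitioned by key across workers before the reduce step, which is where indexing-type forensic workloads gain their scaling.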

  9. Large-Scale Weather Disturbances in Mars’ Southern Extratropics

    NASA Astrophysics Data System (ADS)

    Hollingsworth, Jeffery L.; Kahre, Melinda A.

    2015-11-01

    Between late autumn and early spring, Mars’ middle and high latitudes within its atmosphere support strong mean thermal gradients between the tropics and poles. Observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) indicate that this strong baroclinicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). These extratropical weather disturbances are key components of the global circulation. Such wave-like disturbances act as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of large-scale, traveling extratropical synoptic-period disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively lifted and radiatively active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, the atmosphere is dustier during southern spring and summer). Compared to their northern-hemisphere counterparts, southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are examined. Simulations that adopt Mars’ full topography, compared to simulations that utilize synthetic topographies emulating key large-scale features of the southern middle latitudes, indicate that Mars’ transient barotropic/baroclinic eddies are highly influenced by the great impact basins of this hemisphere (e.g., Argyre and Hellas). The occurrence of a southern storm zone in late winter and early spring appears to be anchored to the western hemisphere via orographic influences from the Tharsis highlands and the Argyre impact basin.

  10. Health-Terrain: Visualizing Large Scale Health Data

    DTIC Science & Technology

    2015-04-01

    Award Number: W81XWH-13-1-0020. Title: Health-Terrain: Visualizing Large Scale Health Data. Principal Investigator: Shiaofen Fang, Ph.D. Annual report covering 7 March 2014 – 6 March 2015. The reported work includes (1) creating a concept space data model, which represents a schema tailored to support diverse visualizations and provides a uniform ontology.

  11. A Holistic Management Architecture for Large-Scale Adaptive Networks

    DTIC Science & Technology

    2007-09-01

    A Holistic Management Architecture for Large-Scale Adaptive Networks, by Michael R. Clement. Master's thesis in technology management, Naval Postgraduate School, September 2007; thesis advisor: Alex Bordetsky.

  12. Large-scale detection of recombination in nucleotide sequences

    NASA Astrophysics Data System (ADS)

    Chan, Cheong Xin; Beiko, Robert G.; Ragan, Mark A.

    2008-01-01

    Genetic recombination following a genetic transfer event can produce heterogeneous phylogenetic histories within sets of genes that share a common ancestral origin. Delineating recombination events will enhance our understanding of genome evolution. However, the task of detecting recombination is not trivial, because more recent evolutionary changes can obscure such events from detection. In this paper, we demonstrate the use of a two-phase strategy for detecting recombination events on a large-scale dataset.

  13. Multimodel Design of Large Scale Systems with Multiple Decision Makers.

    DTIC Science & Technology

    1982-08-01

    Multimodel Design of Large Scale Systems with Multiple Decision Makers. Doctoral thesis; the author acknowledges the guidance of Professors W. R. Perkins, P. V. Kokotovic, T. Basar, and T. N. Trick. The thesis concludes with Chapter 7, which summarizes the results obtained, outlines the main contributions, and indicates directions for future research.

  14. Turbulent amplification of large-scale magnetic fields

    NASA Technical Reports Server (NTRS)

    Montgomery, D.; Chen, H.

    1984-01-01

    Previously-introduced methods for analytically estimating the effects of small-scale turbulent fluctuations on large-scale dynamics are extended to fully three-dimensional magnetohydrodynamics. The problem becomes algebraically tractable in the presence of sufficiently large spectral gaps. The calculation generalizes 'alpha dynamo' calculations, except that the velocity fluctuations and magnetic fluctuations are treated on an independent and equal footing. Earlier expressions for the 'alpha coefficients' of turbulent magnetic field amplification are recovered as a special case.
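    For context, the 'alpha coefficients' enter the mean-field induction equation, which such calculations generalize; in its standard schematic form (textbook dynamo theory, not quoted from this record):

    \frac{\partial \langle \mathbf{B} \rangle}{\partial t}
      = \nabla \times \left( \langle \mathbf{v} \rangle \times \langle \mathbf{B} \rangle
        + \alpha \langle \mathbf{B} \rangle \right)
      + (\eta + \beta)\, \nabla^{2} \langle \mathbf{B} \rangle

    where α arises from helical small-scale fluctuations and β acts as a turbulent diffusivity; in the work summarized here, velocity and magnetic fluctuations contribute to these coefficients on an equal footing.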

  15. Space transportation booster engine thrust chamber technology, large scale injector

    NASA Technical Reports Server (NTRS)

    Schneider, J. A.

    1993-01-01

    The objective of the Large Scale Injector (LSI) program was to deliver a 21 inch diameter, 600,000 lbf thrust class injector to NASA/MSFC for hot fire testing. The hot fire test program would demonstrate the feasibility and integrity of the full scale injector, including combustion stability, chamber wall compatibility (thermal management), and injector performance. The 21 inch diameter injector was delivered in September of 1991.

  16. Relic vector field and CMB large scale anomalies

    SciTech Connect

    Chen, Xingang; Wang, Yi E-mail: yw366@cam.ac.uk

    2014-10-01

    We study the most general effects of relic vector fields on the inflationary background and density perturbations. Such effects are observable if the number of inflationary e-folds is close to the minimum requirement to solve the horizon problem. We show that this can potentially explain two CMB large scale anomalies: the quadrupole-octopole alignment and the quadrupole power suppression. We discuss its effect on the parity anomaly. We also provide analytical template for more detailed data comparison.

  17. The large-scale anisotropy with the PAMELA calorimeter

    NASA Astrophysics Data System (ADS)

    Karelin, A.; Adriani, O.; Barbarino, G.; Bazilevskaya, G.; Bellotti, R.; Boezio, M.; Bogomolov, E.; Bongi, M.; Bonvicini, V.; Bottai, S.; Bruno, A.; Cafagna, F.; Campana, D.; Carbone, R.; Carlson, P.; Casolino, M.; Castellini, G.; De Donato, C.; De Santis, C.; De Simone, N.; Di Felice, V.; Formato, V.; Galper, A.; Koldashov, S.; Koldobskiy, S.; Krut'kov, S.; Kvashnin, A.; Leonov, A.; Malakhov, V.; Marcelli, L.; Martucci, M.; Mayorov, A.; Menn, W.; Mergé, M.; Mikhailov, V.; Mocchiutti, E.; Monaco, A.; Mori, N.; Munini, R.; Osteria, G.; Palma, F.; Panico, B.; Papini, P.; Pearce, M.; Picozza, P.; Ricci, M.; Ricciarini, S.; Sarkar, R.; Simon, M.; Scotti, V.; Sparvoli, R.; Spillantini, P.; Stozhkov, Y.; Vacchi, A.; Vannuccini, E.; Vasilyev, G.; Voronov, S.; Yurkin, Y.; Zampa, G.; Zampa, N.

    2015-10-01

    The large-scale anisotropy (or the so-called star-diurnal wave) has been studied using the calorimeter of the space-borne experiment PAMELA. The cosmic ray anisotropy has been obtained for the Southern and Northern hemispheres simultaneously in the equatorial coordinate system for the time period 2006-2014. The dipole amplitude and phase have been measured for energies of 1-20 TeV n⁻¹.

  18. Supporting large scale applications on networks of workstations

    NASA Technical Reports Server (NTRS)

    Cooper, Robert; Birman, Kenneth P.

    1989-01-01

    Distributed applications on networks of workstations are an increasingly common way to satisfy computing needs. However, existing mechanisms for distributed programming exhibit poor performance and reliability as application size increases. Extension of the ISIS distributed programming system to support large scale distributed applications by providing hierarchical process groups is discussed. Incorporation of hierarchy in the program structure and exploitation of this to limit the communication and storage required in any one component of the distributed system is examined.

  19. The Phoenix series large scale LNG pool fire experiments.

    SciTech Connect

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the safety of the public and property in the event of accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data are much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards of a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate, and therefore the physics and hazards of large LNG spills and fires.

  20. Information Tailoring Enhancements for Large Scale Social Data

    DTIC Science & Technology

    2016-03-15

    Progress Report No. 2, reporting period December 16, 2015 – March 15, 2016; Contract No. N00014-15-P-5138, sponsored by ONR; submitted by Intelligent Automation Incorporated. We improved the (i) messaging architecture, (ii) data redundancy, and (iii) service availability of the Scraawl computational framework.

  1. Host Immunity via Mutable Virtualized Large-Scale Network Containers

    DTIC Science & Technology

    2016-07-25

    system for host immunity that combines virtualization, emulation, and mutable network configurations. This system is deployed on a single host, and... entire IPv4 address space within 45 minutes from a single machine. Second, when... URL, and we call it a URL marker. A URL marker records information about its parent web page's URL and the ID of the user who collected the URL. Thus, when

  2. Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism,

    DTIC Science & Technology

    1985-10-07

    Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism. G. Agha et al., Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 545 Technology Square, Cambridge.

  3. Developing and Understanding Methods for Large Scale Nonlinear Optimization

    DTIC Science & Technology

    2001-12-01

    development of new algorithms for large-scale unconstrained and constrained optimization problems, including limited-memory methods for problems with... analysis of tensor and SQP methods for singular constrained optimization", to appear in SIAM Journal on Optimization. Published in peer-reviewed... Mathematica, Vol. III, Journal der Deutschen Mathematiker-Vereinigung, 1998. S. Crivelli, B. Bader, R. Byrd, E. Eskow, V. Lamberti, R. Schnabel and T

  4. Large-scale Alfvén vortices

    SciTech Connect

    Onishchenko, O. G.; Horton, W.; Scullion, E.; Fedun, V.

    2015-12-15

    A new type of large-scale vortex structure of dispersionless Alfvén waves in collisionless plasma is investigated. It is shown that Alfvén waves can propagate in the form of Alfvén vortices of finite characteristic radius, characterised by magnetic flux ropes carrying orbital angular momentum. The structures of the toroidal and radial velocity, the fluid and magnetic field vorticity, and the longitudinal electric current in the plane orthogonal to the external magnetic field are discussed.

  5. Analysis plan for 1985 large-scale tests. Technical report

    SciTech Connect

    McMullan, F.W.

    1983-01-01

    The purpose of this effort is to assist DNA in planning for large-scale (upwards of 5000 tons) detonations of conventional explosives in the 1985 and beyond time frame. Primary research objectives were to investigate potential means to increase blast duration and peak pressures. This report identifies and analyzes several candidate explosives. It examines several charge designs and identifies advantages and disadvantages of each. Other factors including terrain and multiburst techniques are addressed as are test site considerations.

  6. Equivalent common path method in large-scale laser comparator

    NASA Astrophysics Data System (ADS)

    He, Mingzhao; Li, Jianshuang; Miao, Dongjing

    2015-02-01

    The large-scale laser comparator is the main standard device providing accurate, reliable and traceable measurements for high-precision large-scale line and 3D measurement instruments. It is mainly composed of a guide rail, a motion control system, an environmental parameter monitoring system and a displacement measurement system. In the laser comparator, the main error sources are the temperature distribution, the straightness of the guide rail, and the pitch and yaw of the measuring carriage. To minimize the measurement uncertainty, an equivalent common optical path scheme is proposed and implemented. Three laser interferometers are adjusted to be parallel to the guide rail. The displacement in an arbitrary virtual optical path is calculated using the three measured displacements, without knowledge of the carriage orientations at the start and end positions. The orientation of the air-floating carriage is calculated from the displacements of the three optical paths and the positions of the three retroreflectors, which are precisely measured by a Laser Tracker. A fourth laser interferometer is used in the virtual optical path as a reference to verify this compensation method. This paper analyzes the effect of rail straightness on the displacement measurement. The proposed method, through experimental verification, can improve the measurement uncertainty of the large-scale laser comparator.
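    The geometric core of the method, recovering the displacement of an arbitrary virtual optical path from three parallel interferometer readings, can be sketched as a plane fit (a minimal sketch; the geometry and readings below are hypothetical, not the authors' code):

    import numpy as np

    def virtual_path_displacement(yz_positions, displacements, yz_virtual):
        """Interpolate the displacement of an arbitrary virtual optical path.

        Three interferometer axes parallel to the rail pierce the transverse
        plane at `yz_positions` (3x2). For a rigid carriage, displacement is
        an affine function d(y, z) = a + b*y + c*z; fit it and evaluate it
        anywhere. The coefficients b and c also encode carriage yaw and pitch.
        """
        A = np.column_stack([np.ones(3), yz_positions])   # [1, y, z] design matrix
        a, b, c = np.linalg.solve(A, displacements)
        y, z = yz_virtual
        return a + b * y + c * z

    # Hypothetical geometry (metres) and readings (millimetres):
    yz = np.array([[0.0, 0.0], [0.3, 0.0], [0.0, 0.2]])
    d = np.array([100.000, 100.012, 99.996])
    print(virtual_path_displacement(yz, d, (0.15, 0.10)))  # virtual-axis displacement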

  7. Dispersal Mutualism Incorporated into Large-Scale, Infrequent Disturbances

    PubMed Central

    Parker, V. Thomas

    2015-01-01

    Because of their influence on succession and other community interactions, large-scale, infrequent natural disturbances also should play a major role in mutualistic interactions. Using field data and experiments, I test whether mutualisms have been incorporated into large-scale wildfire by whether the outcomes of a mutualism depend on disturbance. In this study a seed dispersal mutualism is shown to depend on infrequent, large-scale disturbances. A dominant shrubland plant (Arctostaphylos species) produces seeds that make up a persistent soil seed bank and requires fire to germinate. In post-fire stands, I show that seedlings emerging from rodent caches dominate sites experiencing higher fire intensity. Field experiments show that rodents (Peromyscus californicus, P. boylii) do cache Arctostaphylos fruit and bury most seed caches to a sufficient depth to survive a killing heat pulse that a fire might drive into the soil. While the rodent dispersal and caching behavior itself has not changed compared to other habitats, the environmental transformation caused by wildfire converts the caching burial of seed from a dispersal process to a plant fire adaptive trait, and provides the context for stimulating subsequent life history evolution in the plant host. PMID:26151560

  8. Impact of Large-scale Geological Architectures On Recharge

    NASA Astrophysics Data System (ADS)

    Troldborg, L.; Refsgaard, J. C.; Engesgaard, P.; Jensen, K. H.

    Geological and hydrogeological data constitute the basis for assessment of groundwater flow patterns and recharge zones. The accessibility and applicability of hard geological data is often a major obstacle in deriving plausible conceptual models. Nevertheless, focus is often on parameter uncertainty caused by the effect of geological heterogeneity due to lack of hard geological data, thus neglecting the possibility of alternative conceptualizations of the large-scale geological architecture. For a catchment in the eastern part of Denmark we have constructed different geological models based on different conceptualizations of the major geological trends and facies architecture. The geological models are equally plausible in a conceptual sense and they are all calibrated to well head and river flow measurements. Comparison of differences in recharge zones, and subsequently well protection zones, emphasizes the importance of assessing large-scale geological architecture in hydrological modeling on a regional scale in a non-deterministic way. Geostatistical modeling carried out in a transition probability framework shows the possibility of assessing multiple realizations of large-scale geological architecture from a combination of soft and hard geological information.

  9. Line segment extraction for large scale unorganized point clouds

    NASA Astrophysics Data System (ADS)

    Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan

    2015-04-01

    Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.

  10. Reliability assessment for components of large scale photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Ahadi, Amir; Ghadimi, Noradin; Mirabbasi, Davar

    2014-10-01

    Photovoltaic (PV) systems have shifted significantly from independent power generation systems to large-scale grid-connected generation systems in recent years. The power output of PV systems is affected by the reliability of the various components in the system. This study proposes an analytical approach to evaluate the reliability of large-scale, grid-connected PV systems. The fault tree method with an exponential probability distribution function is used to analyze the components of large-scale PV systems. The system is considered in its various sequential and parallel fault combinations in order to find all realistic ways in which the top or undesired events can occur. Additionally, the method can identify areas on which planned maintenance should focus. By monitoring the critical components of a PV system, it is possible not only to improve the reliability of the system, but also to optimize the maintenance costs; the latter is achieved by informing the operators about the status of the system components. This approach can be used to ensure secure operation of the system owing to its flexibility in monitoring system applications. The implementation demonstrates that the proposed method is effective and efficient and can conveniently incorporate additional system maintenance plans and diagnostic strategies.
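    A minimal sketch of the fault tree arithmetic with exponential failure laws (the subsystem structure and failure rates below are hypothetical, not taken from the study):

    import math

    def p_fail(lam, t):
        """Failure probability under an exponential law: 1 - exp(-λt)."""
        return 1.0 - math.exp(-lam * t)

    def gate_and(*p):
        """AND gate: the output event occurs only if every input event occurs."""
        out = 1.0
        for x in p:
            out *= x
        return out

    def gate_or(*p):
        """OR gate (independent inputs): 1 - prod(1 - p_i)."""
        out = 1.0
        for x in p:
            out *= (1.0 - x)
        return 1.0 - out

    # Hypothetical PV subsystem over t = 8760 h (one year); λ in failures/hour.
    t = 8760.0
    inverter = p_fail(3e-5, t)
    string_a = gate_or(p_fail(1e-6, t), p_fail(2e-6, t))   # module or cabling fails
    string_b = gate_or(p_fail(1e-6, t), p_fail(2e-6, t))
    both_strings = gate_and(string_a, string_b)            # redundant strings
    top_event = gate_or(inverter, both_strings)            # "no power delivered"
    print(f"P(top event within a year) = {top_event:.4f}")

    Walking the tree in this way also exposes which components dominate the top-event probability, which is exactly the information a maintenance plan would prioritize.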

  11. Learning Short Binary Codes for Large-scale Image Retrieval.

    PubMed

    Liu, Li; Yu, Mengyang; Shao, Ling

    2017-03-01

    Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms have proved to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually of code length shorter than 100 bits) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of the data, MCR can generate a one-bit binary code for each dimension and simultaneously rank the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve performance comparable to the state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
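    A rough sketch of the one-bit-per-dimension idea (the ranking below uses a simple variance surrogate in place of MCR's actual cost function, which is defined in the paper):

    import numpy as np

    def short_binary_codes(X, n_bits=32):
        """Sketch of MCR-style short codes with a surrogate cost.

        One candidate bit per data dimension (threshold at the median),
        ranked by a simple discriminability surrogate, the variance of the
        centred feature, keeping only the top `n_bits` dimensions.
        """
        medians = np.median(X, axis=0)
        bits = (X > medians).astype(np.uint8)        # one bit per dimension
        cost = -np.var(X - medians, axis=0)          # lower cost = more spread
        keep = np.argsort(cost)[:n_bits]             # top-ranked dimensions
        return bits[:, keep], keep

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 128))
    codes, dims = short_binary_codes(X, n_bits=32)
    print(codes.shape)                               # (1000, 32) compact codes
    # Retrieval then compares codes by Hamming distance, e.g.:
    query = codes[0]
    ham = (codes ^ query).sum(axis=1)
    print(ham.argsort()[:5])                         # nearest neighbours of item 0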

  12. Homogenization of Large-Scale Movement Models in Ecology

    USGS Publications Warehouse

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models.
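    The distinction the abstract draws can be made explicit; in standard notation, with u the population density and μ(x) the habitat-dependent motility,

    \text{Fickian:} \quad \partial_t u = \nabla \cdot \big[ \mu(\mathbf{x})\, \nabla u \big],
    \qquad
    \text{ecological:} \quad \partial_t u = \nabla^{2} \big[ \mu(\mathbf{x})\, u \big].

    Homogenization then yields a large-scale equation of the same form whose constant coefficient is an appropriate average of the small-scale μ(x).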

  13. Channel capacity of next generation large scale MIMO systems

    NASA Astrophysics Data System (ADS)

    Alshammari, A.; Albdran, S.; Matin, M.

    2016-09-01

    The information rate that can be transferred over a given bandwidth is limited by information theory. Capacity depends on many factors, such as the signal to noise ratio (SNR), channel state information (CSI) and the spatial correlation in the propagation environment. It is very important to increase spectral efficiency in order to meet the growing demand for wireless services. Thus, multiple input multiple output (MIMO) technology has been developed and applied in most of the wireless standards, and it has been very successful in increasing capacity and reliability. As the demand is still increasing, attention is now shifting towards large scale multiple input multiple output (MIMO), which has the potential of bringing orders of magnitude of improvement in spectral and energy efficiency. It has been shown that users' channels decorrelate as the number of antennas increases. As a result, inter-user interference can be avoided, since energy can be focused in precise directions. This paper investigates the limits of channel capacity for large scale MIMO. We study the relation between spectral efficiency and the number of antennas N. We use a time division duplex (TDD) system in order to obtain CSI using a training sequence in the uplink; the same CSI is used for the downlink because the channel is reciprocal. Spectral efficiency is measured for a channel model that accounts for small-scale fading while ignoring the effect of large-scale fading. It is shown that the spectral efficiency can be improved significantly when compared to single-antenna systems in ideal circumstances.
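    The capacity computation described here reduces to the standard log-det formula; a minimal Monte Carlo sketch for an i.i.d. Rayleigh channel (small-scale fading only, as in the abstract's channel model):

    import numpy as np

    def mimo_capacity(n_tx, n_rx, snr_db, trials=200, seed=0):
        """Ergodic capacity (bits/s/Hz): C = E[log2 det(I + (SNR/n_tx) H H^H)].

        Equal power per transmit antenna, i.i.d. Rayleigh fading entries.
        """
        rng = np.random.default_rng(seed)
        snr = 10 ** (snr_db / 10)
        total = 0.0
        for _ in range(trials):
            H = (rng.normal(size=(n_rx, n_tx))
                 + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)
            total += np.log2(np.linalg.det(
                np.eye(n_rx) + (snr / n_tx) * H @ H.conj().T).real)
        return total / trials

    for n in (1, 4, 16, 64):                 # capacity grows with antenna count N
        print(n, round(mimo_capacity(n, n, snr_db=10), 1))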

  14. Sparse approximation through boosting for learning large scale kernel machines.

    PubMed

    Sun, Ping; Yao, Xin

    2010-06-01

    Recently, sparse approximation has become a preferred method for learning large scale kernel machines. This technique attempts to represent the solution with only a subset of original data points also known as basis vectors, which are usually chosen one by one with a forward selection procedure based on some selection criteria. The computational complexity of several resultant algorithms scales as O(NM^2) in time and O(NM) in memory, where N is the number of training points and M is the number of basis vectors as well as the steps of forward selection. For some large scale data sets, to obtain a better solution, we are sometimes required to include more basis vectors, which means that M is not trivial in this situation. However, the limited computational resource (e.g., memory) prevents us from including too many vectors. To handle this dilemma, we propose to add an ensemble of basis vectors instead of only one at each forward step. The proposed method, closely related to gradient boosting, could decrease the required number M of forward steps significantly and thus a large fraction of computational cost is saved. Numerical experiments on three large scale regression tasks and a classification problem demonstrate the effectiveness of the proposed approach.
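    A toy sketch of growing the expansion by an ensemble of basis vectors per forward step (the greedy correlation criterion below is a simplification standing in for the boosting-based criterion of the paper):

    import numpy as np

    def rbf_col(X, x, gamma=0.5):
        """Kernel column k(X, x) for an RBF kernel."""
        return np.exp(-gamma * ((X - x) ** 2).sum(1))

    def boosted_basis_selection(X, y, steps=10, per_step=5, gamma=0.5):
        """Add `per_step` basis vectors per forward step, chosen greedily by
        correlation with the current residual, then refit on the chosen basis.
        """
        n = len(y)
        residual = y.copy()
        basis = []
        for _ in range(steps):
            scores = np.array([abs(rbf_col(X, X[i], gamma) @ residual)
                               for i in range(n)])
            basis += [i for i in np.argsort(-scores)[:per_step] if i not in basis]
            Phi = np.column_stack([rbf_col(X, X[i], gamma) for i in basis])
            coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # refit coefficients
            residual = y - Phi @ coef
        return basis, coef

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, (200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
    basis, coef = boosted_basis_selection(X, y)
    Phi = np.column_stack([rbf_col(X, X[i]) for i in basis])
    print(len(basis), np.abs(y - Phi @ coef).mean())   # basis size, mean residual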

  15. Alteration of Large-Scale Chromatin Structure by Estrogen Receptor

    PubMed Central

    Nye, Anne C.; Rajendran, Ramji R.; Stenoien, David L.; Mancini, Michael A.; Katzenellenbogen, Benita S.; Belmont, Andrew S.

    2002-01-01

    The estrogen receptor (ER), a member of the nuclear hormone receptor superfamily important in human physiology and disease, recruits coactivators which modify local chromatin structure. Here we describe effects of ER on large-scale chromatin structure as visualized in live cells. We targeted ER to gene-amplified chromosome arms containing large numbers of lac operator sites either directly, through a lac repressor-ER fusion protein (lac rep-ER), or indirectly, by fusing lac repressor with the ER interaction domain of the coactivator steroid receptor coactivator 1. Significant decondensation of large-scale chromatin structure, comparable to that produced by the ∼150-fold-stronger viral protein 16 (VP16) transcriptional activator, was produced by ER in the absence of estradiol using both approaches. Addition of estradiol induced a partial reversal of this unfolding by green fluorescent protein-lac rep-ER but not by wild-type ER recruited by a lac repressor-SRC570-780 fusion protein. The chromatin decondensation activity did not require transcriptional activation by ER nor did it require ligand-induced coactivator interactions, and unfolding did not correlate with histone hyperacetylation. Ligand-induced coactivator interactions with helix 12 of ER were necessary for the partial refolding of chromatin in response to estradiol using the lac rep-ER tethering system. This work demonstrates that when tethered or recruited to DNA, ER possesses a novel large-scale chromatin unfolding activity. PMID:11971975

  16. Geospatial Optimization of Siting Large-Scale Solar Projects

    SciTech Connect

    Macknick, J.; Quinby, T.; Caulfield, E.; Gerritsen, M.; Diffendorfer, J.; Haines, S.

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
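    As a schematic of the multi-criteria evaluation such a tool performs, consider a toy weighted overlay (the criteria rasters and weights are invented; the actual tool's optimization algorithm is more involved):

    import numpy as np

    # Hypothetical raster criteria on a common grid, each normalized to [0, 1]:
    rng = np.random.default_rng(0)
    solar_resource = rng.random((100, 100))   # higher is better
    grid_distance = rng.random((100, 100))    # higher = farther from grid, worse
    habitat_value = rng.random((100, 100))    # higher = more sensitive, worse

    # User-defined weights (summing to 1); the real tool exposes these as inputs.
    weights = {"solar": 0.5, "grid": 0.3, "habitat": 0.2}

    score = (weights["solar"] * solar_resource
             + weights["grid"] * (1 - grid_distance)
             + weights["habitat"] * (1 - habitat_value))

    best = np.unravel_index(score.argmax(), score.shape)
    print("best cell:", best, "score:", round(score[best], 3))

    Varying the weights lets different stakeholders see how their priorities shift the ranked set of candidate sites, which is the transparency the report emphasizes.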

  17. Assessing salivary cortisol in large-scale, epidemiological research.

    PubMed

    Adam, Emma K; Kumari, Meena

    2009-11-01

    Salivary cortisol measures are increasingly being incorporated into large-scale, population-based, or epidemiological research, in which participants are selected to be representative of particular communities or populations of interest, and sample sizes are in the order of hundreds to tens of thousands of participants. These approaches to studying salivary cortisol provide important advantages but pose a set of challenges. The representative nature of sampling, and the large sample sizes associated with population-based research, offer high generalizability and power, and the ability to examine cortisol functioning in relation to: (a) a wide range of social environments; (b) a diverse array of individuals and groups; and (c) a broad set of pre-disease and disease outcomes. The greater importance of high response rates (to maintain generalizability) and the higher costs associated with this type of large-scale research, however, require special adaptations of existing ambulatory cortisol protocols. These include using the most efficient sample collection protocol possible that still adequately addresses the specific cortisol-related questions at hand, and ensuring the highest possible response and compliance rates among those individuals invited to participate. Examples of choices made, response rates obtained, and results obtained from existing epidemiological cortisol studies are offered, as are suggestions for the modeling and interpretation of salivary cortisol data obtained in large-scale epidemiological research.

  18. Large-scale investigation of genomic markers for severe periodontitis.

    PubMed

    Suzuki, Asami; Ji, Guijin; Numabe, Yukihiro; Ishii, Keisuke; Muramatsu, Masaaki; Kamoi, Kyuichi

    2004-09-01

    The purpose of the present study was to investigate the genomic markers for periodontitis, using large-scale single-nucleotide polymorphism (SNP) association studies comparing healthy volunteers and patients with periodontitis. Genomic DNA was obtained from 19 healthy volunteers and 22 patients with severe periodontitis, all of whom were Japanese. The subjects were genotyped at 637 SNPs in 244 genes on a large scale, using the TaqMan polymerase chain reaction (PCR) system. Statistically significant differences in allele and genotype frequencies were analyzed with Fisher's exact test. We found statistically significant differences (P < 0.01) between the healthy volunteers and patients with severe periodontitis in the following genes; gonadotropin-releasing hormone 1 (GNRH1), phosphatidylinositol 3-kinase regulatory 1 (PIK3R1), dipeptidylpeptidase 4 (DPP4), fibrinogen-like 2 (FGL2), and calcitonin receptor (CALCR). These results suggest that SNPs in the GNRH1, PIK3R1, DPP4, FGL2, and CALCR genes are genomic markers for severe periodontitis. Our findings indicate the necessity of analyzing SNPs in genes on a large scale (i.e., genome-wide approach), to identify genomic markers for periodontitis.

  19. Large-scale biodiversity patterns in freshwater phytoplankton.

    PubMed

    Stomp, Maayke; Huisman, Jef; Mittelbach, Gary G; Litchman, Elena; Klausmeier, Christopher A

    2011-11-01

    Our planet shows striking gradients in the species richness of plants and animals, from high biodiversity in the tropics to low biodiversity in polar and high-mountain regions. Recently, similar patterns have been described for some groups of microorganisms, but the large-scale biogeographical distribution of freshwater phytoplankton diversity is still largely unknown. We examined the species diversity of freshwater phytoplankton sampled from 540 lakes and reservoirs distributed across the continental United States and found strong latitudinal, longitudinal, and altitudinal gradients in phytoplankton biodiversity, demonstrating that microorganisms can show substantial geographic variation in biodiversity. Detailed analysis using structural equation models indicated that these large-scale biodiversity gradients in freshwater phytoplankton diversity were mainly driven by local environmental factors, although there were residual direct effects of latitude, longitude, and altitude as well. Specifically, we found that phytoplankton species richness was an increasing saturating function of lake chlorophyll a concentration, increased with lake surface area and possibly increased with water temperature, resembling effects of productivity, habitat area, and temperature on diversity patterns commonly observed for macroorganisms. In turn, these local environmental factors varied along latitudinal, longitudinal, and altitudinal gradients. These results imply that changes in land use or climate that affect these local environmental factors are likely to have major impacts on large-scale biodiversity patterns of freshwater phytoplankton.

  20. A model of plasma heating by large-scale flow

    NASA Astrophysics Data System (ADS)

    Pongkitiwanichakul, P.; Cattaneo, F.; Boldyrev, S.; Mason, J.; Perez, J. C.

    2015-12-01

    In this work, we study the process of energy dissipation triggered by a slow large-scale motion of a magnetized conducting fluid. Our consideration is motivated by the problem of heating the solar corona, which is believed to be governed by fast reconnection events set off by the slow motion of magnetic field lines anchored in the photospheric plasma. To elucidate the physics governing the disruption of the imposed laminar motion and the energy transfer to small scales, we propose a simplified model where the large-scale motion of magnetic field lines is prescribed not at the footpoints but rather imposed volumetrically. As a result, the problem can be treated numerically with an efficient, highly accurate spectral method, allowing us to use a resolution and statistical ensemble exceeding those of the previous work. We find that, even though the large-scale deformations are slow, they eventually lead to reconnection events that drive a turbulent state at smaller scales. The small-scale turbulence displays many of the universal features of field-guided magnetohydrodynamic turbulence like a well-developed inertial range spectrum. Based on these observations, we construct a phenomenological model that gives the scalings of the amplitude of the fluctuations and the energy-dissipation rate as functions of the input parameters. We find good agreement between the numerical results and the predictions of the model.

  1. Large-scale flow generation by inhomogeneous helicity.

    PubMed

    Yokoi, N; Brandenburg, A

    2016-03-01

    The effect of kinetic helicity (velocity-vorticity correlation) on turbulent momentum transport is investigated. The turbulent kinetic helicity (pseudoscalar) enters the Reynolds stress (mirror-symmetric tensor) expression in the form of a helicity gradient as the coupling coefficient for the mean vorticity and/or the angular velocity (axial vector), which suggests the possibility of mean-flow generation in the presence of inhomogeneous helicity. This inhomogeneous helicity effect, which was previously confirmed at the level of a turbulence- or closure-model simulation, is examined with the aid of direct numerical simulations of rotating turbulence with nonuniform helicity sustained by an external forcing. The numerical simulations show that the spatial distribution of the Reynolds stress is in agreement with the helicity-related term coupled with the angular velocity, and that a large-scale flow is generated in the direction of angular velocity. Such a large-scale flow is not induced in the case of homogeneous turbulent helicity. This result confirms the validity of the inhomogeneous helicity effect in large-scale flow generation and suggests that a vortex dynamo is possible even in incompressible turbulence where there is no baroclinicity effect.

  2. Large-scale flow experiments for managing river systems

    USGS Publications Warehouse

    Konrad, Christopher P.; Olden, Julian D.; Lytle, David A.; Melis, Theodore S.; Schmidt, John C.; Bray, Erin N.; Freeman, Mary C.; Gido, Keith B.; Hemphill, Nina P.; Kennard, Mark J.; McMullen, Laura E.; Mims, Meryl C.; Pyron, Mark; Robinson, Christopher T.; Williams, John G.

    2011-01-01

    Experimental manipulations of streamflow have been used globally in recent decades to mitigate the impacts of dam operations on river systems. Rivers are challenging subjects for experimentation, because they are open systems that cannot be isolated from their social context. We identify principles to address the challenges of conducting effective large-scale flow experiments. Flow experiments have both scientific and social value when they help to resolve specific questions about the ecological action of flow with a clear nexus to water policies and decisions. Water managers must integrate new information into operating policies for large-scale experiments to be effective. Modeling and monitoring can be integrated with experiments to analyze long-term ecological responses. Experimental design should include spatially extensive observations and well-defined, repeated treatments. Large-scale flow manipulations are only a part of dam operations that affect river systems. Scientists can ensure that experimental manipulations continue to be a valuable approach for the scientifically based management of river systems.

  3. Manufacturing technologies

    NASA Astrophysics Data System (ADS)

    The Manufacturing Technologies Center is at the core of Sandia National Laboratories' advanced manufacturing effort which spans the entire product realization process. The center's capabilities in product and process development are summarized in the following disciplines: (1) mechanical - rapid prototyping, manufacturing engineering, machining and computer-aided manufacturing, measurement and calibration, and mechanical and electronic manufacturing liaison; (2) electronics - advanced packaging for microelectronics, printed circuits, and electronic fabrication; and (3) materials - ceramics, glass, thin films, vacuum technology, brazing, polymers, adhesives, composite materials, and process analysis.

  4. New probes of Cosmic Microwave Background large-scale anomalies

    NASA Astrophysics Data System (ADS)

    Aiola, Simone

    Fifty years of Cosmic Microwave Background (CMB) data played a crucial role in constraining the parameters of the LambdaCDM model, where Dark Energy, Dark Matter, and Inflation are the three most important pillars not yet understood. Inflation prescribes an isotropic universe on large scales, and it generates spatially-correlated density fluctuations over the whole Hubble volume. CMB temperature fluctuations on scales bigger than a degree in the sky, affected by modes on super-horizon scale at the time of recombination, are a clean snapshot of the universe after inflation. In addition, the accelerated expansion of the universe, driven by Dark Energy, leaves a hardly detectable imprint in the large-scale temperature sky at late times. Such fundamental predictions have been tested with current CMB data and found to be in tension with what we expect from our simple LambdaCDM model. Is this tension just a random fluke or a fundamental issue with the present model? In this thesis, we present a new framework to probe the lack of large-scale correlations in the temperature sky using CMB polarization data. Our analysis shows that if a suppression in the CMB polarization correlations is detected, it will provide compelling evidence for new physics on super-horizon scale. To further analyze the statistical properties of the CMB temperature sky, we constrain the degree of statistical anisotropy of the CMB in the context of the observed large-scale dipole power asymmetry. We find evidence for a scale-dependent dipolar modulation at 2.5sigma. To isolate late-time signals from the primordial ones, we test the anomalously high Integrated Sachs-Wolfe effect signal generated by superstructures in the universe. We find that the detected signal is in tension with the expectations from LambdaCDM at the 2.5sigma level, which is somewhat smaller than what has been previously argued. To conclude, we describe the current status of CMB observations on small scales, highlighting the

  5. Robust large-scale parallel nonlinear solvers for simulations.

    SciTech Connect

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write
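
    A schematic of the Broyden update the report evaluates, assuming a small dense system for clarity: the approximate Jacobian is corrected by a rank-one term enforcing the secant condition, so no exact Jacobian is ever evaluated. The report's limited-memory variant avoids storing the matrix explicitly; this toy version keeps it dense, and the test problem is an illustrative assumption.

        # Schematic Broyden's ("good") method for F(x) = 0 with a dense
        # approximate Jacobian B; the report's limited-memory variant
        # avoids storing B. Problem and tolerances are illustrative.
        import numpy as np

        def broyden(F, x0, tol=1e-10, max_iter=100):
            x = np.asarray(x0, dtype=float)
            B = np.eye(x.size)              # initial Jacobian approximation
            f = F(x)
            for _ in range(max_iter):
                if np.linalg.norm(f) < tol:
                    break
                s = np.linalg.solve(B, -f)  # quasi-Newton step: B s = -f
                x = x + s
                f_new = F(x)
                # Rank-one update enforcing the secant condition B_new s = f_new - f.
                B += np.outer(f_new - f - B @ s, s) / (s @ s)
                f = f_new
            return x

        # Mildly nonlinear system whose Jacobian is near the identity, so
        # B = I is a reasonable start (a made-up test, not from the report).
        F = lambda x: x + 0.1 * np.sin(x) - 1.0
        print(broyden(F, np.zeros(3)))      # each component converges to ~0.9204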

  6. Variable responses to large-scale climate change in European Parus populations.

    PubMed Central

    Visser, Marcel E; Adriaensen, Frank; Van Balen, Johan H; Blondel, Jacques; Dhondt, André A; Van Dongen, Stefan; Du Feu, Chris; Ivankina, Elena V; Kerimov, Anvar B; De Laet, Jenny; Matthysen, Erik; McCleery, Robin; Orell, Markku; Thomson, David L

    2003-01-01

    Spring temperatures in temperate regions have increased over the past 20 years and many organisms have responded to this increase by advancing the timing of their growth and reproduction. However, not all populations show an advancement of phenology. Understanding why some populations advance and others do not will give us insight into the possible constraints and selection pressures on the advancement of phenology. By combining two decades of data on 24 populations of tits (Parus sp.) from six European countries, we show that the phenological response to large-scale changes in spring temperature varies across a species' range, even between populations situated close to each other. We show that this variation cannot be fully explained by variation in the temperature change during the pre- and post-laying periods, as recently suggested. Instead, we find evidence for a link between rising temperatures and the frequency of second broods, which results in complex shifts in the laying dates of first clutches. Our results emphasize the need to consider links between different life-history parameters in order to predict the ecological consequences of large-scale climate changes. PMID:12639315

  7. 75 FR 51843 - In the Matter of Certain Large Scale Integrated Circuit Semiconductor Chips and Products...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-23

    ... Matter of Certain Large Scale Integrated Circuit Semiconductor Chips and Products Containing the Same... certain large scale integrated circuit semiconductor chips and products containing same by reason...

  8. Advanced treatment of wet-spun acrylic fiber manufacturing wastewater using three-dimensional electrochemical oxidation.

    PubMed

    Zheng, Tianlong; Wang, Qunhui; Shi, Zhining; Fang, Yue; Shi, Shanshan; Wang, Juan; Wu, Chuanfu

    2016-12-01

    A three-dimensional electrochemical oxidation (3D-EC) reactor with introduction of activated carbon (AC) as particle micro-electrodes was applied for the advanced treatment of secondary wastewater effluent of a wet-spun acrylic fiber manufacturing plant. Under the optimized conditions (current density of 500 A/m², circulation rate of 5 mL/min, AC dosage of 50 g, and chloride concentration of 1.0 g/L), the average removal efficiencies of chemical oxygen demand (CODcr), NH3-N, total organic carbon (TOC), and ultraviolet absorption at 254 nm (UV254) of the 3D-EC reactor were 64.5%, 60.8%, 46.4%, and 64.8%, respectively, while the corresponding effluent concentrations of CODcr, NH3-N, TOC, and UV254 were 76.6, 20.1, and 42.5 mg/L, and 0.08 Abs/cm, respectively. The effluent concentration of CODcr was less than 100 mg/L, which showed that the treated wastewater satisfied the integrated wastewater discharge standard (GB 8978-1996). During the stable stage, the 3D-EC process remarkably improved treatment efficiency, with synergistic effects for CODcr, NH3-N, TOC, and UV254 of 44.5%, 38.8%, 27.2%, and 10.9%, respectively, as compared with the sum of the efficiencies of a two-dimensional electrochemical oxidation (2D-EC) reactor and an AC adsorption process; this was ascribed to the numerous AC micro-electrodes in the 3D-EC reactor. Gas chromatography mass spectrometry (GC-MS) analysis revealed that the electrochemical treatment did not generate more toxic organics, and the increase in acute biotoxicity was shown to be caused primarily by the production of free chlorine.
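
    A worked reading of the synergy figures above, assuming (as the abstract states) that the synergistic effect is the 3D-EC removal minus the summed removals of the 2D-EC reactor and the AC adsorption process run separately; the combined 2D-EC-plus-AC removals printed below are therefore inferred from the reported numbers, not measured values.

        # Synergy = 3D-EC removal - (2D-EC removal + AC adsorption removal).
        # The implied 2D-EC + AC sums are back-calculated, not reported.
        removal_3d = {"CODcr": 64.5, "NH3-N": 60.8, "TOC": 46.4, "UV254": 64.8}
        synergy    = {"CODcr": 44.5, "NH3-N": 38.8, "TOC": 27.2, "UV254": 10.9}

        for k in removal_3d:
            implied_sum = removal_3d[k] - synergy[k]
            print(f"{k}: 2D-EC + AC together removed about {implied_sum:.1f}%")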

  9. An Assessment of Critical Dimension Small Angle X-ray Scattering Metrology for Advanced Semiconductor Manufacturing

    SciTech Connect

    Settens, Charles M.

    2015-01-01

    The simultaneous migration of planar transistors to FinFET architectures, the introduction of a plurality of materials to ensure suitable electrical characteristics, and the establishment of reliable multiple patterning lithography schemes to pattern sub-10 nm feature sizes impose formidable challenges on current in-line dimensional metrologies. Because the shape of a FinFET channel cross-section directly influences the electrical characteristics, the evaluation of 3D device structures requires measurement of parameters beyond the traditional critical dimension (CD), including sidewall angles, top corner rounding and footing, roughness, recesses and undercuts at single-nanometer dimensions; thus, metrologies require sub-nm, approaching atomic-level, measurement uncertainty. Synchrotron critical dimension small angle X-ray scattering (CD-SAXS) has unique capabilities to non-destructively monitor the cross-section shape of surface structures with single-nanometer uncertainty and can perform overlay metrology to sub-nm uncertainty. In this dissertation, we perform a systematic experimental investigation using CD-SAXS metrology on a hierarchy of semiconductor 3D device architectures, including high-aspect-ratio contact holes, H2-annealed Si fins, and a series of grating-type samples at multiple points along a FinFET fabrication process, increasing in structural intricacy and ending with a fully fabricated FinFET. Comparative studies between CD-SAXS metrology and other relevant semiconductor dimensional metrologies, particularly CD-SEM, CD-AFM and TEM, are used to determine the physical limits of the CD-SAXS approach for advanced semiconductor samples. CD-SAXS experimental tradeoffs, advice for model-dependent analysis and thoughts on compatibility with a semiconductor manufacturing environment are discussed.

  10. The BAHAMAS project: calibrated hydrodynamical simulations for large-scale structure cosmology

    NASA Astrophysics Data System (ADS)

    McCarthy, Ian G.; Schaye, Joop; Bird, Simeon; Le Brun, Amandine M. C.

    2017-03-01

    The evolution of the large-scale distribution of matter is sensitive to a variety of fundamental parameters that characterize the dark matter, dark energy, and other aspects of our cosmological framework. Since the majority of the mass density is in the form of dark matter that cannot be directly observed, to do cosmology with large-scale structure, one must use observable (baryonic) quantities that trace the underlying matter distribution in a (hopefully) predictable way. However, recent numerical studies have demonstrated that the mapping between observable and total mass, as well as the total mass itself, are sensitive to unresolved feedback processes associated with galaxy formation, motivating explicit calibration of the feedback efficiencies. Here, we construct a new suite of large-volume cosmological hydrodynamical simulations (called BAHAMAS, for BAryons and HAloes of MAssive Systems), where subgrid models of stellar and active galactic nucleus feedback have been calibrated to reproduce the present-day galaxy stellar mass function and the hot gas mass fractions of groups and clusters in order to ensure the effects of feedback on the overall matter distribution are broadly correct. We show that the calibrated simulations reproduce an unprecedentedly wide range of properties of massive systems, including the various observed mappings between galaxies, hot gas, total mass, and black holes, and represent a significant advance in our ability to mitigate the primary systematic uncertainty in most present large-scale structure tests.

  11. IMPROVEMENT OF WEAR COMPONENT'S PERFORMANCE BY UTILIZING ADVANCED MATERIALS AND NEW MANUFACTURING TECHNOLOGIES: CASTCON PROCESS FOR MINING APPLICATIONS

    SciTech Connect

    Xiaodi Huang; Richard Gertsch

    2005-02-04

    Michigan Technological University, together with The Robbins Group, Advanced Ceramic Research, Advanced Ceramic Manufacturing, and Superior Rock Bits, evaluated a new process and a new material for producing drill bit inserts and disc cutters for the mining industry. Difficulties in the material preparation stage slowed the research initially. Prototype testing of the drill bit inserts showed that the new inserts did not perform up to the current state of the art. Due to difficulties in the prototype production of the disc cutters, the disc cutter was manufactured but not tested. Although much promising information was obtained as a result of this project, the objective of developing an effective means for producing rock drill bits and rock disc cutters that last longer, increase energy efficiency and penetration rate, and lower overall production cost was not met.

  12. V1.6 Development of Advanced Manufacturing Technologies for Low Cost Hydrogen Storage Vessels

    SciTech Connect

    Leavitt, Mark; Lam, Patrick; Nelson, Karl M.; johnson, Brice A.; Johnson, Kenneth I.; Alvine, Kyle J.; Ruiz, Antonio; Adams, Jesse

    2012-10-01

    The goal of this project is to develop an innovative manufacturing process for Type IV high-pressure hydrogen storage vessels, with the intent to significantly lower manufacturing costs. Part of the development is to integrate the features of high-precision automated fiber placement (AFP) and commercial filament winding (FW). Evaluation of an alternative fiber to replace a portion of the baseline fiber will help to reduce costs further.

  13. Can International Large-Scale Assessments Inform a Global Learning Goal? Insights from the Learning Metrics Task Force

    ERIC Educational Resources Information Center

    Winthrop, Rebecca; Simons, Kate Anderson

    2013-01-01

    In recent years, the global community has developed a range of initiatives to inform the post-2015 global development agenda. In the education community, International Large-Scale Assessments (ILSAs) have an important role to play in advancing a global shift in focus to access plus learning. However, there are a number of other assessment tools…

  14. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, because the observed data sets are often large and the model parameters numerous, conventional methods for solving inverse problems can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a
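
    A minimal sketch of the dense solve at the heart of Levenberg-Marquardt: each iteration solves the damped normal equations, which is exactly the step the authors accelerate with Krylov projection and subspace recycling (neither acceleration is shown here). The toy exponential-decay problem is an illustrative assumption, and the sketch is in Python rather than the Julia/MADS setting of the paper.

        # Plain Levenberg-Marquardt loop; the damped normal-equations solve
        # below is the expensive step the paper replaces with a recycled
        # Krylov projection. Model and data are illustrative.
        import numpy as np

        def levenberg_marquardt(residual, jacobian, m0, lam=1e-3, n_iter=30):
            m = np.asarray(m0, dtype=float)
            for _ in range(n_iter):
                r, J = residual(m), jacobian(m)
                # Damped normal equations: (J^T J + lam I) dm = -J^T r
                dm = np.linalg.solve(J.T @ J + lam * np.eye(m.size), -J.T @ r)
                if np.sum(residual(m + dm) ** 2) < np.sum(r ** 2):
                    m, lam = m + dm, lam * 0.5   # accept step, reduce damping
                else:
                    lam *= 10.0                  # reject step, increase damping
            return m

        # Toy exponential-decay fit: d(t) = m0 * exp(-m1 * t)
        t = np.linspace(0.0, 4.0, 50)
        data = 2.0 * np.exp(-1.3 * t)
        residual = lambda m: m[0] * np.exp(-m[1] * t) - data
        jacobian = lambda m: np.column_stack([np.exp(-m[1] * t),
                                              -m[0] * t * np.exp(-m[1] * t)])
        print(levenberg_marquardt(residual, jacobian, [1.0, 0.5]))  # -> ~(2.0, 1.3)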

  15. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect

    Ghattas, Omar

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
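
    A schematic of the "reduce then sample" idea described above: Metropolis sampling evaluates a cheap surrogate in place of the expensive forward model, and the posterior means from the two models can be compared. The forward model, surrogate, prior, and proposal scale are all illustrative assumptions, not SAGUARO's actual groundwater models.

        # "Reduce then sample" sketch: Metropolis MCMC with a cheap surrogate
        # standing in for an expensive forward model. Everything here is a
        # toy stand-in for the project's actual subsurface models.
        import numpy as np

        rng = np.random.default_rng(1)
        observation, noise_sd = 0.8, 0.1

        def full_model(theta):
            # Stand-in for an expensive forward simulation.
            return np.sin(theta) + theta**2 / 10.0

        def surrogate(theta):
            # Stand-in reduced-order model: a cheap Taylor-like approximation.
            return theta - theta**3 / 6.0 + theta**2 / 10.0

        def log_post(theta, model):
            # Gaussian likelihood around the observation, standard normal prior.
            return -0.5 * ((model(theta) - observation) / noise_sd) ** 2 - 0.5 * theta**2

        def metropolis(model, n=4000):
            theta, samples = 0.0, []
            for _ in range(n):
                prop = theta + 0.3 * rng.normal()
                if np.log(rng.uniform()) < log_post(prop, model) - log_post(theta, model):
                    theta = prop
                samples.append(theta)
            return np.mean(samples[n // 4:])    # discard burn-in

        print("posterior mean, full model:", metropolis(full_model))
        print("posterior mean, surrogate :", metropolis(surrogate))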

  16. A Survey on Routing Protocols for Large-Scale Wireless Sensor Networks

    PubMed Central

    Li, Changle; Zhang, Hanxiao; Hao, Binbin; Li, Jiandong

    2011-01-01

    With advances in micro-electronics, wireless sensor devices have become much smaller and more integrated, and large-scale wireless sensor networks (WSNs) based on the cooperation of significant numbers of nodes have become a hot topic. "Large-scale" mainly means a large network area or a high density of nodes. Accordingly, routing protocols must scale well as network scope extends and node density increases. A sensor node is normally energy-limited and cannot be recharged, so its energy consumption has a significant effect on the scalability of the protocol. To the best of our knowledge, the mainstream methods to solve the energy problem in large-scale WSNs are currently hierarchical routing protocols. In a hierarchical routing protocol, all the nodes are divided into several groups with different assignment levels. The nodes at the high level are responsible for data aggregation and management work, and the low-level nodes for sensing their surroundings and collecting information. Hierarchical routing protocols are proved to be more energy-efficient than flat ones, in which all the nodes play the same role, especially in terms of data aggregation and the flooding of control packets. With focus on the hierarchical structure, in this paper we provide an insight into routing protocols designed specifically for large-scale WSNs. According to their different objectives, the protocols are generally classified based on criteria such as control overhead reduction, energy consumption mitigation and energy balance. In order to give a comprehensive understanding of each protocol, we highlight their innovative ideas, describe the underlying principles in detail and analyze their advantages and disadvantages. Moreover a comparison of each routing protocol is conducted to demonstrate the differences between the protocols in terms of message complexity, memory requirements, localization, data aggregation, clustering manner
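
    For concreteness, the cluster-head election rule from LEACH, a canonical hierarchical protocol of the kind this survey covers (the survey compares many protocols; this is just one illustrative rule): a node elects itself cluster head with probability p per round, and nodes that have already served are excluded until the current epoch ends.

        # LEACH-style cluster-head election threshold T(n); parameters are
        # illustrative, and LEACH is one example of the surveyed class.
        import random

        def leach_threshold(p, round_no, was_head_this_epoch):
            # Probability threshold for a node to elect itself cluster head.
            if was_head_this_epoch:          # already served in this epoch
                return 0.0
            return p / (1.0 - p * (round_no % int(1.0 / p)))

        p = 0.05                             # desired fraction of cluster heads
        heads = [node for node in range(100)
                 if random.random() < leach_threshold(p, round_no=3,
                                                      was_head_this_epoch=False)]
        print(f"round 3 elects {len(heads)} cluster heads out of 100 nodes")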

  17. Computer aided fast turnaround laboratory for research in VLSI (Very Large Scale Integration)

    NASA Astrophysics Data System (ADS)

    Meindl, James D.; Shott, John

    1987-05-01

    The principal objectives of the Computer Aided/Automated Fast Turn-Around Laboratory (CAFTAL) for VLSI are: application of cutting-edge computer science and software systems engineering to fast turn-around fabrication in order to develop more productive and flexible new approaches; fast turn-around fabrication of optimized VLSI systems achieved through synergistic integration of system research and device research in aggressive applications such as superfast computers; and investigation of physical limits on submicron VLSI in order to define and explore the most promising technologies. To make a state-of-the-art integrated circuit process more manufacturable, we must be able to understand both the numerous individual process technologies used to fabricate the complete device as well as the important device, circuit and system limitations in sufficient detail to monitor and control the overall fabrication sequence. Specifically, we must understand the sensitivity of device, circuit and system performance to each important step in the fabrication sequence. Moreover, we should be able to predict the manufacturability of an integrated circuit before we actually manufacture it. The salient objective of this program is to enable accurate simulation and control of computer-integrated manufacturing of ultra large scale integrated (ULSI) systems, including millions of submicron transistors in a single silicon chip.

  18. Applications of Data Assimilation to Analysis of the Ocean on Large Scales

    NASA Technical Reports Server (NTRS)

    Miller, Robert N.; Busalacchi, Antonio J.; Hackert, Eric C.

    1997-01-01

    It is commonplace to begin talks on this topic by noting that oceanographic data are too scarce and sparse to provide complete initial and boundary conditions for large-scale ocean models. Even considering the availability of remotely-sensed data such as radar altimetry from the TOPEX and ERS-1 satellites, a glance at a map of available subsurface data should convince most observers that this is still the case. Data are still too sparse for comprehensive treatment of interannual to interdecadal climate change through the use of models, since the new data sets have not been around for very long. In view of the dearth of data, we must note that the overall picture is changing rapidly. Recently, there have been a number of large scale ocean analysis and prediction efforts, some of which now run on an operational or at least quasi-operational basis, most notably the model based analyses of the tropical oceans. These programs are modeled on numerical weather prediction. Aside from the success of the global tide models, assimilation of data in the tropics, in support of prediction and analysis of seasonal to interannual climate change, is probably the area of large scale ocean modeling and data assimilation in which the most progress has been made. Climate change is a problem which is particularly suited to advanced data assimilation methods. Linear models are useful, and the linear theory can be exploited. For the most part, the data are sufficiently sparse that implementation of advanced methods is worthwhile. As an example of a large scale data assimilation experiment with a recent extensive data set, we present results of a tropical ocean experiment in which the Kalman filter was used to assimilate three years of altimetric data from Geosat into a coarsely resolved linearized long wave shallow water model. Since nonlinear processes dominate the local dynamic signal outside the tropics, subsurface dynamical quantities cannot be reliably inferred from surface height
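
    A minimal Kalman filter analysis step of the kind used to assimilate altimetric heights into a linear wave model: the state estimate and its covariance are corrected by the innovation. The two-mode state and the observation operator below are tiny illustrative stand-ins for the shallow water model operators described above.

        # One Kalman filter analysis (update) step; matrices are toy
        # stand-ins for the linearized shallow-water model's operators.
        import numpy as np

        def kalman_update(x, P, y, H, R):
            # Assimilate observation y into state x with covariance P.
            S = H @ P @ H.T + R                 # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
            x_new = x + K @ (y - H @ x)         # corrected state estimate
            P_new = (np.eye(len(x)) - K @ H) @ P
            return x_new, P_new

        x = np.array([0.0, 0.0])                # e.g., two wave-mode amplitudes
        P = np.eye(2)                           # prior uncertainty
        H = np.array([[1.0, 0.5]])              # maps state to surface height
        y = np.array([0.3])                     # one altimeter measurement
        R = np.array([[0.04]])                  # observation-error covariance

        x, P = kalman_update(x, P, y, H, R)
        print("analysis state:", x)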

  19. Large-Scale Hybrid Motor Testing. Chapter 10

    NASA Technical Reports Server (NTRS)

    Story, George

    2006-01-01

    Hybrid rocket motors can be successfully demonstrated at a small scale virtually anywhere. There have been many suitcase sized portable test stands assembled for demonstration of hybrids. They show the safety of hybrid rockets to the audiences. These small show motors and small laboratory scale motors can give comparative burn rate data for development of different fuel/oxidizer combinations, however questions that are always asked when hybrids are mentioned for large scale applications are - how do they scale and has it been shown in a large motor? To answer those questions, large scale motor testing is required to verify the hybrid motor at its true size. The necessity to conduct large-scale hybrid rocket motor tests to validate the burn rate from the small motors to application size has been documented in several places. Comparison of small scale hybrid data to that of larger scale data indicates that the fuel burn rate goes down with increasing port size, even with the same oxidizer flux. This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB based fuels. While the reason this is occurring would make a great paper or study or thesis, it is not thoroughly understood at this time. Potential causes include the fact that since hybrid combustion is boundary layer driven, the larger port sizes reduce the interaction (radiation, mixing and heat transfer) from the core region of the port. This chapter focuses on some of the large, prototype sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn rate, stability and scaling concepts that went into the development of those large motors.

  20. Solving large scale structure in ten easy steps with COLA

    SciTech Connect

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J. E-mail: matiasz@ias.edu

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small-scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.

  1. Infectious diseases in large-scale cat hoarding investigations.

    PubMed

    Polak, K C; Levy, J K; Crawford, P C; Leutenegger, C M; Moriello, K A

    2014-08-01

    Animal hoarders accumulate animals in over-crowded conditions without adequate nutrition, sanitation, and veterinary care. As a result, animals rescued from hoarding frequently have a variety of medical conditions including respiratory infections, gastrointestinal disease, parasitism, malnutrition, and other evidence of neglect. The purpose of this study was to characterize the infectious diseases carried by clinically affected cats and to determine the prevalence of retroviral infections among cats in large-scale cat hoarding investigations. Records were reviewed retrospectively from four large-scale seizures of cats from failed sanctuaries from November 2009 through March 2012. The number of cats seized in each case ranged from 387 to 697. Cats were screened for feline leukemia virus (FeLV) and feline immunodeficiency virus (FIV) in all four cases and for dermatophytosis in one case. A subset of cats exhibiting signs of upper respiratory disease or diarrhea had been tested for infections by PCR and fecal flotation for treatment planning. Mycoplasma felis (78%), calicivirus (78%), and Streptococcus equi subspecies zooepidemicus (55%) were the most common respiratory infections. Feline enteric coronavirus (88%), Giardia (56%), Clostridium perfringens (49%), and Tritrichomonas foetus (39%) were most common in cats with diarrhea. The seroprevalence of both FeLV and FIV was 8%. In the one case in which cats with lesions suspicious for dermatophytosis were cultured for Microsporum canis, 69/76 lesional cats were culture-positive; of these, half were believed to be truly infected and half were believed to be fomite carriers. Cats from large-scale hoarding cases were at high risk for enteric and respiratory infections, retroviruses, and dermatophytosis. Case responders should be prepared for mass treatment of infectious diseases and should implement protocols to prevent transmission of feline or zoonotic infections during the emergency response and when

  2. Statistical analysis of large-scale neuronal recording data

    PubMed Central

    Reed, Jamie L.; Kaas, Jon H.

    2010-01-01

    Relating stimulus properties to the response properties of individual neurons and neuronal networks is a major goal of sensory research. Many investigators implant electrode arrays in multiple brain areas and record from chronically implanted electrodes over time to answer a variety of questions. Technical challenges related to analyzing large-scale neuronal recording data are not trivial. Several analysis methods traditionally used by neurophysiologists do not account for dependencies in the data that are inherent in multi-electrode recordings. In addition, when neurophysiological data are not best modeled by the normal distribution and when the variables of interest may not be linearly related, extensions of the linear modeling techniques are recommended. A variety of methods exist to analyze correlated data, even when data are not normally distributed and the relationships are nonlinear. Here we review expansions of the Generalized Linear Model designed to address these data properties. Such methods are used in other research fields, and the application to large-scale neuronal recording data will enable investigators to determine the variable properties that convincingly contribute to the variances in the observed neuronal measures. Standard measures of neuron properties such as response magnitudes can be analyzed using these methods, and measures of neuronal network activity such as spike timing correlations can be analyzed as well. We have done just that in recordings from 100-electrode arrays implanted in the primary somatosensory cortex of owl monkeys. Here we illustrate how one example method, Generalized Estimating Equations analysis, is a useful method to apply to large-scale neuronal recordings. PMID:20472395
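
    A sketch of the Generalized Estimating Equations approach the review recommends, using the statsmodels library: synthetic spike counts, correlated within electrode arrays, are regressed on a stimulus variable with a Poisson family and an exchangeable working correlation. The data and model terms are illustrative assumptions, not the owl monkey recordings themselves.

        # GEE regression of correlated count data, as the review recommends.
        # Synthetic stand-in data: counts share an array-level random effect.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n_arrays, n_trials = 10, 40
        stim = rng.uniform(0.0, 1.0, n_arrays * n_trials)
        array_id = np.repeat(np.arange(n_arrays), n_trials)
        array_effect = rng.normal(0.0, 0.3, n_arrays)[array_id]
        counts = rng.poisson(np.exp(0.5 + 1.2 * stim + array_effect))

        df = pd.DataFrame({"counts": counts, "stim": stim, "array": array_id})
        model = sm.GEE.from_formula("counts ~ stim", groups="array", data=df,
                                    family=sm.families.Poisson(),
                                    cov_struct=sm.cov_struct.Exchangeable())
        print(model.fit().params)    # stim coefficient should be near 1.2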

  3. The Large-Scale Current System During Auroral Substorms

    NASA Astrophysics Data System (ADS)

    Gjerloev, Jesper

    2015-04-01

    The substorm process has been discussed for more than four decades and new empirical large-scale models continue to be published. The continued activity implies both the importance and the complexity of the problem. We recently published a new model of the large-scale substorm current system (Gjerloev and Hoffman, JGR, 2014). Based on data from >100 ground magnetometers (obtained from SuperMAG), 116 isolated substorms, global auroral images (obtained by the Polar VIS Earth Camera) and a careful normalization technique we derived an empirical model of the ionospheric equivalent current system. Our model yield some unexpected features that appear inconsistent with the classical single current wedge current system. One of these features is a distinct latitudinal shift of the westward electrojet (WEJ) current between the pre- and post-midnight region and we find evidence that these two WEJ regions are quasi disconnected. This, and other observational facts, led us to propose a modified 3D current system configuration that consists of 2 wedge type systems: a current wedge in the pre-midnight region (bulge current wedge), and another current wedge system in the post-midnight region (oval current wedge). The two wedge systems are shifted in latitude but overlap in local time in the midnight region. Our model is at considerable variance with previous global models and conceptual schematics of the large-scale substorm current system. We speculate that the data coverage, the methodologies and the techniques used in these previous global studies are the cause of the differences in solutions. In this presentation we present our model, compare with other published models and discuss possible causes for the differences.

  4. Solving large scale structure in ten easy steps with COLA

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J.

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small-scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 109Msolar/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 1011Msolar/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.

  5. Improving Design Efficiency for Large-Scale Heterogeneous Circuits

    NASA Astrophysics Data System (ADS)

    Gregerson, Anthony

    Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particle accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms are efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research to develop flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform for evaluating new algorithms for the CMS experiment. Together, these research efforts help to improve the efficiency
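
    As a baseline for the partitioning problems described above, the classic Kernighan-Lin heuristic in networkx bisects a toy netlist graph while keeping the parts balanced. The thesis's Multi-Personality and Information-Aware partitioners go beyond this by modeling device heterogeneity and data entropy, which this sketch does not attempt; the random graph is a hypothetical stand-in for a circuit netlist.

        # Baseline two-way graph partition with Kernighan-Lin; the random
        # regular graph is a toy stand-in for a circuit netlist.
        import networkx as nx
        from networkx.algorithms.community import kernighan_lin_bisection

        netlist = nx.random_regular_graph(d=4, n=32, seed=7)
        part_a, part_b = kernighan_lin_bisection(netlist, seed=7)
        cut = sum(1 for u, v in netlist.edges if (u in part_a) != (v in part_a))
        print(f"{len(part_a)} + {len(part_b)} nodes, {cut} inter-chip connections")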

  6. LARGE-SCALE CO2 TRANSPORTATION AND DEEP OCEAN SEQUESTRATION

    SciTech Connect

    Hamid Sarv

    1999-03-01

    The technical and economic feasibility of large-scale CO2 transportation and ocean sequestration at depths of 3000 meters or greater was investigated. Two options were examined for transporting and disposing of the captured CO2. In one case, CO2 was pumped from a land-based collection center through long pipelines laid on the ocean floor. Another case considered oceanic tanker transport of liquid carbon dioxide to an offshore floating structure for vertical injection to the ocean floor. In the latter case, a novel concept based on subsurface towing of a 3000-meter pipe, and attaching it to the offshore structure, was considered. Budgetary cost estimates indicate that for distances greater than 400 km, tanker transportation and offshore injection through a 3000-meter vertical pipe provides the best method for delivering liquid CO2 to deep ocean floor depressions. For shorter distances, CO2 delivery by parallel-laid subsea pipelines is more cost-effective. Estimated costs for 500-km transport and storage at a depth of 3000 meters by subsea pipelines and tankers were 1.5 and 1.4 dollars per ton of stored CO2, respectively. At these prices, the economics of ocean disposal are highly favorable. Future work should focus on addressing technical issues that are critical to the deployment of a large-scale CO2 transportation and disposal system. Pipe corrosion, structural design of the transport pipe, and dispersion characteristics of sinking CO2 effluent plumes have been identified as areas that require further attention. Our planned activities in the next phase include laboratory-scale corrosion testing, structural analysis of the pipeline, analytical and experimental simulations of CO2 discharge and dispersion, and the conceptual economic and engineering evaluation of large-scale implementation.

  7. Large-Scale periodic solar velocities: An observational study

    NASA Technical Reports Server (NTRS)

    Dittmer, P. H.

    1977-01-01

    Observations of large-scale solar velocities were made using the mean field telescope and Babcock magnetograph of the Stanford Solar Observatory. Observations were made in the magnetically insensitive ion line at 5124 A, with light from the center (limb) of the disk right (left) circularly polarized, so that the magnetograph measures the difference in wavelength between center and limb. Computer calculations are made of the wavelength difference produced by global pulsations for spherical harmonics up to second order and of the signal produced by displacing the solar image relative to polarizing optics or diffraction grating.

  8. Large-scale sodium spray fire code validation (SOFICOV) test

    SciTech Connect

    Jeppson, D.W.; Muhlestein, L.D.

    1985-01-01

    A large-scale sodium spray fire code validation test was performed in the HEDL 850-m³ Containment System Test Facility (CSTF) as part of the Sodium Spray Fire Code Validation (SOFICOV) program. Six hundred fifty-eight kilograms of sodium were sprayed into an air atmosphere over a period of 2400 s. The sodium spray droplet sizes and spray pattern distribution were estimated. The containment atmosphere temperature and pressure response, containment wall temperature response and sodium reaction rate with oxygen were measured. These results are compared to post-test predictions using the SPRAY and NACOM computer codes.

  9. UAV Data Processing for Large Scale Topographical Mapping

    NASA Astrophysics Data System (ADS)

    Tampubolon, W.; Reinhardt, W.

    2014-06-01

    Large-scale topographical mapping in developing countries is a prominent challenge for the geospatial industry today. Demand is increasing significantly, while the budgets available for mapping projects remain limited. Since the advent of Act Nr.4/yr.2011 on Geospatial Information in Indonesia, large-scale topographical mapping has been a high priority for supporting nationwide development, e.g. detailed spatial planning. Large-scale topographical mapping usually relies on conventional aerial survey campaigns to provide high-resolution 3D geospatial data sources. Having grown widely as a leisure hobby, model aircraft in the form of the so-called Unmanned Aerial Vehicle (UAV) offer alternative semi-photogrammetric aerial data acquisition possibilities suitable for relatively small Areas of Interest (AOI), i.e. <5,000 hectares. For detailed spatial planning purposes in Indonesia this area size can serve as a mapping unit, since planning usually concentrates at the sub-district (kecamatan) level. In this paper different camera and processing software systems are analyzed to identify the optimum UAV data acquisition campaign components in combination with the data processing scheme. The selected AOI covers the cultural heritage of Borobudur Temple as one of the Seven Wonders of the World. A detailed accuracy assessment concentrates on the object features of the temple in the first place. Feature compilation involving planimetric objects (2D) and digital terrain models (3D) is integrated in order to provide Digital Elevation Models (DEM) as the main interest of the topographic mapping activity. Incorporating the optimum number of GCPs in the UAV photo data processing increases the accuracy along with its high resolution of 5 cm Ground Sampling Distance (GSD). Finally this result will be used as the benchmark for alternative geospatial
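
    The 5 cm figure follows from standard Ground Sampling Distance arithmetic: GSD = pixel pitch × flying height / focal length. The camera parameters below are illustrative assumptions, not those reported for the Borobudur campaign.

        # Standard GSD formula; camera and altitude values are assumptions.
        sensor_pixel_m = 4.4e-6    # 4.4 micron pixel pitch (assumed camera)
        focal_length_m = 0.016     # 16 mm lens (assumed)
        altitude_m = 180.0         # flying height above ground (assumed)

        gsd_m = sensor_pixel_m * altitude_m / focal_length_m
        print(f"GSD = {gsd_m * 100:.1f} cm/pixel")   # ~5 cm at these settings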

  10. Large scale obscuration and related climate effects open literature bibliography

    SciTech Connect

    Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.

    1994-05-01

    Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called "Nuclear Winter Controversy" in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional, effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

  11. Water-based scintillators for large-scale liquid calorimetry

    SciTech Connect

    Winn, D.R.; Raftery, D.

    1985-02-01

    We have investigated primary and secondary solvent intermediates in search of a recipe to create a bulk liquid scintillator with water as the bulk solvent and common fluors as the solutes. As we are not concerned with energy resolution below 1 MeV in large-scale experiments, light-output at the 10% level of high-quality organic solvent based scintillators is acceptable. We have found encouraging performance from industrial surfactants as primary solvents for PPO and POPOP. This technique may allow economical and environmentally safe bulk scintillator for kiloton-sized high energy calorimetry.

  12. Enabling Large-Scale Biomedical Analysis in the Cloud

    PubMed Central

    Lin, Ying-Chih; Yu, Chin-Sheng; Lin, Yen-Jen

    2013-01-01

    Recent progress in high-throughput instrumentation has led to an astonishing growth in both the volume and complexity of biomedical data collected from various sources. The planet-scale data bring serious challenges to storage and computing technologies. Cloud computing is an alternative that can crack the nut because it jointly enables storage and high-performance computing on large-scale data. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications will help biomedical research make the vast amount of diverse data meaningful and usable. PMID:24288665

  13. Large-Scale Measurement of Absolute Protein Glycosylation Stoichiometry.

    PubMed

    Sun, Shisheng; Zhang, Hui

    2015-07-07

    Protein glycosylation is one of the most important protein modifications. Glycosylation site occupancy alteration has been implicated in human diseases and cancers. However, current glycoproteomic methods focus on the identification and quantification of glycosylated peptides and glycosylation sites but not glycosylation occupancy or glycoform stoichiometry. Here we describe a method for large-scale determination of the absolute glycosylation stoichiometry using three independent relative ratios. Using this method, we determined 117 absolute N-glycosylation occupancies in OVCAR-3 cells. Finally, we investigated the possible functions and the determinants for partial glycosylation.

  14. Large scale mortality of nestling ardeids caused by nematode infection.

    PubMed

    Wiese, J H; Davidson, W R; Nettles, V F

    1977-10-01

    During the summer of 1976, an epornitic of verminous peritonitis caused by Eustrongylides ignotus resulted in large-scale mortality of young herons and egrets on Pea Patch Island, Delaware. Mortality was highest (84%) in snowy egret nestlings (Egretta thula) and less severe in great egrets (Casmerodius albus), Louisiana herons (Hydranassa tricolor), little blue herons (Florida caerulea), and black-crowned night herons (Nycticorax nycticorax). Most deaths occurred within the first 4 weeks after hatching. Migration of E. ignotus resulted in multiple perforations of the visceral organs, escape of intestinal contents into the body cavity and subsequent bacterial peritonitis. Killifish (Fundulus heteroclitus) served as the source of infective larvae.

  15. Integrated High Accuracy Portable Metrology for Large Scale Structural Testing

    NASA Astrophysics Data System (ADS)

    Klaas, Andrej; Richardson, Paul; Burguete, Richard; Harris, Linden

    2014-06-01

    As the performance and accuracy of analysis tools increase, bespoke solutions are more regularly being requested to perform high-accuracy measurements on structural tests to validate these methods. These can include optical methods and full-field techniques in place of the more traditional point measurements. As each test is unique, it presents its own individual challenges. In this paper, two recent large-scale tests performed by Airbus will be presented and the metrology solutions that were identified for them will be discussed.

  16. Large-scale normal fluid circulation in helium superflows

    NASA Astrophysics Data System (ADS)

    Galantucci, Luca; Sciacca, Michele; Barenghi, Carlo F.

    2017-01-01

    We perform fully coupled numerical simulations of helium II pure superflows in a channel, with vortex-line density typical of experiments. Peculiar to our model is the computation of the back-reaction of the superfluid vortex motion on the normal fluid and the presence of solid boundaries. We recover the uniform vortex-line density experimentally measured employing second sound resonators and we show that pure superflow in helium II is associated with a large-scale circulation of the normal fluid which can be detected using existing particle-tracking visualization techniques.

  17. Large-scale genotoxicity assessments in the marine environment.

    PubMed Central

    Hose, J E

    1994-01-01

    There are a number of techniques for detecting genotoxicity in the marine environment, and many are applicable to large-scale field assessments. Certain tests can be used to evaluate responses in target organisms in situ while others utilize surrogate organisms exposed to field samples in short-term laboratory bioassays. Genotoxicity endpoints appear distinct from traditional toxicity endpoints, but some have chemical or ecotoxicologic correlates. One versatile end point, the frequency of anaphase aberrations, has been used in several large marine assessments to evaluate genotoxicity in the New York Bight, in sediment from San Francisco Bay, and following the Exxon Valdez oil spill. PMID:7713029

  18. Towards large scale production and separation of carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Alvarez, Noe T.

    Since their discovery, carbon nanotubes (CNTs) have boosted the research and applications of nanotechnology; however, many applications of CNTs are inaccessible because they depend upon large-scale CNT production and separations. Type, chirality and diameter control of CNTs determine many of their physical properties, and such control is still not accessible. This thesis studies the fundamentals of scalable selective reactions of HiPCo CNTs as well as the early phase of routes to an inexpensive approach for large-scale CNT production. On the growth side, this thesis covers a complete wet-chemistry process of catalyst and catalyst-support deposition for growth of vertically aligned (VA) CNTs. A wet-chemistry preparation process is of significant importance for CNT synthesis through chemical vapor deposition (CVD). CVD is by far the most suitable and inexpensive process for large-scale CNT production when compared to other common processes such as laser ablation and arc discharge. However, its potential has been limited by low-yielding and difficult preparation processes for the catalyst and its support, which have reduced its competitiveness. The wet-chemistry process takes advantage of current nanoparticle technology to deposit the catalyst and the catalyst support as a thin film of nanoparticles, making the protocol simple compared to electron beam evaporation and sputtering processes. On the selective-reactions side, this thesis studies UV irradiation of individually dispersed HiPCo CNTs, which generates auto-selective reactions in the liquid phase with good control over their diameter and chirality. This technique is ideal for large-scale, continuous-process separations of CNTs by diameter and type. Additionally, an innovatively simple catalyst deposition through abrasion is demonstrated: simple friction between the catalyst and the substrate deposits a high enough density of metal catalyst particles for successful CNT growth. This simple approach has

  19. Clusters as cornerstones of large-scale structure.

    NASA Astrophysics Data System (ADS)

    Gottlöber, S.; Retzlaff, J.; Turchaninov, V.

    1997-04-01

    Galaxy clusters are one of the best tracers of large-scale structure in the Universe on scales well above 100 Mpc. The authors investigate here the clustering properties of a redshift sample of Abell/ACO clusters and compare the observational sample with mock samples constructed from N-body simulations on the basis of four different cosmological models. The authors discuss the power spectrum, the Minkowski functionals and the void statistics of these samples and conclude that the SCDM and TCDM models are ruled out whereas the ΛCDM and BSI models are in agreement with the observational data.

  20. Large-Scale Patterns of Filament Channels and Filaments

    NASA Astrophysics Data System (ADS)

    Mackay, Duncan

    2016-07-01

    In this review the properties and large-scale patterns of filament channels and filaments will be considered. Initially, the global formation locations of filament channels and filaments are discussed, along with their hemispheric pattern. Next, observations of the formation of filament channels and filaments are described where two opposing views are considered. Finally, the wide range of models that have been constructed to consider the formation of filament channels and filaments over long time-scales are described, along with the origin of the hemispheric pattern of filaments.

  1. Quantum computation for large-scale image classification

    NASA Astrophysics Data System (ADS)

    Ruan, Yue; Chen, Hanwu; Tan, Jianing; Li, Xi

    2016-10-01

    Due to the lack of an effective quantum feature extraction method, there is currently no effective way to perform quantum image classification or recognition. In this paper, for the first time, a global quantum feature extraction method based on Schmidt decomposition is proposed. A revised quantum learning algorithm is also proposed that classifies images by computing the Hamming distance between these features. Experimental results on the benchmark database Caltech 101, together with an analysis of the algorithm, yield an effective approach to large-scale image classification in the setting of big data.
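
    To make the distance rule above concrete, here is a minimal classical sketch: nearest-prototype classification by Hamming distance in plain Python/NumPy. It is a stand-in only; the paper's Schmidt-decomposition feature extraction and quantum learning algorithm are not reproduced, and the binary vectors, labels and function names are hypothetical.

        import numpy as np

        def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
            # Count positions where two binary feature vectors differ.
            return int(np.count_nonzero(a != b))

        def classify(query: np.ndarray, prototypes: dict) -> str:
            # Assign the label of the prototype nearest in Hamming distance.
            return min(prototypes, key=lambda lab: hamming_distance(query, prototypes[lab]))

        # Toy 8-bit feature vectors standing in for extracted image features.
        prototypes = {
            "cat":   np.array([1, 0, 1, 1, 0, 0, 1, 0]),
            "plane": np.array([0, 1, 0, 0, 1, 1, 0, 1]),
        }
        query = np.array([1, 0, 1, 0, 0, 0, 1, 0])
        print(classify(query, prototypes))  # "cat": distance 1 vs. 7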

  2. Large-scale genotoxicity assessments in the marine environment

    SciTech Connect

    Hose, J.E.

    1994-12-01

    There are a number of techniques for detecting genotoxicity in the marine environment, and many are applicable to large-scale field assessments. Certain tests can be used to evaluate responses in target organisms in situ while others utilize surrogate organisms exposed to field samples in short-term laboratory bioassays. Genotoxicity endpoints appear distinct from traditional toxicity endpoints, but some have chemical or ecotoxicologic correlates. One versatile endpoint, the frequency of anaphase aberrations, has been used in several large marine assessments to evaluate genotoxicity in the New York Bight, in sediment from San Francisco Bay, and following the Exxon Valdez oil spill. 31 refs., 2 tabs.

  3. Solving Large-scale Eigenvalue Problems in SciDAC Applications

    SciTech Connect

    Yang, Chao

    2005-06-29

    Large-scale eigenvalue problems arise in a number of DOE applications. This paper provides an overview of the recent development of eigenvalue computation in the context of two SciDAC applications. We emphasize the importance of Krylov subspace methods and point out their limitations. We discuss the value of alternative approaches that are more amenable to the use of preconditioners, and report progress in using multi-level algebraic sub-structuring techniques to speed up eigenvalue calculation. In addition to methods for linear eigenvalue problems, we also examine new approaches to solving two types of non-linear eigenvalue problems arising from SciDAC applications.
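
    For readers who want to experiment with the Krylov subspace methods emphasized above, the following minimal sketch uses SciPy's Lanczos-based sparse eigensolver on a standard test matrix. It illustrates the class of methods discussed, not the SciDAC codes themselves; the shift-invert setting stands in loosely for the role of a spectral transformation paired with preconditioning ideas.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import eigsh

        # 1D discrete Laplacian: sparse, symmetric, well-understood spectrum.
        n = 1000
        lap = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

        # Lanczos (a Krylov subspace method) for the 5 eigenvalues nearest 0;
        # sigma=0 activates shift-invert, helpful for small/interior eigenvalues.
        vals, vecs = eigsh(lap, k=5, sigma=0, which="LM")
        print(np.sort(vals))  # approx. 2 - 2*cos(j*pi/(n+1)) for j = 1..5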

  4. Analysis Plan for 1985 Large-Scale Tests.

    DTIC Science & Technology

    1983-01-01

    Only fragments of the report documentation page and the table of contents are recoverable. Keywords: large-scale blasting agents, multiburst, ANFO, shock waves. The contents cover multiburst techniques, test-site considerations, and candidate explosives, including bulk (loose) and bagged ANFO, APEX 1360, nitric acid and nitropropane, nitropropane nitrate (NPN), DBA-22M, and a hardening emulsion.

  5. Frequency domain multiplexing for large-scale bolometer arrays

    SciTech Connect

    Spieler, Helmuth

    2002-05-31

    The development of planar fabrication techniques for superconducting transition-edge sensors has brought large-scale arrays of 1000 pixels or more to the realm of practicality. This raises the problem of reading out a large number of sensors with a tractable number of connections. A possible solution is frequency-domain multiplexing. I summarize basic principles, present various circuit topologies, and discuss design trade-offs, noise performance, cross-talk and dynamic range. The design of a practical device and its readout system is described with a discussion of fabrication issues, practical limits and future prospects.
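
    The principle can be demonstrated numerically. In the hedged sketch below, two slow "sensor" signals amplitude-modulate carriers at distinct frequencies, share a single readout line, and are recovered by lock-in demodulation; the frequencies, amplitudes and filter are arbitrary illustrative choices, not the cryogenic readout hardware described in the record.

        import numpy as np

        fs = 100_000                          # sample rate (Hz)
        t = np.arange(0, 1.0, 1 / fs)
        carriers = {0: 5_000.0, 1: 12_000.0}  # carrier frequencies (Hz)
        signals = {0: 1 + 0.5 * np.sin(2 * np.pi * 3 * t),
                   1: 1 + 0.2 * np.sin(2 * np.pi * 7 * t)}

        # All modulated carriers are summed onto one readout line.
        line = sum(signals[ch] * np.sin(2 * np.pi * f * t)
                   for ch, f in carriers.items())

        def demodulate(line, f, t, taps=2001):
            # Lock-in recovery: mix down to baseband, then low-pass
            # with a crude moving-average filter.
            mixed = 2 * line * np.sin(2 * np.pi * f * t)
            return np.convolve(mixed, np.ones(taps) / taps, mode="same")

        ch0 = demodulate(line, carriers[0], t)  # tracks signals[0] away from edges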

  6. A Modular Ring Architecture for Large Scale Neural Network Implementations

    NASA Astrophysics Data System (ADS)

    Jump, Lance B.; Ligomenides, Panos A.

    1989-11-01

    Constructing fully parallel, large-scale neural networks is complicated by the problems of providing for massive interconnectivity and of overcoming fan-in/fan-out limitations in area-efficient VLSI/WSI realizations. A modular, bus-switched, neural ring architecture employing primitive ring (pRing) processors is proposed, which solves the fan-in/fan-out and connectivity problems by means of a dynamically reconfigurable communication ring that synchronously serves identical, radially connected processing elements. It also allows cost-versus-performance trade-offs by assigning variable numbers of logical neurons to each physical processing element.

  7. Design of a large-scale CFB boiler

    SciTech Connect

    Darling, S.; Li, S.

    1997-12-31

    Many CFB boilers sized 100-150 MWe are in operation, and several others sized 150-250 MWe are in operation or under construction. The next step for CFB technology is the 300-400 MWe size range. This paper will describe Foster Wheeler's large-scale CFB boiler experience and the design for a 300 MWe CFB boiler. The authors will show how the design incorporates Foster Wheeler's unique combination of extensive utility experience and CFB boiler experience. All the benefits of CFB technology, which include low emissions, fuel flexibility, low maintenance and competitive cost, are now available in the 300-400 MWe size range.

  8. Simplified DGS procedure for large-scale genome structural study.

    PubMed

    Jung, Yong-Chul; Xu, Jia; Chen, Jun; Kim, Yeong; Winchester, David; Wang, San Ming

    2009-11-01

    Ditag genome scanning (DGS) uses next-generation DNA sequencing to sequence the ends of ditag fragments produced by restriction enzymes. These sequences are compared to known genome sequences to determine their structure. In order to use DGS for large-scale genome structural studies, we have substantially revised the original protocol by replacing the in vivo genomic DNA cloning with in vitro adaptor ligation, eliminating the ditag concatemerization steps, and replacing the 454 sequencer with Solexa or SOLiD sequencers for ditag sequence collection. This revised protocol further increases genome coverage and resolution and allows DGS to be used to analyze multiple genomes simultaneously.

  9. Large-Scale Compton Imaging for Wide-Area Surveillance

    SciTech Connect

    Lange, D J; Manini, H A; Wright, D M

    2006-03-01

    We study the performance of a large-scale Compton imaging detector placed in a low-flying aircraft, used to search wide areas for rad/nuc threat sources. In this paper we investigate the performance potential of equipping aerial platforms with gamma-ray detectors that have photon sensitivity up to a few MeV. We simulate the detector performance, and present receiver operating characteristic (ROC) curves for a benchmark scenario using a ¹³⁷Cs source. The analysis uses a realistic environmental background energy spectrum and includes air attenuation.

  10. Decentrally stabilizable linear and bilinear large-scale systems

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Vukcevic, M. B.

    1977-01-01

    Two classes of large-scale systems are identified which can always be stabilized by decentralized feedback control. For the class of systems composed of interconnected linear subsystems, we can choose local controllers for the subsystems to achieve stability of the overall system. The same linear feedback scheme can be used to stabilize a class of linear systems with bilinear interconnections. In this case, however, the scheme is used to establish a finite region of stability for the overall system. The stabilization algorithm is applied to the design of a control system for the Large Space Telescope.
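
    In the standard formulation behind results of this kind (notation ours, not necessarily the paper's), the interconnected system and its local feedback can be written as

        \[
        \dot{x}_i = A_i x_i + b_i u_i + \sum_{j \neq i} A_{ij} x_j,
        \qquad u_i = -k_i^{\mathsf{T}} x_i, \qquad i = 1, \dots, N,
        \]

    where each local gain $k_i$ is chosen so that the decoupled closed-loop subsystems $\dot{x}_i = (A_i - b_i k_i^{\mathsf{T}}) x_i$ have enough stability margin to dominate the interconnection terms $A_{ij} x_j$.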

  11. Large-scale structure from wiggly cosmic strings

    NASA Astrophysics Data System (ADS)

    Vachaspati, Tanmay; Vilenkin, Alexander

    1991-08-01

    Recent simulations of the evolution of cosmic strings indicate the presence of small-scale structure on the strings. It is shown that wakes produced by such 'wiggly' cosmic strings can result in the efficient formation of large-scale structure and large streaming velocities in the universe without significantly affecting the microwave-background isotropy. It is also argued that the motion of strings will lead to the generation of a primordial magnetic field. The most promising version of this scenario appears to be the one in which the universe is dominated by light neutrinos.

  12. Structure and function of large-scale brain systems.

    PubMed

    Koziol, Leonard F; Barker, Lauren A; Joyce, Arthur W; Hrin, Skip

    2014-01-01

    This article introduces the functional neuroanatomy of large-scale brain systems. Both the structure and functions of these brain networks are presented. All human behavior is the result of interactions within and between these brain systems. This system of brain function completely changes our understanding of how cognition and behavior are organized within the brain, replacing the traditional lesion model. Understanding behavior within the context of brain network interactions has profound implications for modifying abstract constructs such as attention, learning, and memory. These constructs also must be understood within the framework of a paradigm shift, which emphasizes ongoing interactions within a dynamically changing environment.

  13. Investigation of the Large Scale Evolution and Topology of Coronal Mass Ejections in the Solar Wind

    NASA Technical Reports Server (NTRS)

    Riley, Peter

    1999-01-01

    This investigation is concerned with the large-scale evolution and topology of Coronal Mass Ejections (CMEs) in the solar wind. During this reporting period we have analyzed a series of low density intervals in the ACE (Advanced Composition Explorer) plasma data set that bear many similarities to CMEs. We have begun a series of 3D, MHD (Magnetohydrodynamics) coronal models to probe potential causes of these events. We also edited two manuscripts concerning the properties of CMEs in the solar wind. One was re-submitted to the Journal of Geophysical Research.

  14. Cost-Driven Design of a Large Scale X-Plane

    NASA Technical Reports Server (NTRS)

    Welstead, Jason R.; Frederic, Peter C.; Frederick, Michael A.; Jacobson, Steven R.; Berton, Jeffrey J.

    2017-01-01

    A conceptual design process focused on the development of a low-cost, large-scale X-plane was developed as part of an internal research and development effort. One of the concepts considered for this process was the double-bubble configuration recently developed as an advanced single-aisle-class commercial transport similar in size to a Boeing 737-800 or Airbus A320. The study objective was to reduce the contractor cost from contract award to first test flight to less than $100 million and to achieve first flight within three years of contract award. Methods and strategies for reduced cost are discussed.

  15. Contributions to the understanding of large-scale coherent structures in developing free turbulent shear flows

    NASA Technical Reports Server (NTRS)

    Liu, J. T. C.

    1986-01-01

    Advances in the mechanics of boundary layer flow are reported. The physical problem of large-scale coherent structures in real, developing free turbulent shear flows is addressed from the nonlinear aspects of hydrodynamic stability. Whether fine-grained turbulence is present or absent, the problem lacks a small parameter. The problem is presented on the basis of conservation principles, which express the dynamics of the problem and are directed towards extracting the most physical information; it is emphasized, however, that approximations must also be involved.

  16. Large-Scale Demonstration of Liquid Hydrogen Storage with Zero Boiloff for In-Space Applications

    NASA Technical Reports Server (NTRS)

    Hastings, L. J.; Bryant, C. B.; Flachbart, R. H.; Holt, K. A.; Johnson, E.; Hedayat, A.; Hipp, B.; Plachta, D. W.

    2010-01-01

    Cryocooler and passive insulation technology advances have substantially improved prospects for zero-boiloff cryogenic storage. Therefore, a cooperative effort by NASA's Ames Research Center, Glenn Research Center, and Marshall Space Flight Center (MSFC) was implemented to develop zero-boiloff concepts for in-space cryogenic storage. Described herein is one program element - a large-scale, zero-boiloff demonstration using the MSFC multipurpose hydrogen test bed (MHTB). A commercial cryocooler was interfaced with an existing MHTB spray bar mixer and insulation system in a manner that enabled a balance between incoming and extracted thermal energy.

  17. The combustion behavior of large scale lithium titanate battery

    PubMed Central

    Huang, Peifeng; Wang, Qingsong; Li, Ke; Ping, Ping; Sun, Jinhua

    2015-01-01

    Safety remains a major obstacle to the large-scale application of lithium batteries, yet knowledge of battery combustion behavior is limited. To investigate the combustion behavior of large-scale lithium batteries, three 50 Ah Li(NixCoyMnz)O2/Li4Ti5O12 batteries at different states of charge (SOC) were heated until they caught fire. The variation in flame size is depicted to characterize the combustion behavior directly, and the mass loss rate, temperature and heat release rate are used to analyze the underlying reactions in greater depth. Based on these observations, the combustion process is divided into three basic stages, becoming more complicated at higher SOC, with sudden ejections of smoke. The reason is a phase change in the Li(NixCoyMnz)O2 material from a layered structure to a spinel structure. The critical ignition temperatures are 112-121 °C on the anode tab and 139-147 °C on the upper surface for all cells, but the heating time and combustion time become shorter as SOC increases. The results indicate that the battery fire hazard increases with SOC. Internal short circuits and the Li+ distribution are identified as the main causes of this difference. PMID:25586064

  18. Very large-scale motions in a turbulent pipe flow

    NASA Astrophysics Data System (ADS)

    Lee, Jae Hwa; Jang, Seong Jae; Sung, Hyung Jin

    2011-11-01

    Direct numerical simulation of a turbulent pipe flow at ReD = 35000 was performed to investigate the spatially coherent structures associated with very large-scale motions. The corresponding friction Reynolds number, based on the pipe radius R, is R+ = 934, and the computational domain length is 30R. The computed mean flow statistics agree well with previous DNS data at ReD = 44000 and 24000. Inspection of the instantaneous fields and the two-point correlation of the streamwise velocity fluctuations showed that very long meandering motions exceeding 25R exist in the logarithmic and wake regions, and that the streamwise length scale increases almost linearly up to y/R ~ 0.3, while the structures in a turbulent boundary layer reach only to the edge of the log layer. Time-resolved instantaneous fields revealed that hairpin packet-like structures grow with continuous stretching along the streamwise direction and create the very large-scale structures, meandering in the spanwise direction, consistent with the earlier conceptual model of Kim & Adrian (1999). This work was supported by the Creative Research Initiatives of NRF/MEST of Korea (No. 2011-0000423).
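
    As a hedged sketch of the two-point correlation analysis mentioned above: for streamwise-periodic data, the streamwise autocorrelation of the velocity fluctuations can be computed efficiently via FFT, as below. The toy signal and any threshold used to read off a structure length are illustrative assumptions, not the study's DNS data.

        import numpy as np

        def two_point_correlation(u: np.ndarray) -> np.ndarray:
            # u: streamwise velocity samples at one wall-normal location on a
            # uniform, periodic streamwise grid. Returns R(dx) with R(0) = 1.
            up = u - u.mean()                      # velocity fluctuations
            spec = np.fft.rfft(up)
            corr = np.fft.irfft(spec * np.conj(spec), n=up.size)
            return corr / corr[0]

        # Long positive tails of R_uu at log-layer heights are the signature
        # of very large-scale motions in analyses of this kind.
        u = np.sin(np.linspace(0, 8 * np.pi, 4096)) + 0.1 * np.random.randn(4096)
        R = two_point_correlation(u)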

  19. Topographically Engineered Large Scale Nanostructures for Plasmonic Biosensing

    PubMed Central

    Xiao, Bo; Pradhan, Sangram K.; Santiago, Kevin C.; Rutherford, Gugu N.; Pradhan, Aswini K.

    2016-01-01

    We demonstrate that a nanostructured metal thin film can achieve enhanced transmission efficiency and sharp resonances and use a large-scale and high-throughput nanofabrication technique for the plasmonic structures. The fabrication technique combines the features of nanoimprint and soft lithography to topographically construct metal thin films with nanoscale patterns. Metal nanogratings developed using this method show significantly enhanced optical transmission (up to a one-order-of-magnitude enhancement) and sharp resonances with full width at half maximum (FWHM) of ~15nm in the zero-order transmission using an incoherent white light source. These nanostructures are sensitive to the surrounding environment, and the resonance can shift as the refractive index changes. We derive an analytical method using a spatial Fourier transformation to understand the enhancement phenomenon and the sensing mechanism. The use of real-time monitoring of protein-protein interactions in microfluidic cells integrated with these nanostructures is demonstrated to be effective for biosensing. The perpendicular transmission configuration and large-scale structures provide a feasible platform without sophisticated optical instrumentation to realize label-free surface plasmon resonance (SPR) sensing. PMID:27072067

  20. Semantic overlay network for large-scale spatial information indexing

    NASA Astrophysics Data System (ADS)

    Zou, Zhiqiang; Wang, Yue; Cao, Kai; Qu, Tianshan; Wang, Zhongmin

    2013-08-01

    The increased demand for online services of spatial information poses new challenges to the combined field of Computer Science and Geographic Information Science. Amongst others, these include fast indexing of spatial data in distributed networks. In this paper we propose a novel semantic overlay network for large-scale multi-dimensional spatial information indexing, called SON_LSII, which has a hybrid structure integrating a semantic quad-tree and a Chord ring. SON_LSII is a small-world overlay network that achieves a very competitive trade-off between indexing efficiency and maintenance overhead. To create SON_LSII, we use an effective semantic clustering strategy that considers two aspects: the semantics of the spatial information that each peer holds in the overlay network, and the performance of the physical network. Based on SON_LSII, a mapping method is used to reduce the multi-dimensional features into a single dimension, and an efficient indexing algorithm is presented to support complex range queries of the spatial information with a massive number of concurrent users. The results from extensive experiments demonstrate that SON_LSII is superior to existing overlay networks in various respects, including scalability, maintenance, rate of indexing hits, indexing logical hops, and adaptability. Thus, the proposed SON_LSII can be used for large-scale spatial information indexing.
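
    The record does not specify how the multi-dimensional reduction is done; one standard, locality-preserving choice is a Z-order (Morton) curve, sketched below, which folds 2D spatial keys into a single dimension so that a Chord ring's 1D key space can serve spatial range queries.

        def morton_encode(x: int, y: int, bits: int = 16) -> int:
            # Interleave coordinate bits into a single Z-order key:
            # x bits occupy even positions, y bits odd positions.
            key = 0
            for i in range(bits):
                key |= ((x >> i) & 1) << (2 * i)
                key |= ((y >> i) & 1) << (2 * i + 1)
            return key

        print(morton_encode(3, 5))  # 0b100111 = 39; nearby cells get nearby keys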

  1. The combustion behavior of large scale lithium titanate battery.

    PubMed

    Huang, Peifeng; Wang, Qingsong; Li, Ke; Ping, Ping; Sun, Jinhua

    2015-01-14

    Safety remains a major obstacle to the large-scale application of lithium batteries, yet knowledge of battery combustion behavior is limited. To investigate the combustion behavior of large-scale lithium batteries, three 50 Ah Li(Ni(x)Co(y)Mn(z))O2/Li(4)Ti(5)O(12) batteries at different states of charge (SOC) were heated until they caught fire. The variation in flame size is depicted to characterize the combustion behavior directly, and the mass loss rate, temperature and heat release rate are used to analyze the underlying reactions in greater depth. Based on these observations, the combustion process is divided into three basic stages, becoming more complicated at higher SOC, with sudden ejections of smoke. The reason is a phase change in the Li(Ni(x)Co(y)Mn(z))O2 material from a layered structure to a spinel structure. The critical ignition temperatures are 112-121 °C on the anode tab and 139-147 °C on the upper surface for all cells, but the heating time and combustion time become shorter as SOC increases. The results indicate that the battery fire hazard increases with SOC. Internal short circuits and the Li(+) distribution are identified as the main causes of this difference.

  2. Maestro: An Orchestration Framework for Large-Scale WSN Simulations

    PubMed Central

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such tasks. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  3. Large-Scale Low-Boom Inlet Test Overview

    NASA Technical Reports Server (NTRS)

    Hirt, Stefanie

    2011-01-01

    This presentation provides a high-level overview of the Large-Scale Low-Boom Inlet Test and was presented at the Fundamental Aeronautics 2011 Technical Conference. In October 2010 a low-boom supersonic inlet concept with flow control was tested in the 8'x6' supersonic wind tunnel at NASA Glenn Research Center (GRC). The primary objectives of the test were to evaluate the inlet stability and operability of a large-scale low-boom supersonic inlet concept by acquiring performance and flowfield validation data, as well as to evaluate simple, passive, bleedless inlet boundary layer control options. During this effort two models were tested: a dual-stream inlet intended to model potential flight hardware and a single-stream design to study a zero-degree external cowl angle and to permit surface flow visualization of the vortex generator flow control on the internal centerbody surface. The tests were conducted by a team of researchers from NASA GRC, Gulfstream Aerospace Corporation, the University of Illinois at Urbana-Champaign, and the University of Virginia.

  4. Wall turbulence manipulation by large-scale streamwise vortices

    NASA Astrophysics Data System (ADS)

    Iuso, Gaetano; Onorato, Michele; Spazzini, Pier Giorgio; di Cicca, Gaetano Maria

    2002-12-01

    This paper describes an experimental study of the manipulation of a fully developed turbulent channel flow through large-scale streamwise vortices originated by vortex generator jets distributed along the wall in the spanwise direction. Apart from the interest in flow management itself, an important aim of the research is to observe the response of the flow to external perturbations as a technique for investigating the structure of turbulence. Considerable mean and fluctuating skin friction reductions, locally as high as 30% and 50% respectively, were measured for an optimal forcing flow intensity. Mean and fluctuating velocity profiles are also greatly modified by the manipulating large-scale vortices; in particular, attenuation of the turbulence intensity was measured. Moreover, the flow manipulation caused an increase in the longitudinal coherence of the wall organized motions, accompanied by a reduced frequency of burst events, demonstrated by a reduction of the velocity time-derivative PDFs and by a higher intermittency. A strong transversal periodic organization of the flow field was observed, including some typical behaviours in each of the periodic boxes originated by the interaction of the vortex pairs. Results are interpreted and discussed in terms of management of the near-wall turbulent structures and with reference to the wall turbulence regeneration mechanisms suggested in the literature.

  5. Power suppression at large scales in string inflation

    SciTech Connect

    Cicoli, Michele; Downes, Sean; Dutta, Bhaskar

    2013-12-01

    We study a possible origin of the anomalous suppression of the power spectrum at large angular scales in the cosmic microwave background within the framework of explicit string inflationary models where inflation is driven by a closed string modulus parameterizing the size of the extra dimensions. In this class of models the apparent power loss at large scales is caused by the background dynamics, which involves a sharp transition from a fast-roll power-law phase to a period of Starobinsky-like slow-roll inflation. An interesting feature of this class of string inflationary models is that the number of e-foldings of inflation is inversely proportional to the string coupling to a positive power. Therefore, once the string coupling is tuned to small values in order to trust string perturbation theory, enough e-foldings of inflation are automatically obtained without the need of extra tuning. Moreover, in the less tuned cases the sharp transition responsible for the power loss takes place just before the last 50-60 e-foldings of inflation. We illustrate these general claims in the case of Fibre Inflation, where we study the strength of this transition in terms of the attractor dynamics, finding that it induces a pivot from a blue to a red-tilted power spectrum which can explain the apparent large-scale power loss. We compute the effects of this pivot for example cases and demonstrate how the magnitude and duration of this effect depend on model parameters.
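
    Schematically, the scaling claimed above can be written (the exponent and normalization are model-dependent and not given in the record) as

        \[
        N_e \;\propto\; g_s^{-p}, \qquad p > 0,
        \]

    so tuning the string coupling $g_s$ small enough to trust string perturbation theory automatically drives the number of e-foldings $N_e$ up, with no independent tuning.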

  6. Knocking down highly-ordered large-scale nanowire arrays.

    PubMed

    Pevzner, Alexander; Engel, Yoni; Elnathan, Roey; Ducobni, Tamir; Ben-Ishai, Moshit; Reddy, Koteeswara; Shpaisman, Nava; Tsukernik, Alexander; Oksman, Mark; Patolsky, Fernando

    2010-04-14

    The large-scale assembly of nanowire elements with controlled and uniform orientation and density at spatially well-defined locations on solid substrates presents one of the most significant challenges facing their integration in real-world electronic applications. Here, we present the universal "knocking-down" approach, based on the controlled in-place planarization of nanowire elements, for the formation of large-scale ordered nanowire arrays. The controlled planarization of the nanowires is achieved by the use of an appropriate elastomer-covered rigid-roller device. After being knocked down, each nanowire in the array can be easily addressed electrically by a simple single photolithographic step, to yield a large number of nanoelectrical devices with an unprecedented high-fidelity rate. The approach allows controlling, in only two simple steps, all possible array parameters, that is, nanowire dimensions, chemical composition, orientation, and density. The resulting knocked-down arrays can be further used for the creation of massive nanoelectronic-device arrays. More than a million devices have already been fabricated, with yields over 98%, on substrate areas of up to (but not limited to) 10 cm².

  7. IP over optical multicasting for large-scale video delivery

    NASA Astrophysics Data System (ADS)

    Jin, Yaohui; Hu, Weisheng; Sun, Weiqiang; Guo, Wei

    2007-11-01

    In IPTV systems, multicasting will play a crucial role in the delivery of high-quality video services, since it can significantly improve bandwidth efficiency. However, the scalability and signal quality of current IPTV can barely compete with existing broadcast digital TV systems, because it is difficult to implement large-scale multicasting with end-to-end guaranteed quality of service (QoS) in a packet-switched IP network. The China 3TNet project aimed to build a high-performance broadband trial network to support large-scale concurrent streaming media and interactive multimedia services. The innovative idea of 3TNet is that an automatically switched optical network (ASON) with the capability of dynamic point-to-multipoint (P2MP) connections replaces the conventional IP multicasting network in the transport core, while the edge remains an IP multicasting network. In this paper, we introduce the network architecture and discuss the challenges of such IP-over-optical multicasting for video delivery.

  8. High Speed Networking and Large-scale Simulation in Geodynamics

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Gary, Patrick; Seablom, Michael; Truszkowski, Walt; Odubiyi, Jide; Jiang, Weiyuan; Liu, Dong

    2004-01-01

    Large-scale numerical simulation has been one of the most important approaches for understanding global geodynamical processes. In this approach, on the order of peta (10^15) floating-point operations are often required to carry out a single physically meaningful numerical experiment. For example, to model convective flow in the Earth's core and the generation of the geomagnetic field (the geodynamo), simulating one magnetic free-decay time (approximately 15000 years) at a modest resolution of 150 grid points in each of three spatial dimensions would require approximately 0.2 peta floating-point operations. If such a numerical model is used to predict geomagnetic secular variation over decades and longer, with e.g. an ensemble Kalman filter assimilation approach, approximately 30 (and perhaps more) independent simulations of similar scale would be needed for one data assimilation analysis. Obviously, such an effort would require an enormous computing resource that exceeds the capacity of any single facility currently at our disposal. One solution is to utilize a very fast network (e.g. 10 Gb optical networks) and available middleware (e.g. the Globus Toolkit) to allocate available but often heterogeneous resources for such large-scale computing efforts. At NASA GSFC, we are experimenting with such an approach by networking several clusters for geomagnetic data assimilation research. We shall present our initial testing results at the meeting.

  9. Maestro: an orchestration framework for large-scale WSN simulations.

    PubMed

    Riliskis, Laurynas; Osipov, Evgeny

    2014-03-18

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such tasks. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.

  10. Detecting differential protein expression in large-scale population proteomics

    SciTech Connect

    Ryu, Soyoung; Qian, Weijun; Camp, David G.; Smith, Richard D.; Tompkins, Ronald G.; Davis, Ronald W.; Xiao, Wenzhong

    2014-06-17

    Mass spectrometry-based high-throughput quantitative proteomics shows great potential in clinical biomarker studies, identifying and quantifying thousands of proteins in biological samples. However, methods are needed to appropriately handle issues and challenges unique to mass spectrometry data in order to detect as many biomarker proteins as possible. One issue is that different mass spectrometry experiments generate quite different total numbers of quantified peptides, which can result in more missing peptide abundances in an experiment with a smaller total number of quantified peptides. Another issue is that the quantification of peptides is sometimes absent, especially for less abundant peptides, and such missing values carry information about the peptide abundance. Here, we propose a Significance Analysis for Large-scale Proteomics Studies (SALPS) that handles missing peptide intensity values caused by the two mechanisms mentioned above. Our model shows robust performance on both simulated data and proteomics data from a large clinical study. Because variation in patients' sample quality and drift in instrument performance are unavoidable in clinical studies performed over the course of several years, we believe that our approach will be useful for analyzing large-scale clinical proteomics data.

  11. Brief Mental Training Reorganizes Large-Scale Brain Networks

    PubMed Central

    Tang, Yi-Yuan; Tang, Yan; Tang, Rongxiang; Lewis-Peacock, Jarrod A.

    2017-01-01

    Emerging evidence has shown that one form of mental training, mindfulness meditation, can improve attention, emotion regulation and cognitive performance by changing brain activity and structural connectivity. However, whether and how short-term mindfulness meditation alters large-scale brain networks is not well understood. Here, we applied a novel data-driven technique, multivariate pattern analysis (MVPA), to resting-state fMRI (rsfMRI) data to identify changes in brain activity patterns and assess the neural mechanisms induced by a brief mindfulness training, integrative body-mind training (IBMT), which was previously reported in our series of randomized studies. Whole-brain rsfMRI was performed on an undergraduate group who received 2 weeks of IBMT with 30 min per session (5 h of training in total). Classifiers were trained on measures of functional connectivity in these fMRI data, and they were able to reliably differentiate (with 72% accuracy) patterns of connectivity from before vs. after the IBMT training. After training, an increase in positive functional connections (60 connections) was detected, primarily involving the bilateral superior/middle occipital gyrus, bilateral frontal operculum, bilateral superior temporal gyrus, right superior temporal pole, bilateral insula, caudate and cerebellum. These results suggest that brief mental training alters the functional connectivity of large-scale brain networks at rest, which may involve a portion of the neural circuitry supporting attention, cognitive and affective processing, awareness, sensory integration and reward processing. PMID:28293180
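
    A generic MVPA workflow of the kind described can be sketched with scikit-learn on random stand-in data; the study's actual features, classifier and validation scheme are not specified here, so every choice below is an illustrative assumption.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 300))   # 40 scans x 300 connectivity features
        y = np.repeat([0, 1], 20)        # 0 = before training, 1 = after training

        # Cross-validated linear classification of connectivity patterns;
        # accuracy reliably above 0.5 would indicate separable brain states
        # (with this random stand-in data it stays near chance).
        scores = cross_val_score(LinearSVC(dual=False), X, y, cv=5)
        print(scores.mean())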

  12. Large Scale Organization of a Near Wall Turbulent Boundary Layer

    NASA Astrophysics Data System (ADS)

    Stanislas, Michel; Dekou Tiomajou, Raoul Florent; Foucaut, Jean Marc

    2016-11-01

    This study lies in the context of large scale coherent structures investigation in a near wall turbulent boundary layer. An experimental database at high Reynolds numbers (Re θ = 9830 and Re θ = 19660) was obtained in the LML wind tunnel with stereo-PIV at 4 Hz and hot wire anemometry at 30 kHz. A Linear Stochastic Estimation procedure, is used to reconstruct a 3 component field resolved in space and time. Algorithms were developed to extract coherent structures from the reconstructed field. A sample of 3D view of the structures is depicted in Figure 1. Uniform momentum regions are characterized with their mean hydraulic diameter in the YZ plane, their life time and their contribution to Reynolds stresses. The vortical motions are characterized by their position, radius, circulation and vorticity in addition to their life time and their number computed at a fixed position from the wall. The spatial organization of the structures was investigated through a correlation of their respective indicative functions in the spanwise direction. The simplified large scale model that arise is compared to the ones available in the literature. Streamwise low (green) and high (yellow) uniform momentum regions with positive (red) and negative (blue) vortical motions. This work was supported by Campus International pour la Sécurité et l'Intermodalité des Transports.

  13. Ecohydrological modeling for large-scale environmental impact assessment.

    PubMed

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach-level accuracy similar to regional-scale models, thereby allowing for impact assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds, and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in the development of adaptive neuro-fuzzy inference system (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model.

  14. The Impact of Large Scale Environments on Cluster Entropy Profiles

    NASA Astrophysics Data System (ADS)

    Trierweiler, Isabella; Su, Yuanyuan

    2017-01-01

    We perform a systematic analysis of 21 clusters imaged by the Suzaku satellite to determine the relation between the richness of cluster environments and entropy at large radii. Entropy profiles for clusters are expected to follow a power-law, but Suzaku observations show that the entropy profiles of many clusters are significantly flattened beyond 0.3 Rvir. While the entropy at the outskirts of clusters is thought to be highly dependent on the large scale cluster environment, the exact nature of the environment/entropy relation is unclear. Using the Sloan Digital Sky Survey and 6dF Galaxy Survey, we study the 20 Mpc large scale environment for all clusters in our sample. We find no strong relation between the entropy deviations at the virial radius and the total luminosity of the cluster surroundings, indicating that accretion and mergers have a more complex and indirect influence on the properties of the gas at large radii. We see a possible anti-correlation between virial temperature and richness of the cluster environment and find that density excess appears to play a larger role in the entropy flattening than temperature, suggesting that clumps of gas can lower entropy.
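
    For context (the standard X-ray convention, not taken from the record), the "entropy" here is

        \[
        K \;=\; \frac{k_B T}{n_e^{2/3}},
        \]

    and gravity-only structure formation predicts roughly $K(r) \propto r^{1.1}$ outside cluster cores, the power-law baseline against which the flattening beyond $0.3\,R_{\mathrm{vir}}$ is measured.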

  15. The effective field theory of cosmological large scale structures

    SciTech Connect

    Carrasco, John Joseph M.; Hertzberg, Mark P.; Senatore, Leonardo

    2012-09-20

    Large scale structure surveys will likely become the next leading cosmological probe. In our universe, matter perturbations are large on short distances and small at long scales, i.e. strongly coupled in the UV and weakly coupled in the IR. To make precise analytical predictions on large scales, we develop an effective field theory formulated in terms of an IR effective fluid characterized by several parameters, such as speed of sound and viscosity. These parameters, determined by the UV physics described by the Boltzmann equation, are measured from N-body simulations. We find that the speed of sound of the effective fluid is c_s² ≈ 10⁻⁶ c² and that the viscosity contributions are of the same order. The fluid describes all the relevant physics at long scales k and permits a manifestly convergent perturbative expansion in the size of the matter perturbations δ(k) for all the observables. As an example, we calculate the correction to the power spectrum at order δ(k)⁴. As a result, the predictions of the effective field theory are found to be in much better agreement with observation than standard cosmological perturbation theory, already reaching percent precision at this order up to a relatively short scale k ≃ 0.24 h Mpc⁻¹.
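
    Schematically (conventions vary between papers, and factors of order unity are absorbed into $c_s^2$ here), the EFT prediction referred to above takes the form

        \[
        P(k) \;=\; P_{11}(k) \;+\; P_{\text{1-loop}}(k)
        \;-\; 2\, c_s^2 \,\frac{k^2}{k_{\mathrm{NL}}^2}\, P_{11}(k) \;+\; \cdots,
        \]

    where $P_{11}$ is the linear power spectrum, the one-loop term is the $\mathcal{O}(\delta^4)$ correction mentioned in the abstract, and the $c_s^2$ counterterm carries the UV information measured from N-body simulations.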

  16. Large-scale Direct Targeting for Drug Repositioning and Discovery

    PubMed Central

    Zheng, Chunli; Guo, Zihu; Huang, Chao; Wu, Ziyin; Li, Yan; Chen, Xuetong; Fu, Yingxue; Ru, Jinlong; Ali Shar, Piar; Wang, Yuan; Wang, Yonghua

    2015-01-01

    A system-level identification of direct drug-target interactions is vital to drug repositioning and discovery. However, determining these interactions biologically on a large scale remains challenging and expensive even nowadays. The available computational models mainly focus on predicting indirect interactions, or direct interactions on a small scale. To address these problems, in this work a novel algorithm termed weighted ensemble similarity (WES) has been developed to identify direct drug targets based on a large set of 98,327 drug-target relationships. WES involves: (1) identifying the key ligand structural features that are highly related to the pharmacological properties in an ensemble framework; (2) determining a drug's affiliation with a target by evaluating the overall (ensemble) similarity rather than judging from a single ligand; and (3) integrating the standardized ensemble similarities (Z scores) by a Bayesian network and a multi-variate kernel approach to make predictions. All of this leads WES to predict direct drug targets with external and experimental test accuracies of 70% and 71%, respectively. This shows that the WES method provides a potential in silico model for drug repositioning and discovery. PMID:26155766
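
    A plausible reading of the standardization step (the subscripted symbols are ours, not the paper's):

        \[
        Z \;=\; \frac{s_{\text{ens}} - \mu_{\text{rand}}}{\sigma_{\text{rand}}},
        \]

    i.e. the ensemble similarity $s_{\text{ens}}$ of a candidate drug to a target's known ligands, centered and scaled by the mean and standard deviation of similarities obtained from random ligand sets, before the Bayesian-network integration.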

  17. Modulation of energetic coherent motions by large-scale topography

    NASA Astrophysics Data System (ADS)

    Lai, Wing; Hamed, Ali M.; Troolin, Dan; Chamorro, Leonardo P.

    2016-11-01

    The distinctive characteristics and dynamics of the large-scale coherent motions induced over 2D and 3D large-scale wavy walls were explored experimentally with time-resolved volumetric PIV, and selected wall-normal high-resolution stereo PIV, in a refractive-index-matching channel. The 2D wall consists of a sinusoidal wave in the streamwise direction with amplitude-to-wavelength ratio a/λx = 0.05, while the 3D wall has an additional wave in the spanwise direction with a/λy = 0.1. The flow was characterized at Re ≈ 8000, based on the bulk velocity and the channel half-height. The walls are such that the amplitude-to-boundary-layer-thickness ratio is a/δ99 ≈ 0.1, resembling geophysical-like topography. Insight into the dynamics of the coherent motions, the Reynolds stresses, and the spatial interaction of sweep and ejection events will be discussed in terms of the modulation by the wall topography.

  18. Very-large-scale coherent motions in open channel flows

    NASA Astrophysics Data System (ADS)

    Zhong, Qiang; Hussain, Fazle; Li, Dan-Xun

    2016-11-01

    Very-large-scale coherent structures (VLSSs), whose characteristic length is of the order of 10h (where h is the water depth), are found to exist in the log and outer layers near the bed of open channel flows. For decades researchers have speculated that large coherent structures may exist in open channel flows; however, conclusive evidence has been lacking. The present study employed pre-multiplied velocity power spectral and co-spectral analyses of time-resolved PIV data obtained in open channel flows. In all cases, two modes dominate the log and outer layers of the turbulent boundary layer: large-scale structures (of the order of h) and VLSSs. More than half of the TKE and 40% of the Reynolds shear stress in the log and outer layers are contributed by VLSSs. The difference in the strength of VLSSs between open and closed channel flows leads to a pronounced redistribution of TKE near the free surface of open channel flows, a unique phenomenon that sets open channel flows apart from other wall-bounded turbulent flows. Funded by China Postdoctoral Science Foundation (No.2015M580105), National Natural Science Foundation of China (No.51127006).

  19. Resonant plankton patchiness induced by large-scale turbulent flow

    NASA Astrophysics Data System (ADS)

    McKiver, William J.; Neufeld, Zoltán

    2011-01-01

    Here we study how large-scale variability of oceanic plankton is affected by mesoscale turbulence in a spatially heterogeneous environment. We consider a phytoplankton-zooplankton (PZ) ecosystem model, with different types of zooplankton grazing functions, coupled to a turbulent flow described by the two-dimensional Navier-Stokes equations, representing large-scale horizontal transport in the ocean. We characterize the system using a dimensionless parameter, γ = T_B/T_F, which is the ratio of the ecosystem biological time scale T_B and the flow time scale T_F. Through numerical simulations, we examine how the PZ system depends on the time-scale ratio γ and find that the variance of both species changes significantly, with maximum phytoplankton variability at intermediate mixing rates. Through an analysis of the linearized population dynamics, we find an analytical solution based on the forced harmonic oscillator, which explains the behavior of the ecosystem, where there is resonance between the advection and the ecosystem predator-prey dynamics when the forcing time scales match the ecosystem time scales. We also examine the dependence of the power spectra on γ and find that the resonance behavior leads to different spectral slopes for phytoplankton and zooplankton, in agreement with observations.
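
    The forced-oscillator analogy invoked above can be sketched as follows (generic damping ratio $\zeta$ and forcing amplitude $F$; this is not the paper's notation):

        \[
        \ddot{x} + 2\zeta\omega_0\,\dot{x} + \omega_0^2\, x = F\cos(\omega t),
        \qquad \omega_0 \sim 1/T_B, \quad \omega \sim 1/T_F,
        \]

    whose steady-state amplitude peaks when $\omega \approx \omega_0$, i.e. when $\gamma = T_B/T_F \approx 1$, which is why plankton variance is maximal at intermediate mixing rates.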

  20. Resonant plankton patchiness induced by large-scale turbulent flow.

    PubMed

    McKiver, William J; Neufeld, Zoltán

    2011-01-01

    Here we study how large-scale variability of oceanic plankton is affected by mesoscale turbulence in a spatially heterogeneous environment. We consider a phytoplankton-zooplankton (PZ) ecosystem model, with different types of zooplankton grazing functions, coupled to a turbulent flow described by the two-dimensional Navier-Stokes equations, representing large-scale horizontal transport in the ocean. We characterize the system using a dimensionless parameter, γ=T(B)/T(F), which is the ratio of the ecosystem biological time scale T(B) and the flow time scale T(F). Through numerical simulations, we examine how the PZ system depends on the time-scale ratio γ and find that the variance of both species changes significantly, with maximum phytoplankton variability at intermediate mixing rates. Through an analysis of the linearized population dynamics, we find an analytical solution based on the forced harmonic oscillator, which explains the behavior of the ecosystem, where there is resonance between the advection and the ecosystem predator-prey dynamics when the forcing time scales match the ecosystem time scales. We also examine the dependence of the power spectra on γ and find that the resonance behavior leads to different spectral slopes for phytoplankton and zooplankton, in agreement with observations.