Science.gov

Sample records for advanced large-scale manufacturing

  1. Large Scale Composite Manufacturing for Heavy Lift Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Stavana, Jacob; Cohen, Leslie J.; Houseal, Keth; Pelham, Larry; Lort, Richard; Zimmerman, Thomas; Sutter, James; Western, Mike; Harper, Robert; Stuart, Michael

    2012-01-01

Risk reduction for large-scale composite manufacturing is an important goal in producing lightweight components for heavy lift launch vehicles. NASA and an industry team successfully employed a building-block approach using low-cost Automated Tape Layup (ATL) of autoclave and Out-of-Autoclave (OoA) prepregs. Several large, curved sandwich panels were fabricated at HITCO Carbon Composites. The aluminum honeycomb core sandwich panels are segments of a 1/16th arc from a 10 meter cylindrical barrel. Lessons learned highlight the manufacturing challenges involved in producing lightweight composite structures such as fairings for heavy lift launch vehicles.

  2. Large-Scale Advanced Prop-Fan (LAP)

    NASA Technical Reports Server (NTRS)

    Degeorge, C. L.

    1988-01-01

In recent years, considerable attention has been directed toward improving aircraft fuel efficiency. Analytical studies and research with wind tunnel models have demonstrated that the high inherent efficiency of low speed turboprop propulsion systems may now be extended to the Mach .8 flight regime of today's commercial airliners. This can be accomplished with a propeller employing a large number of thin, highly swept blades. The term Prop-Fan has been coined to describe such a propulsion system. In 1983 the NASA-Lewis Research Center contracted with Hamilton Standard to design, build and test a near full scale Prop-Fan, designated the Large Scale Advanced Prop-Fan (LAP). This report provides a detailed description of the LAP program. The assumptions and analytical procedures used in the design of Prop-Fan system components are discussed in detail. The manufacturing techniques used in the fabrication of the Prop-Fan are presented. Each of the tests run during the course of the program is also discussed, and the major conclusions derived from them are stated.

  3. Manufacturing Process Simulation of Large-Scale Cryotanks

    NASA Technical Reports Server (NTRS)

    Babai, Majid; Phillips, Steven; Griffin, Brian

    2003-01-01

NASA's Space Launch Initiative (SLI) is an effort to research and develop the technologies needed to build a second-generation reusable launch vehicle. It is required that this new launch vehicle be 100 times safer and 10 times cheaper to operate than current launch vehicles. Part of the SLI includes the development of reusable composite and metallic cryotanks. The size of these reusable tanks is far greater than anything ever developed and exceeds the design limits of current manufacturing tools. Several design and manufacturing approaches have been formulated, but many factors must be weighed during the selection process. Among these factors are tooling reachability, cycle times, feasibility, and facility impacts. The manufacturing process simulation capabilities available at NASA's Marshall Space Flight Center have played a key role in down-selecting among the various manufacturing approaches. By creating 3-D manufacturing process simulations, the varying approaches can be analyzed in a virtual world before any hardware or infrastructure is built. This analysis can detect and eliminate costly flaws in the various manufacturing approaches. The simulations check for collisions between devices, verify that design limits on joints are not exceeded, and provide cycle times which aid in the development of an optimized process flow. In addition, new ideas and concerns are often raised after seeing the visual representation of a manufacturing process flow. The output of the manufacturing process simulations allows for cost and safety comparisons to be performed between the various manufacturing approaches. This output helps determine which manufacturing process options reach the safety and cost goals of the SLI. As part of the SLI, The Boeing Company was awarded a basic period contract to research and propose options for both a metallic and a composite cryotank. Boeing then entered into a task agreement with the Marshall Space Flight Center to provide manufacturing

  4. Manufacturing Process Simulation of Large-Scale Cryotanks

    NASA Technical Reports Server (NTRS)

    Babai, Majid; Phillips, Steven; Griffin, Brian; Munafo, Paul M. (Technical Monitor)

    2002-01-01

NASA's Space Launch Initiative (SLI) is an effort to research and develop the technologies needed to build a second-generation reusable launch vehicle. It is required that this new launch vehicle be 100 times safer and 10 times cheaper to operate than current launch vehicles. Part of the SLI includes the development of reusable composite and metallic cryotanks. The size of these reusable tanks is far greater than anything ever developed and exceeds the design limits of current manufacturing tools. Several design and manufacturing approaches have been formulated, but many factors must be weighed during the selection process. Among these factors are tooling reachability, cycle times, feasibility, and facility impacts. The manufacturing process simulation capabilities available at NASA's Marshall Space Flight Center have played a key role in down-selecting among the various manufacturing approaches. By creating 3-D manufacturing process simulations, the varying approaches can be analyzed in a virtual world before any hardware or infrastructure is built. This analysis can detect and eliminate costly flaws in the various manufacturing approaches. The simulations check for collisions between devices, verify that design limits on joints are not exceeded, and provide cycle times which aid in the development of an optimized process flow. In addition, new ideas and concerns are often raised after seeing the visual representation of a manufacturing process flow. The output of the manufacturing process simulations allows for cost and safety comparisons to be performed between the various manufacturing approaches. This output helps determine which manufacturing process options reach the safety and cost goals of the SLI.
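The joint-limit and cycle-time checks described in the abstract can be illustrated with a minimal sketch. This is not NASA's simulation tooling; the `Move` type, the `JOINT_LIMITS` values, and the `check_process` function are hypothetical, chosen only to show the kind of verification such a simulation performs over a sequence of tooling moves.

```python
# Illustrative sketch (not NASA's actual simulation tools): joint-limit
# verification and cycle-time accumulation over a sequence of tooling moves.
from dataclasses import dataclass

@dataclass
class Move:
    joint_angles: tuple  # commanded joint angles, degrees
    duration_s: float    # time to execute the move, seconds

# Hypothetical per-joint design limits (min_deg, max_deg).
JOINT_LIMITS = [(-170.0, 170.0), (-90.0, 120.0), (-155.0, 155.0)]

def check_process(moves):
    """Return indices of moves that exceed a joint limit, plus total cycle time."""
    violations = []
    cycle_time = 0.0
    for i, move in enumerate(moves):
        cycle_time += move.duration_s
        for angle, (lo, hi) in zip(move.joint_angles, JOINT_LIMITS):
            if not (lo <= angle <= hi):
                violations.append(i)
                break
    return violations, cycle_time

moves = [Move((10.0, 45.0, -30.0), 2.5),
         Move((180.0, 0.0, 0.0), 1.0),   # joint 1 exceeds +170 deg
         Move((-20.0, 100.0, 140.0), 3.0)]
bad, total = check_process(moves)
# bad == [1]; total == 6.5 seconds of cycle time
```

A real process simulation adds 3-D kinematics and collision geometry on top of checks like these, but the pass/fail bookkeeping is the same idea.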

  5. Active assembly for large-scale manufacturing of integrated nanostructures.

    SciTech Connect

    Spoerke, Erik David; Bunker, Bruce Conrad; Orendorff, Christopher J.; Bachand, George David; Hendricks, Judy K.; Matzke, Carolyn M.

    2007-01-01

    Microtubules and motor proteins are protein-based biological agents that work cooperatively to facilitate the organization and transport of nanomaterials within living organisms. This report describes the application of these biological agents as tools in a novel, interdisciplinary scheme for assembling integrated nanostructures. Specifically, selective chemistries were used to direct the favorable adsorption of active motor proteins onto lithographically-defined gold electrodes. Taking advantage of the specific affinity these motor proteins have for microtubules, the motor proteins were used to capture polymerized microtubules out of suspension to form dense patterns of microtubules and microtubule bridges between gold electrodes. These microtubules were then used as biofunctionalized templates to direct the organization of functionalized nanocargo including single-walled carbon nanotubes and gold nanoparticles. This biologically-mediated scheme for nanomaterials assembly has shown excellent promise as a foundation for developing new biohybrid approaches to nanoscale manufacturing.

  6. Case Study: Commercialization of sweet sorghum juice clarification for large-scale syrup manufacture

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The precipitation and burning of insoluble granules of starch from sweet sorghum juice on heating coils prevented the large scale manufacture of syrup at a new industrial plant in Missouri, USA. To remove insoluble starch granules, a series of small and large-scale experiments were conducted at the...

  7. Large-Scale Advanced Prop-Fan (LAP) blade design

    NASA Technical Reports Server (NTRS)

    Violette, John A.; Sullivan, William E.; Turnberg, Jay E.

    1984-01-01

    This report covers the design analysis of a very thin, highly swept, propeller blade to be used in the Large-Scale Advanced Prop-Fan (LAP) test program. The report includes: design requirements and goals, a description of the blade configuration which meets requirements, a description of the analytical methods utilized/developed to demonstrate compliance with the requirements, and the results of these analyses. The methods described include: finite element modeling, predicted aerodynamic loads and their application to the blade, steady state and vibratory response analyses, blade resonant frequencies and mode shapes, bird impact analysis, and predictions of stalled and unstalled flutter phenomena. Summarized results include deflections, retention loads, stress/strength comparisons, foreign object damage resistance, resonant frequencies and critical speed margins, resonant vibratory mode shapes, calculated boundaries of stalled and unstalled flutter, and aerodynamic and acoustic performance calculations.

  8. Current situation of the development and manufacture of very large scale integrated devices in China

    NASA Astrophysics Data System (ADS)

    Yubiao, He

    1988-06-01

The manufacture of Large Scale Integration (LSI) and Very Large Scale Integration (VLSI) devices in foreign countries is a highly competitive high-tech industry. It requires high-precision manufacturing technology and very expensive manufacturing equipment. Therefore, it is impossible to conduct research and build industrial production capability by merely relying on obsolete manufacturing equipment and semi-manual production techniques. According to the experience of our foreign counterparts and based on our current situation, it is highly desirable for domestic LSI and VLSI research institutes and manufacturers to establish unified development-manufacturing units, concentrate resources, amass available funds to upgrade equipment and technology, improve management, conduct theoretical research, and develop new technologies and new devices under unified planning and assigned responsibility. Only in this way can we reduce the gap between the domestic and foreign VLSI device industries and promote our microelectronics industry. This should be the trend for the development of the microelectronics industry in China.

  9. Large-scale Advanced Prop-fan (LAP) technology assessment report

    NASA Technical Reports Server (NTRS)

    Degeorge, C. L.

    1988-01-01

    The technologically significant findings and accomplishments of the Large Scale Advanced Prop-Fan (LAP) program in the areas of aerodynamics, aeroelasticity, acoustics and materials and fabrication are described. The extent to which the program goals related to these disciplines were achieved is discussed, and recommendations for additional research are presented. The LAP program consisted of the design, manufacture and testing of a near full-scale Prop-Fan or advanced turboprop capable of operating efficiently at speeds to Mach .8. An aeroelastically scaled model of the LAP was also designed and fabricated. The goal of the program was to acquire data on Prop-Fan performance that would indicate the technology readiness of Prop-Fans for practical applications in commercial and military aviation.

  10. Advancing Software Architecture Modeling for Large Scale Heterogeneous Systems

    SciTech Connect

    Gorton, Ian; Liu, Yan

    2010-11-07

    In this paper we describe how incorporating technology-specific modeling at the architecture level can help reduce risks and produce better designs for large, heterogeneous software applications. We draw an analogy with established modeling approaches in scientific domains, using groundwater modeling as an example, to help illustrate gaps in current software architecture modeling approaches. We then describe the advances in modeling, analysis and tooling that are required to bring sophisticated modeling and development methods within reach of software architects.

  11. Large-scale adeno-associated viral vector production using a herpesvirus-based system enables manufacturing for clinical studies.

    PubMed

    Clément, Nathalie; Knop, David R; Byrne, Barry J

    2009-08-01

Minimal immunogenicity, little to no toxicity or inflammation, and robust, multiyear gene expression in vivo are only a few of the salient features that make recombinant adeno-associated viral (rAAV) vectors ideally suited for many gene therapy applications. A major hurdle for the use of rAAV in sizeable research and clinical applications is the lack of efficient and versatile large-scale production systems. Continued progression toward flexible, scalable production techniques is a prerequisite to support human clinical evaluation of these novel biotherapeutics. This review examines the current state of large-scale production methods that employ the herpes simplex virus type 1 (HSV) platform to produce rAAV vectors for gene delivery. Improvements have substantially advanced the HSV/AAV hybrid method for large-scale rAAV manufacture, facilitating the generation of highly potent, clinical-grade purity rAAV vector stocks. At least one human clinical trial employing rAAV generated via rHSV helper-assisted replication is poised to commence, highlighting the advances and relevance of this production method. PMID:19569968

  12. Cluman: Advanced cluster management for the large-scale infrastructures

    NASA Astrophysics Data System (ADS)

    Babik, Marian; Fedorko, Ivan; Rodrigues, David

    2011-12-01

The recent uptake of multi-core computing has produced a rapid growth of virtualisation and cloud computing services. With the increased use of many-core processors this trend will likely accelerate, and computing centres will be faced with the management of tens of thousands of virtual machines. Furthermore, these machines will likely be geographically distributed and need to be allocated on demand. In order to cope with such complexity we have designed and developed an advanced cluster management system that can execute administrative tasks targeting thousands of machines as well as provide an interactive high-density visualisation of the fabrics. The job management subsystem can perform complex tasks while following their progress and output and report aggregated information back to the system administrators. The visualisation subsystem can display tree maps of the infrastructure elements with data and monitoring information, thus providing a very detailed overview of the large clusters at a glance. The initial experience with development and testing of the system will be presented as well as an evaluation of its performance.

  13. Advanced Manufacturing Technologies

    NASA Technical Reports Server (NTRS)

    Fikes, John

    2016-01-01

    Advanced Manufacturing Technologies (AMT) is developing and maturing innovative and advanced manufacturing technologies that will enable more capable and lower-cost spacecraft, launch vehicles and infrastructure to enable exploration missions. The technologies will utilize cutting edge materials and emerging capabilities including metallic processes, additive manufacturing, composites, and digital manufacturing. The AMT project supports the National Manufacturing Initiative involving collaboration with other government agencies.

  14. Large-scale photonic integration for advanced all-optical routing functions

    NASA Astrophysics Data System (ADS)

    Nicholes, Steven C.

Advanced InP-based photonic integrated circuits are a critical technology to manage the increasing bandwidth demands of next-generation all-optical networks. Integrating many of the discrete functions required in optical networks into a single device provides a reduction in system footprint and optical losses by eliminating the fiber coupling junctions between components. This translates directly into increased system reliability and cost savings. Although many key network components have been realized via InP-based monolithic integration over the years, truly large-scale photonic ICs have only recently emerged in the marketplace. This lag time has been mostly due to historically low device yields. In all-optical routing applications, large-scale photonic ICs may be able to address two of the key roadblocks associated with scaling modern electronic routers to higher capacities---namely, power and size. If the functions of dynamic wavelength conversion and routing are moved to the optical layer, we can eliminate the need for power-hungry optical-to-electrical (O/E) and electrical-to-optical (E/O) data conversions at each router node. Additionally, large-scale photonic ICs could reduce the footprint of such a system by combining the similar functions of each port onto a single chip. However, robust design and manufacturing techniques that will enable high-yield production of these chips must be developed. In this work, we demonstrate a monolithic tunable optical router (MOTOR) chip consisting of an array of eight 40-Gbps wavelength converters and a passive arrayed-waveguide grating router that functions as the packet-forwarding switch fabric of an all-optical router. The device represents one of the most complex InP photonic ICs ever reported, with more than 200 integrated functional elements in a single chip. Single-channel 40 Gbps wavelength conversion and channel switching using 2^31-1 PRBS data showed a power penalty as low as 4.5 dB with less than 2 W drive power.

  15. Spraying Techniques for Large Scale Manufacturing of PEM-FC Electrodes

    NASA Astrophysics Data System (ADS)

    Hoffman, Casey J.

Fuel cells are highly efficient energy conversion devices that represent one part of the solution to the world's current energy crisis in the midst of global climate change. When supplied with the necessary reactant gases, fuel cells produce only electricity, heat, and water. The fuel used, namely hydrogen, is available from many sources including natural gas and the electrolysis of water. If the electricity for electrolysis is generated by renewable energy (e.g., solar and wind power), fuel cells represent a completely 'green' method of producing electricity. The thought of being able to produce electricity to power homes, vehicles, and other portable or stationary equipment with essentially zero environmentally harmful emissions has been driving academic and industrial fuel cell research and development with the goal of successfully commercializing this technology. Unfortunately, fuel cells cannot achieve any appreciable market penetration at their current costs. The author's hypothesis is that the development of automated, non-contact deposition methods for electrode manufacturing will improve performance and process flexibility, thereby helping to accelerate the commercialization of PEMFC technology. The overarching motivation for this research was to lower the cost of manufacturing fuel cell electrodes and bring the technology one step closer to commercial viability. The author has proven this hypothesis through a detailed study of two non-contact spraying methods. These scalable deposition systems were incorporated into an automated electrode manufacturing system that was designed and built by the author for this research. The electrode manufacturing techniques developed by the author have been shown to produce electrodes that outperform a common lab-scale contact method that was studied as a baseline, as well as several commercially available electrodes. In addition, these scalable, large scale electrode manufacturing processes developed by the author are

  16. Drivers and barriers to e-invoicing adoption in Greek large scale manufacturing industries

    SciTech Connect

Marinagi, Catherine; Trivellas, Panagiotis; Reklitis, Panagiotis; Skourlas, Christos

    2015-02-09

This paper attempts to investigate the drivers and barriers that large-scale Greek manufacturing industries experience in adopting electronic invoices (e-invoices), based on three case studies with organizations having an international presence in many countries. The study focuses on the drivers that may affect the increase of the adoption and use of e-invoicing, including customer demand for e-invoices and sufficient know-how and adoption of e-invoicing in organizations. In addition, the study reveals important barriers that prevent the expansion of e-invoicing, such as suppliers' reluctance to implement e-invoicing and IT infrastructure incompatibilities. Other issues examined by this study include the observed benefits from e-invoicing implementation, and the financial priorities of the organizations assumed to be supported by e-invoicing.

  17. Using Large-Scale Databases in Evaluation: Advances, Opportunities, and Challenges

    ERIC Educational Resources Information Center

    Penuel, William R.; Means, Barbara

    2011-01-01

    Major advances in the number, capabilities, and quality of state, national, and transnational databases have opened up new opportunities for evaluators. Both large-scale data sets collected for administrative purposes and those collected by other researchers can provide data for a variety of evaluation-related activities. These include (a)…

  18. Advanced Connectivity Analysis (ACA): a Large Scale Functional Connectivity Data Mining Environment.

    PubMed

    Chen, Rong; Nixon, Erika; Herskovits, Edward

    2016-04-01

    Using resting-state functional magnetic resonance imaging (rs-fMRI) to study functional connectivity is of great importance to understand normal development and function as well as a host of neurological and psychiatric disorders. Seed-based analysis is one of the most widely used rs-fMRI analysis methods. Here we describe a freely available large scale functional connectivity data mining software package called Advanced Connectivity Analysis (ACA). ACA enables large-scale seed-based analysis and brain-behavior analysis. It can seamlessly examine a large number of seed regions with minimal user input. ACA has a brain-behavior analysis component to delineate associations among imaging biomarkers and one or more behavioral variables. We demonstrate applications of ACA to rs-fMRI data sets from a study of autism. PMID:26662457
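Seed-based analysis, the method ACA automates at scale, amounts to correlating a seed region's time series with every other voxel's. A minimal sketch with synthetic data follows; this illustrates the general method, not the ACA package's own API, and the `seed_connectivity` function and all dimensions are assumptions for the example.

```python
# Minimal sketch of seed-based functional connectivity (not the ACA API):
# correlate a seed time series with every voxel, then Fisher z-transform.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_voxels = 200, 50

# Synthetic rs-fMRI data: one voxel time series per column.
data = rng.standard_normal((n_timepoints, n_voxels))
# Build a seed that closely tracks voxel 0, plus a little noise.
seed = data[:, 0] + 0.1 * rng.standard_normal(n_timepoints)

def seed_connectivity(seed_ts, voxel_ts):
    """Pearson correlation of the seed with every voxel, followed by the
    Fisher z-transform commonly applied before group-level statistics."""
    seed_c = seed_ts - seed_ts.mean()
    vox_c = voxel_ts - voxel_ts.mean(axis=0)
    r = (seed_c @ vox_c) / (np.linalg.norm(seed_c) * np.linalg.norm(vox_c, axis=0))
    return np.arctanh(np.clip(r, -0.999999, 0.999999))  # Fisher z

z = seed_connectivity(seed, data)
# z[0] is large (the seed was built from voxel 0); unrelated voxels sit near 0.
```

A large-scale tool like ACA repeats this computation over many seed regions and subjects, then relates the resulting connectivity maps to behavioral variables.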

  19. Challenges and advances in large-scale DFT calculations on GPUs

    NASA Astrophysics Data System (ADS)

    Kulik, Heather

    2014-03-01

Recent advances in reformulating electronic structure algorithms for stream processors such as graphical processing units have made DFT calculations on systems comprising up to O(10^3) atoms feasible. Simulations on such systems that previously required half a week on traditional processors can now be completed in only half an hour. Here, we leverage these GPU-accelerated quantum chemistry methods to investigate large-scale quantum mechanical features in protein structure, mechanochemical depolymerization, and the nucleation and growth of heterogeneous nanoparticle structures. In each case, large-scale and rapid evaluation of electronic structure properties is critical for unearthing previously poorly understood properties and mechanistic features of these systems. We will also discuss outstanding challenges in the use of Gaussian localized-basis-set codes on GPUs pertaining to limitations in basis set size and how we circumvent such challenges to computational efficiency with systematic, physics-based error corrections to basis set incompleteness.

  20. Advances in Parallelization for Large Scale Oct-Tree Mesh Generation

    NASA Technical Reports Server (NTRS)

    O'Connell, Matthew; Karman, Steve L.

    2015-01-01

    Despite great advancements in the parallelization of numerical simulation codes over the last 20 years, it is still common to perform grid generation in serial. Generating large scale grids in serial often requires using special "grid generation" compute machines that can have more than ten times the memory of average machines. While some parallel mesh generation techniques have been proposed, generating very large meshes for LES or aeroacoustic simulations is still a challenging problem. An automated method for the parallel generation of very large scale off-body hierarchical meshes is presented here. This work enables large scale parallel generation of off-body meshes by using a novel combination of parallel grid generation techniques and a hybrid "top down" and "bottom up" oct-tree method. Meshes are generated using hardware commonly found in parallel compute clusters. The capability to generate very large meshes is demonstrated by the generation of off-body meshes surrounding complex aerospace geometries. Results are shown including a one billion cell mesh generated around a Predator Unmanned Aerial Vehicle geometry, which was generated on 64 processors in under 45 minutes.
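The top-down refinement at the heart of oct-tree meshing can be sketched in a few lines. The sketch below is serial and uniform; the paper's contribution is doing this refinement in parallel across a cluster with a hybrid top-down/bottom-up strategy, so the `OctNode` type and `refine` function here are illustrative assumptions, not the authors' code.

```python
# A toy top-down oct-tree refinement: each cell splits into 8 octants.
from dataclasses import dataclass, field

@dataclass
class OctNode:
    lo: tuple                        # min corner of the cell
    hi: tuple                        # max corner of the cell
    children: list = field(default_factory=list)

def refine(node, depth):
    """Top-down refinement: split each cell into 8 octants, `depth` times."""
    if depth == 0:
        return
    mid = tuple((l + h) / 2 for l, h in zip(node.lo, node.hi))
    for i in range(8):  # each bit of i picks the low or high half along one axis
        lo = tuple(node.lo[k] if not (i >> k) & 1 else mid[k] for k in range(3))
        hi = tuple(mid[k] if not (i >> k) & 1 else node.hi[k] for k in range(3))
        child = OctNode(lo, hi)
        node.children.append(child)
        refine(child, depth - 1)

def count_leaves(node):
    if not node.children:
        return 1
    return sum(count_leaves(c) for c in node.children)

root = OctNode((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
refine(root, 3)
# count_leaves(root) == 8**3 == 512 uniform leaf cells
```

In practice refinement is adaptive (driven by distance to geometry rather than a fixed depth), which is what makes the billion-cell off-body meshes described above tractable.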

  1. Large-Scale Advanced Prop-Fan (LAP) pitch change actuator and control design report

    NASA Technical Reports Server (NTRS)

    Schwartz, R. A.; Carvalho, P.; Cutler, M. J.

    1986-01-01

In recent years, considerable attention has been directed toward reducing aircraft fuel consumption. Studies have shown that the high inherent efficiency previously demonstrated by low speed turboprop propulsion systems may now be extended to today's higher speed aircraft if advanced high-speed propeller blades having thin airfoils and aerodynamic sweep are utilized. Hamilton Standard has designed a 9-foot diameter single-rotation Large-Scale Advanced Prop-Fan (LAP) which will be tested on a static test stand, in a high speed wind tunnel and on a research aircraft. The major objective of this testing is to establish the structural integrity of large-scale Prop-Fans of advanced construction in addition to the evaluation of aerodynamic performance and aeroacoustic design. This report describes the operation, design features and actual hardware of the LAP Prop-Fan pitch control system. The pitch control system, which controls blade angle and propeller speed, consists of two separate assemblies. The first is the control unit which provides the hydraulic supply, speed governing and feather function for the system. The second unit is the hydro-mechanical pitch change actuator which directly changes blade angle (pitch) as scheduled by the control.

  2. Novel method to construct large-scale design space in lubrication process utilizing Bayesian estimation based on a small-scale design-of-experiment and small sets of large-scale manufacturing data.

    PubMed

    Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo

    2012-12-01

A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X1) and blending times (X2) in the lubricant blending process for theophylline tablets. The response surfaces, design space, and their reliability of the compression rate of the powder mixture (Y1), tablet hardness (Y2), and dissolution rate (Y3) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. The constant Froude number was applied as a scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on a large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in the pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale. PMID:22356256
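The core idea of the abstract — correcting a small-scale prediction with a handful of large-scale runs via Bayesian updating — can be illustrated with a deliberately simplified conjugate-normal sketch. The authors' actual procedure uses multivariate spline response surfaces with bootstrap resampling; the offset model, prior values, and `posterior_offset` function below are stand-in assumptions for illustration only.

```python
# Simplified stand-in for the paper's Bayesian scale-up correction:
# model the large-scale response as small-scale prediction + unknown offset
# delta, put a normal prior on delta, update it with a few large-scale runs.
import numpy as np

def posterior_offset(small_pred, large_obs, prior_mean=0.0,
                     prior_var=4.0, noise_var=1.0):
    """Conjugate normal update for the scale-up offset delta, where
    large_obs[i] ~ Normal(small_pred[i] + delta, noise_var)."""
    residuals = np.asarray(large_obs) - np.asarray(small_pred)
    n = residuals.size
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + residuals.sum() / noise_var)
    return post_mean, post_var

# Small-scale model predicts tablet hardness at 3 large-scale conditions;
# the large-scale runs come out systematically ~2 units higher.
small_pred = [50.0, 55.0, 60.0]
large_obs = [52.1, 56.9, 62.0]
mean, var = posterior_offset(small_pred, large_obs)
# mean sits between the prior (0) and the observed mean offset (~2),
# and the posterior variance shrinks below the prior variance.
corrected = np.asarray(small_pred) + mean  # corrected large-scale prediction
```

The appeal of this style of correction is the same as in the paper: only a few expensive large-scale runs are needed to pull the whole small-scale response surface toward the commercial scale.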

  3. System design and integration of the large-scale advanced prop-fan

    NASA Technical Reports Server (NTRS)

    Huth, B. P.

    1986-01-01

In recent years, considerable attention has been directed toward reducing aircraft fuel consumption. Studies have shown that blades with thin airfoils and aerodynamic sweep extend the inherent efficiency advantage that turboprop propulsion systems have demonstrated to the higher speeds of today's aircraft. Hamilton Standard has designed a 9-foot diameter single-rotation Prop-Fan. It will test the hardware on a static test stand, in low speed and high speed wind tunnels, and on a research aircraft. The major objective of this testing is to establish the structural integrity of large-scale Prop-Fans of advanced construction, in addition to the evaluation of aerodynamic performance and the aeroacoustic design. The coordination efforts performed to ensure smooth operation and assembly of the Prop-Fan are summarized. A summary of the loads used to size the system components, the methodology used to establish material allowables, and a review of the key analytical results are given.

  4. Multicriteria assessment of advanced treatment technologies for micropollutants removal at large-scale applications.

    PubMed

    Bui, X T; Vo, T P T; Ngo, H H; Guo, W S; Nguyen, T T

    2016-09-01

With the introduction and discharge of thousands of new micropollutants (MPs) every year, traditional water and wastewater treatment plants may be incapable of tackling them all. Because MPs occur at low concentrations and are diverse in nature, their removal encounters numerous challenges. Although some MPs are effectively eliminated via conventional treatment methods, most of them can easily escape and are retained in the discharged effluent. Therefore, advanced methods such as (i) adsorption, (ii) oxidation and advanced oxidation processes (O3 and O3-based advanced oxidation processes, UV/H2O2), (iii) membrane processes, and (iv) membrane bioreactors, become an inevitable approach. Despite the unsurprisingly vast number of papers on MP treatment available at present, most of these studies were carried out at laboratory scale, while only a few pilot- and full-scale studies have been reported. Nevertheless, an in-depth assessment of real-world MP treatment methods is extremely crucial for practitioners. To date, no paper has been dedicated to this issue. Therefore, this paper aims to review these large-scale treatment methods. First, the paper goes through the regulations and standards which deal with MPs in water courses. It then assesses these methods in various case studies with reference to different criteria, towards serving as a reference for further practical applications. PMID:27198651

  5. Large-scale Advanced Prop-fan (LAP) static rotor test report

    NASA Technical Reports Server (NTRS)

    Degeorge, Charles L.; Turnberg, Jay E.; Wainauski, Harry S.

    1987-01-01

Discussed is Static Rotor Testing of the SR-7L Large Scale Advanced Prop-Fan (LAP). The LAP is an advanced 9 foot diameter, 8 bladed propeller designed and built by Hamilton Standard under contract to the NASA Lewis Research Center. The Prop-Fan employs thin swept blades to provide efficient propulsion at flight speeds up to Mach .85. Static testing was conducted on a 10,000 HP whirl rig at Wright Patterson Air Force Base. The test objectives were to investigate the Prop-Fan static aerodynamic and structural dynamic performance, determine the blade steady-state stresses and deflections, and measure steady and unsteady pressures on the SR-7L blade surface. The measured performance of the LAP correlated well with analytical predictions at blade pitch angles below 30 deg. A stall buffet phenomenon was observed at blade pitch angles above 30 deg. This phenomenon manifested itself in elevated blade vibratory stress levels and lower-than-expected thrust produced and power absorbed by the Prop-Fan for a given speed and blade angle.

  6. Large-scale Advanced Prop-fan (LAP) hub/blade retention design report

    NASA Technical Reports Server (NTRS)

    Soule, Matthew

    1986-01-01

    The Large-scale Advanced Prop-fan (LAP) hub assembly forms a semi-rigid link between the blades, which provide the thrust, and the engine shaft, which provides the torque. The hub and tailshaft are a single, partially forged piece that is carburized, heat treated and machined. A single row ball bearing restrains each of the eight blades in the hub, while the tailshaft secures the propeller to the engine shaft with two cone seats that are preloaded against each other by the Prop-fan retaining nut. The hub also forms the support for the pitch change actuator system, the control and the spinner. The retention transmits the loads from the blades to the hub while allowing changes in blade pitch. The single row ball bearing retention provides ease of maintenance by allowing individual blade replacement without disassembly of the hub. It has a through-hardened inner race which seats against the aluminum blade shank and an outer race which is integral with the barrel. The outer race area is carburized to achieve the hardness necessary to support the ball loads. The balls are kept from contact with each other by a separator. The rotational speed of the propeller keeps the retention submerged in the oil, which is contained in the hub by a seal. Stress and strain analysis, material hardness requirements, weight predictions, and stiffness characteristics are discussed.

  7. Large-scale Advanced Prop-fan (LAP) high speed wind tunnel test report

    NASA Technical Reports Server (NTRS)

    Campbell, William A.; Wainauski, Harold S.; Arseneaux, Peter J.

    1988-01-01

    High Speed Wind Tunnel testing of the SR-7L Large Scale Advanced Prop-Fan (LAP) is reported. The LAP is a 2.74 meter (9.0 ft) diameter, 8-bladed tractor type rated for 4475 kW (6000 SHP) at 1698 rpm. It was designed and built by Hamilton Standard under contract to the NASA Lewis Research Center. The LAP employs thin swept blades to provide efficient propulsion at flight speeds up to Mach .85. Testing was conducted in the ONERA S1-MA Atmospheric Wind Tunnel in Modane, France. The test objectives were to confirm that the LAP is free from high speed classical flutter, determine the structural and aerodynamic response to angular inflow, measure blade surface pressures (static and dynamic) and evaluate the aerodynamic performance at various blade angles, rotational speeds and Mach numbers. The measured structural and aerodynamic performance of the LAP correlated well with analytical predictions, thereby providing confidence in the computer prediction codes used for the design. There were no signs of classical flutter throughout all phases of the test up to and including the 0.84 maximum Mach number achieved. Steady and unsteady blade surface pressures were successfully measured for a wide range of Mach numbers, inflow angles, rotational speeds and blade angles. No barriers were discovered that would prevent proceeding with the PTA (Prop-Fan Test Assessment) Flight Test Program scheduled for early 1987.
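
    The rated figures in this abstract can be put in context with the standard propeller nondimensional relations (textbook definitions, not values or conditions taken from the test report; the cruise speed of sound and static-rating density below are illustrative assumptions):

```python
import math

# Rated figures from the abstract; the relations are the standard
# propeller definitions, not the LAP report's own analysis.
D = 2.74       # diameter, m
rpm = 1698.0   # rated rotational speed
P = 4475e3     # rated power, W

n = rpm / 60.0               # rotational speed, rev/s
tip_speed = math.pi * D * n  # rotational tip speed, m/s

# Advance ratio J = V / (n D); flight speed assumed here as a
# Mach 0.8 cruise at ~10.7 km, where the speed of sound is ~296 m/s.
V = 0.8 * 296.0
J = V / (n * D)

# Power coefficient C_P = P / (rho n^3 D^5), evaluated at the
# sea-level static rating with standard density (an assumption).
rho = 1.225  # kg/m^3
C_P = P / (rho * n**3 * D**5)

print(f"tip speed ~{tip_speed:.0f} m/s, J ~{J:.2f}, C_P ~{C_P:.2f}")
```

    At 1698 rpm the rotational tip speed alone is about 244 m/s, which is why thin, swept blades are needed to keep helical tip Mach numbers tolerable at Mach 0.8 cruise.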

  8. Study of Potential Cost Reductions Resulting from Super-Large-Scale Manufacturing of PV Modules: Final Subcontract Report, 7 August 2003--30 September 2004

    SciTech Connect

    Keshner, M. S.; Arya, R.

    2004-10-01

    Hewlett Packard has created a design for a "Solar City" factory that will process 30 million sq. meters of glass panels per year and produce 2.1-3.6 GW of solar panels per year, 100x the volume of a typical thin-film solar panel manufacturer in 2004. We have shown that with a reasonable selection of materials and conservative assumptions, this "Solar City" can produce solar panels and hit the price target of $1.00 per peak watt (6.5x-8.5x lower than prices in 2004) as the total price for a complete and installed rooftop (or ground-mounted) solar energy system. This breakthrough in the price of solar energy comes without the need for any significant new invention. It comes entirely from the manufacturing scale of a large plant and the cost savings inherent in operating at such a large manufacturing scale. We expect that further optimizations from these simple designs will lead to further improvements in cost. The manufacturing process and cost depend on the choice of the active layer that converts sunlight into electricity. The efficiency by which sunlight is converted into electricity can range from 7% to 15%. This parameter has a large effect on the overall price per watt. There are other impacts as well, and we have attempted to capture them without creating undue distractions. Our primary purpose is to demonstrate the impact of large-scale manufacturing. This impact is largely independent of the choice of active layer. It is not our purpose to compare the pros and cons of various types of active layers. Significant improvements in cost per watt can also come from scientific advances in active layers that lead to higher efficiency. But, again, our focus is on manufacturing gains and not on the potential advances in the basic technology.
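
    The abstract's throughput figures are internally consistent, as a quick back-of-envelope check shows (this arithmetic is ours, not the report's, and assumes the standard 1000 W/m2 peak-sun rating convention):

```python
# Peak capacity implied by 30 million m^2/yr of glass at a given
# module efficiency, under the standard 1000 W/m^2 rating convention.
area_m2 = 30e6
gws = [area_m2 * eff * 1000 / 1e9 for eff in (0.07, 0.12)]  # GW per year
print(gws)
```

    The quoted 2.1-3.6 GW/yr range corresponds to module efficiencies of roughly 7% and 12%, inside the 7%-15% span the report considers.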

  9. Advanced manufacturing: Technology diffusion

    SciTech Connect

    Tesar, A.

    1995-12-01

    In this paper we examine how manufacturing technology diffuses from the developers of technology across national borders to those who do not have the capability or resources to develop advanced technology on their own. None of the wide variety of technology diffusion mechanisms discussed in this paper are new, yet the opportunities to apply these mechanisms are growing. A dramatic increase in technology diffusion occurred over the last decade. The two major trends which probably drive this increase are a worldwide inclination towards "freer" markets and diminishing isolation. Technology is most rapidly diffusing from the US; in fact, the US is supplying technology for the rest of the world. The value of the technology supplied by the US more than doubled from 1985 to 1992 (see the Introduction for details). History shows us that technology diffusion is inevitable; it is the rates at which technologies diffuse to other countries which can vary considerably. Manufacturers in these countries are increasingly able to absorb technology, and their manufacturing efficiency is expected to progress as technology becomes increasingly available and utilized.

  10. Advancements in asphere manufacturing

    NASA Astrophysics Data System (ADS)

    Fess, Edward; DeFisher, Scott

    2013-09-01

    Aspheric optics can pose a challenge to the manufacturing community due to the surface shape and level of quality required. The aspheric surface may have inflection points that limit the usable tool size during manufacturing, or there may be a stringent tolerance on the slope for mid-spatial frequencies that is problematic for sub-aperture finishing techniques to achieve. As aspheres become more commonplace in the optics community, requests for more complex aspheres have risen. OptiPro Systems has been developing technologies to create a robust aspheric manufacturing process. Contour deterministic microgrinding is performed on a Pro80 or eSX platform. These platforms utilize software and the latest advancements in machine motion to accurately contour the aspheric shape. The optics are then finished using UltraForm Finishing (UFF), a sub-aperture polishing process. This process has the capability to adjust the diameter and compliance of the polishing lap to allow for finishing over a wide range of shapes and conditions. Finally, the aspheric surfaces are qualified using an OptiTrace contact profilometer or an UltraSurf non-contact 3D surface scanner. The OptiTrace uses a stylus to scan across the surface of the part, and the UltraSurf utilizes several different optical pens to scan the surface and generate a topographical map of the surface under test. This presentation will focus on the challenges of asphere manufacturing, how OptiPro has implemented its technologies to address those challenges, and surface data for analysis.
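
    The surfaces at issue are typically specified by the standard even-asphere sag equation; a minimal sketch of that form (the coefficients below are made-up illustrative values, not OptiPro data):

```python
import math

# Sag of a standard even asphere: z(r) = c r^2 / (1 + sqrt(1 - (1+k) c^2 r^2))
#                                        + A4 r^4 + A6 r^6 + ...
# c = 1/R is the vertex curvature, k the conic constant.
def asphere_sag(r, c, k, coeffs):
    conic = c * r**2 / (1 + math.sqrt(1 - (1 + k) * c**2 * r**2))
    poly = sum(a * r**(2 * (i + 2)) for i, a in enumerate(coeffs))  # r^4, r^6, ...
    return conic + poly

# Example: 100 mm vertex radius, parabolic conic (k = -1), one
# hypothetical 4th-order term; r and z in mm.
z = asphere_sag(r=10.0, c=1 / 100.0, k=-1.0, coeffs=[1e-7])
```

    Inflection points of z(r), where the local slope reverses curvature, are what limit the usable tool size during grinding and polishing.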

  11. Recent advances on electromigration in very-large-scale-integration of interconnects

    NASA Astrophysics Data System (ADS)

    Tu, K. N.

    2003-11-01

    Today, the price of building a factory to produce submicron size electronic devices on 300 mm Si wafers runs to billions of dollars. In processing a 300 mm Si wafer, over half of the production cost comes from fabricating the very-large-scale-integration of the interconnect metallization. The most serious and persistent reliability problem in interconnect metallization is electromigration. In the past 40 years, the microelectronic industry has used Al as the on-chip conductor. Due to miniaturization, however, a better conductor is needed in terms of resistance-capacitance delay, electromigration resistance, and cost of production. The industry has turned to Cu as the on-chip conductor, so the question of electromigration in Cu metallization must be examined. On the basis of what we have learned from the use of Al in devices, we review here what is current with respect to electromigration in Cu. In addition, the system of interconnects on an advanced device includes flip chip solder joints, which now tend to become weak links in the system due to, surprisingly, electromigration. In this review, we compare the electromigration in Al, Cu, and solder on the basis of the ratio of their melting point to the device operating temperature of 100 °C. Accordingly, grain boundary diffusion, surface diffusion, and lattice diffusion dominate, respectively, the electromigration in Al, Cu, and solder. In turn, the effects of microstructure, solute, and stress on electromigration in Al, Cu, and solder are different. The stress induced by electromigration in Cu/low-k interconnects will be a very serious issue since the low-k dielectric (with a value of k around 2) tends to be weak mechanically. In a multilevel interconnect, an electromigration force due to current crowding, acting normal to current flow, has been proposed to explain why many electromigration-induced damage sites occur away from the high current density region. In mean-time-to-failure analysis, the time taken to nucleate
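
    The melting-point argument in the abstract can be made concrete with handbook values; a sketch computing the homologous temperature T/Tm (the reciprocal framing of the abstract's ratio), assuming eutectic SnPb for the solder:

```python
# Homologous temperature T/Tm at the 100 C operating point, using
# handbook melting points (eutectic SnPb assumed for "solder").
T_op = 100 + 273.15  # device operating temperature, K
melting_K = {"Al": 933.5, "Cu": 1357.8, "SnPb solder": 456.2}
ratios = {m: T_op / Tm for m, Tm in melting_K.items()}
# High T/Tm (solder, ~0.82): lattice diffusion dominates.
# Intermediate (Al, ~0.40): grain boundary diffusion dominates.
# Low (Cu, ~0.27): surface diffusion dominates.
```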

  12. Data processing 2: Advancements in large scale data processing systems for remote sensing

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A.

    1972-01-01

    The development of large scale data processing systems for remote sensing is studied by evaluating: (1) the suitability of several sensor types with regard to producing data required for multispectral machine analysis; (2) various types of data preprocessing necessary to prepare such data for analysis; and (3) transfer of machine processing techniques for earth resources data to user community.

  13. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    SciTech Connect

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods developed and algorithms will, however, be of wider interest.

  14. Advanced Manufacture of Reflectors

    SciTech Connect

    Angel, Roger

    2014-12-17

    The main project objective has been to develop an advanced gravity sag method for molding large glass solar reflectors with either line or point focus, and with long or short focal length. The method involves taking standard sized squares of glass, 1.65 m x 1.65 m, and shaping them by gravity sag into precision steel molds. The method is designed for high volume manufacture when incorporated into a production line with separate pre-heating and cooling. The performance objectives for the self-supporting glass mirrors made by this project include mirror optical accuracy of 2 mrad root mean square (RMS), requiring surface slope errors <1 mrad rms, a target not met by current production of solar reflectors. Our objective also included development of new methods for rapidly shaping glass mirrors and coating them for higher reflectivity and soil resistance. Reflectivity of 95% for a glass mirror with anti-soil coating was targeted, compared to the present ~94% with no anti-soil coating. Our mirror cost objective is ~$20/m2 in 2020, a significant reduction compared to the present ~$35/m2 for solar trough mirrors produced for trough solar plants. During the first year a custom batch furnace was built to develop the method with high power radiative heating to simulate transfer of glass into a hot slumping zone in a production line. To preserve the original high polish of the float glass on both front and back surfaces, as required for a second surface mirror, the mold surface is machined to the required shape as grooves which intersect the glass at cusps, reducing the mold contact area to significantly less than 1%. The mold surface is gold-plated to reflect thermal radiation. Optical metrology of glass replicas made with the system has been carried out with a novel, custom-built test system. This test provides collimated, vertically-oriented parallel beams from a linear array of co-aligned lasers translated in a perpendicular direction across the reflector. Deviations of

  15. Advanced Computing for Manufacturing.

    ERIC Educational Resources Information Center

    Erisman, Albert M.; Neves, Kenneth W.

    1987-01-01

    Discusses ways that supercomputers are being used in the manufacturing industry, including the design and production of airplanes and automobiles. Describes problems that need to be solved in the next few years for supercomputers to assume a major role in industry. (TW)

  16. An Online Scheduling Algorithm with Advance Reservation for Large-Scale Data Transfers

    SciTech Connect

    Balman, Mehmet; Kosar, Tevfik

    2010-05-20

    Scientific applications and experimental facilities generate massive data sets that need to be transferred to remote collaborating sites for sharing, processing, and long-term storage. In order to support increasingly data-intensive science, next-generation research networks have been deployed to provide high-speed on-demand data access between collaborating institutions. In this paper, we present a practical model for online data scheduling in which data movement operations are scheduled in advance for end-to-end high performance transfers. In our model, the data scheduler interacts with reservation managers and data transfer nodes in order to reserve available bandwidth to guarantee completion of jobs that are accepted and confirmed to satisfy the preferred time constraint given by the user. Our methodology improves current systems by allowing researchers and higher-level meta-schedulers to use data placement as a service, where they can plan ahead and reserve scheduler time in advance for their data movement operations. We have implemented our algorithm and examined possible techniques for incorporation into current reservation frameworks. Performance measurements confirm that the proposed algorithm is efficient and scalable.
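
    The advance-reservation idea can be sketched in a few lines (our illustration, not the authors' algorithm): given already-confirmed reservations on a shared link, find the earliest window in which a new transfer fits without oversubscribing capacity.

```python
# Minimal advance bandwidth reservation sketch. Reservations are
# (start, end, rate) tuples on a link of fixed capacity; a new job
# asks for `rate` until its `volume` completes.
def earliest_reservation(volume, rate, capacity, reservations, horizon):
    duration = volume / rate
    # Candidate start times: now, or whenever an existing reservation ends.
    candidates = sorted({0.0} | {end for _, end, _ in reservations})
    for start in candidates:
        end = start + duration
        if end > horizon:
            break
        # Check the link is never oversubscribed during [start, end):
        # it suffices to test the left edge of each sub-interval.
        edges = {start, end} | {t for s, e, _ in reservations
                                for t in (s, e) if start < t < end}
        ok = all(
            sum(r for s, e, r in reservations if s <= t < e) + rate <= capacity
            for t in sorted(edges)[:-1]
        )
        if ok:
            return (start, end)
    return None  # cannot be confirmed within the user's horizon

# 10 Gb/s link with one existing 8 Gb/s reservation for t in [0, 50) s;
# a 400 Gb transfer at 4 Gb/s must wait until t = 50.
slot = earliest_reservation(volume=400, rate=4, capacity=10,
                            reservations=[(0, 50, 8)], horizon=1000)
```

    Returning `None` corresponds to rejecting the job at admission time, which is what lets accepted jobs be guaranteed their completion time.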

  17. Unsteady blade surface pressures on a large-scale advanced propeller - Prediction and data

    NASA Technical Reports Server (NTRS)

    Nallasamy, M.; Groeneweg, J. F.

    1990-01-01

    An unsteady three dimensional Euler analysis technique is employed to compute the flowfield of an advanced propeller operating at an angle of attack. The predicted blade pressure waveforms are compared with wind tunnel data at two Mach numbers, 0.5 and 0.2. The inflow angle is three degrees. For an inflow Mach number of 0.5, the predicted pressure response is in fair agreement with data: the predicted phases of the waveforms are in close agreement with data while the magnitudes are underpredicted. At the low Mach number of 0.2 (take-off) the numerical solution shows the formation of a leading edge vortex which is in qualitative agreement with measurements. However, the highly nonlinear pressure response measured on the blade suction surface is not captured in the present inviscid analysis.

  18. Unsteady blade-surface pressures on a large-scale advanced propeller: Prediction and data

    NASA Technical Reports Server (NTRS)

    Nallasamy, M.; Groeneweg, J. F.

    1990-01-01

    An unsteady 3-D Euler analysis technique is employed to compute the flow field of an advanced propeller operating at an angle of attack. The predicted blade pressure waveforms are compared with wind tunnel data at two Mach numbers, 0.5 and 0.2. The inflow angle is three degrees. For an inflow Mach number of 0.5, the predicted pressure response is in fair agreement with data: the predicted phases of the waveforms are in close agreement with data while the magnitudes are underpredicted. At the low Mach number of 0.2 (takeoff), the numerical solution shows the formation of a leading edge vortex which is in qualitative agreement with measurements. However, the highly nonlinear pressure response measured on the blade suction surface is not captured in the present inviscid analysis.

  19. Repeated large-scale retreat and advance of Totten Glacier indicated by inland bed erosion

    NASA Astrophysics Data System (ADS)

    Aitken, A. R. A.; Roberts, J. L.; Ommen, T. D. Van; Young, D. A.; Golledge, N. R.; Greenbaum, J. S.; Blankenship, D. D.; Siegert, M. J.

    2016-05-01

    Climate variations cause ice sheets to retreat and advance, raising or lowering sea level by metres to decametres. The basic relationship is unambiguous, but the timing, magnitude and sources of sea-level change remain unclear; in particular, the contribution of the East Antarctic Ice Sheet (EAIS) is ill defined, restricting our appreciation of potential future change. Several lines of evidence suggest possible collapse of the Totten Glacier into interior basins during past warm periods, most notably the Pliocene epoch, causing several metres of sea-level rise. However, the structure and long-term evolution of the ice sheet in this region have been understood insufficiently to constrain past ice-sheet extents. Here we show that deep ice-sheet erosion—enough to expose basement rocks—has occurred in two regions: the head of the Totten Glacier, within 150 kilometres of today’s grounding line; and deep within the Sabrina Subglacial Basin, 350–550 kilometres from this grounding line. Our results, based on ICECAP aerogeophysical data, demarcate the marginal zones of two distinct quasi-stable EAIS configurations, corresponding to the ‘modern-scale’ ice sheet (with a marginal zone near the present ice-sheet margin) and the retreated ice sheet (with the marginal zone located far inland). The transitional region of 200–250 kilometres in width is less eroded, suggesting shorter-lived exposure to eroding conditions during repeated retreat–advance events, which are probably driven by ocean-forced instabilities. Representative ice-sheet models indicate that the global sea-level increase resulting from retreat in this sector can be up to 0.9 metres in the modern-scale configuration, and exceeds 2 metres in the retreated configuration.

  20. Repeated large-scale retreat and advance of Totten Glacier indicated by inland bed erosion.

    PubMed

    Aitken, A R A; Roberts, J L; van Ommen, T D; Young, D A; Golledge, N R; Greenbaum, J S; Blankenship, D D; Siegert, M J

    2016-05-19

    Climate variations cause ice sheets to retreat and advance, raising or lowering sea level by metres to decametres. The basic relationship is unambiguous, but the timing, magnitude and sources of sea-level change remain unclear; in particular, the contribution of the East Antarctic Ice Sheet (EAIS) is ill defined, restricting our appreciation of potential future change. Several lines of evidence suggest possible collapse of the Totten Glacier into interior basins during past warm periods, most notably the Pliocene epoch, causing several metres of sea-level rise. However, the structure and long-term evolution of the ice sheet in this region have been understood insufficiently to constrain past ice-sheet extents. Here we show that deep ice-sheet erosion-enough to expose basement rocks-has occurred in two regions: the head of the Totten Glacier, within 150 kilometres of today's grounding line; and deep within the Sabrina Subglacial Basin, 350-550 kilometres from this grounding line. Our results, based on ICECAP aerogeophysical data, demarcate the marginal zones of two distinct quasi-stable EAIS configurations, corresponding to the 'modern-scale' ice sheet (with a marginal zone near the present ice-sheet margin) and the retreated ice sheet (with the marginal zone located far inland). The transitional region of 200-250 kilometres in width is less eroded, suggesting shorter-lived exposure to eroding conditions during repeated retreat-advance events, which are probably driven by ocean-forced instabilities. Representative ice-sheet models indicate that the global sea-level increase resulting from retreat in this sector can be up to 0.9 metres in the modern-scale configuration, and exceeds 2 metres in the retreated configuration. PMID:27193684

  1. Advancing effects analysis for integrated, large-scale wildfire risk assessment.

    PubMed

    Thompson, Matthew P; Calkin, David E; Gilbertson-Day, Julie W; Ager, Alan A

    2011-08-01

    In this article, we describe the design and development of a quantitative, geospatial risk assessment tool intended to facilitate monitoring trends in wildfire risk over time and to provide information useful in prioritizing fuels treatments and mitigation measures. The research effort is designed to develop, from a strategic view, a first approximation of how both fire likelihood and intensity influence risk to social, economic, and ecological values at regional and national scales. Three main components are required to generate wildfire risk outputs: (1) burn probability maps generated from wildfire simulations, (2) spatially identified highly valued resources (HVRs), and (3) response functions that describe the effects of fire (beneficial or detrimental) on the HVR. Analyzing fire effects has to date presented a major challenge to integrated risk assessments, due to a limited understanding of the type and magnitude of changes wrought by wildfire to ecological and other nonmarket values. This work advances wildfire effects analysis, recognizing knowledge uncertainty and appropriately managing it through the use of an expert systems approach. Specifically, this work entailed consultation with 10 fire and fuels program management officials from federal agencies with fire management responsibilities in order to define quantitative resource response relationships as a function of fire intensity. Here, we demonstrate a proof-of-concept application of the wildland fire risk assessment tool, using the state of Oregon as a case study. PMID:20981570
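
    The three components combine in a straightforward expected-value calculation; a minimal sketch with hypothetical numbers (the response-function values here are invented for illustration, not the experts' elicited values):

```python
# Expected net value change (NVC) for one highly valued resource at
# one location: burn probability per fire-intensity class times the
# response-function value (percent loss or benefit) for that class.
def expected_nvc(burn_probs, response):
    """burn_probs[i]: P(burn at intensity class i);
    response[i]: % change in resource value at class i
    (negative = loss, positive = benefit)."""
    return sum(p * r for p, r in zip(burn_probs, response))

# Three hypothetical flame-length classes for a fire-adapted resource
# that benefits from low-intensity fire but is damaged at high intensity:
envc = expected_nvc(burn_probs=[0.010, 0.004, 0.001],
                    response=[+20, -40, -90])
```

    Summing such values over mapped pixels and resources yields the integrated risk outputs used to compare treatment priorities.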

  2. Generation of large scale urban environments to support advanced sensor and seeker simulation

    NASA Astrophysics Data System (ADS)

    Giuliani, Joseph; Hershey, Daniel; McKeown, David, Jr.; Willis, Carla; Van, Tan

    2009-05-01

    One of the key aspects of the design of a next-generation weapon system is the need to operate in cluttered and complex urban environments. Simulation systems rely on accurate representation of these environments and require automated software tools to construct the underlying 3D geometry and associated spectral and material properties, which are then formatted for various objective seeker simulation systems. Under an Air Force Small Business Innovative Research (SBIR) contract, we have developed an automated process to generate 3D urban environments with user-defined properties. These environments can be composed from a wide variety of source materials, including vector source data, pre-existing 3D models, and digital elevation models, and rapidly organized into a geo-specific visual simulation database. This intermediate representation can be easily inspected in the visible spectrum for content and organization and interactively queried for accuracy. Once the database contains the required contents, it can be exported into specific synthetic scene generation runtime formats, preserving the relationship between geometry and material properties. To date, an exporter for the Irma simulation system developed and maintained by AFRL/Eglin has been created, and a second exporter to the Real Time Composite Hardbody and Missile Plume (CHAMP) simulation system is currently being developed for real-time use. This process supports significantly more complex target environments than previous approaches to database generation. In this paper we describe the content-creation capabilities for simulation of advanced seeker processing algorithms and for sensor stimulation, including the overall database compilation process and sample databases produced and exported for the Irma runtime system. We also discuss the addition of object dynamics and viewer dynamics within the visual simulation into the Irma runtime environment.

  3. Physical control oriented model of large scale refrigerators to synthesize advanced control schemes. Design, validation, and first control results

    NASA Astrophysics Data System (ADS)

    Bonne, François; Alamir, Mazen; Bonnay, Patrick

    2014-01-01

    In this paper, a physical method to obtain control-oriented dynamical models of large scale cryogenic refrigerators is proposed, in order to synthesize model-based advanced control schemes. These schemes aim to replace classical approaches designed from user experience, usually based on many independent PI controllers. This is particularly useful in the case where cryoplants are subjected to large pulsed thermal loads, expected to take place in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). Advanced control schemes offer better perturbation immunity and rejection, and thus safer utilization of cryoplants. The paper gives details on how basic components used in the field of large scale helium refrigeration (especially those present on the 400W @1.8K helium test facility at CEA-Grenoble) are modeled and assembled to obtain the complete dynamic description of controllable subsystems of the refrigerator (namely the Joule-Thomson Cycle, the Brayton Cycle, the Liquid Nitrogen Precooling Unit and the Warm Compression Station). The complete 400W @1.8K (in the 400W @4.4K configuration) helium test facility model is then validated against experimental data, and the optimal control of both the Joule-Thomson valve and the turbine valve is proposed to stabilize the plant under highly variable thermal loads. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.

  4. Physical control oriented model of large scale refrigerators to synthesize advanced control schemes. Design, validation, and first control results

    SciTech Connect

    Bonne, François; Bonnay, Patrick

    2014-01-29

    In this paper, a physical method to obtain control-oriented dynamical models of large scale cryogenic refrigerators is proposed, in order to synthesize model-based advanced control schemes. These schemes aim to replace classical approaches designed from user experience, usually based on many independent PI controllers. This is particularly useful in the case where cryoplants are subjected to large pulsed thermal loads, expected to take place in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). Advanced control schemes offer better perturbation immunity and rejection, and thus safer utilization of cryoplants. The paper gives details on how basic components used in the field of large scale helium refrigeration (especially those present on the 400W @1.8K helium test facility at CEA-Grenoble) are modeled and assembled to obtain the complete dynamic description of controllable subsystems of the refrigerator (namely the Joule-Thomson Cycle, the Brayton Cycle, the Liquid Nitrogen Precooling Unit and the Warm Compression Station). The complete 400W @1.8K (in the 400W @4.4K configuration) helium test facility model is then validated against experimental data, and the optimal control of both the Joule-Thomson valve and the turbine valve is proposed to stabilize the plant under highly variable thermal loads. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.

  5. Parallel supercomputing: Advanced methods, algorithms and software for large-scale problems. Final report, August 1, 1987--July 31, 1994

    SciTech Connect

    Carey, G.F.; Young, D.M.

    1994-12-31

    The focus of the subject DOE sponsored research concerns parallel methods, algorithms, and software for complex applications such as those in coupled fluid flow and heat transfer. The research has been directed principally toward the solution of large-scale PDE problems using iterative solvers for finite differences and finite elements on advanced computer architectures. This work embraces parallel domain decomposition, element-by-element, spectral, and multilevel schemes with adaptive parameter determination, rational iteration and related issues. In addition to the fundamental questions related to developing new methods and mapping these to parallel computers, there are important software issues. The group has played a significant role in the development of software both for iterative solvers and also for finite element codes. The research in computational fluid dynamics (CFD) led to sustained multi-Gigaflop performance rates for parallel-vector computations of realistic large scale applications (not computational kernels alone). The main application areas for these performance studies have been two-dimensional problems in CFD. Over the course of this DOE sponsored research significant progress has been made. A report of the progression of the research is given and at the end of the report is a list of related publications and presentations over the entire grant period.
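
    An element-by-element scheme of the kind the abstract names can be sketched in a few lines: the matrix-vector product needed by an iterative solver is assembled on the fly from element matrices, so no global sparse matrix is stored and the element loop is the natural unit for parallel partitioning (a 1D linear finite element illustration of the general idea, not the project's code):

```python
import numpy as np

# Element-by-element matrix-vector product y = A x for the 1D Laplacian
# discretized with linear finite elements of size h: each element
# contributes a 2x2 stiffness block, and elements can be processed
# concurrently (with a reduction on shared nodes).
def ebe_matvec(x, n_elem, h):
    ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
    y = np.zeros_like(x)
    for e in range(n_elem):        # independent per element -> parallelizable
        dofs = [e, e + 1]          # the two nodes of element e
        y[dofs] += ke @ x[dofs]
    return y

# Sanity check: for nodal values of a linear function, the interior
# entries of A x vanish (discrete second derivative of a linear field).
x = np.linspace(0.0, 1.0, 11)
y = ebe_matvec(x, n_elem=10, h=0.1)
```

    Domain decomposition applies the same idea one level up: each processor owns a contiguous block of elements and only the interface nodes require communication.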

  6. Large-scale retreat and advance of shallow seas in Southeast Asia driven by mantle flow

    NASA Astrophysics Data System (ADS)

    Zahirovic, Sabin; Flament, Nicolas; Dietmar Müller, R.; Seton, Maria; Gurnis, Michael

    2016-04-01

    The Indonesian islands and surrounding region represent one of the most submerged, low-lying continental areas on Earth. Almost half of this region, known as Sundaland, is presently inundated by a shallow sea. The role of mantle convection in driving long-wavelength topography and vertical motion of the lithosphere in this region has largely been ignored when interpreting regional stratigraphic sections, despite a consensus that Southeast Asia is presently situated on a "dynamic topography low" resulting from long-term post-Pangea subduction. However, dynamic topography is typically described as a temporally and spatially transient process, implying that Sundaland may have experienced significant vertical motions in the geological past that must be considered when interpreting relative sea level changes and the paleogeographic indicators of advancing and retreating shallow seas. Although the present-day low regional elevation has been attributed to the massive volume of oceanic slabs sinking in the mantle beneath Southeast Asia, a Late Cretaceous to Eocene regional unconformity indicates that shallow seas retreated following regional flooding during the mid-Cretaceous sea level highstand. During the Eocene, less than one fifth of Sundaland was submerged, despite global sea level being ~200 m higher than at present. The regional nature of the switch from marine to terrestrial environments, out of sync with eustatic sea levels, suggests that broad mantle-driven dynamic uplift may have led to the emergence of Sundaland in the Late Cretaceous and Paleocene. We use numerical forward modelling of plate tectonics and mantle convection, and compare the predicted trends of dynamic topography with evidence from regional paleogeography and eustasy, to determine the extent to which mantle-driven vertical motions of the lithosphere have influenced regional basin histories in Southeast Asia.
A Late Cretaceous collision of Gondwana-derived terranes with Sundaland choked

  7. Large-scale Manufacturing of Nanoparticulate-based Lubrication Additives for Improved Energy Efficiency and Reduced Emissions

    SciTech Connect

    Erdemir, Ali

    2013-09-26

    emissions was also a major reason. The transportation sector alone consumes about 13 million barrels of crude oil per day (nearly 60% of which is imported) and is responsible for about 30% of the CO₂ emission. When we consider manufacturing and other energy-intensive industrial processes, the amount of petroleum being consumed due to friction and wear reaches more than 20 million barrels per day (from official energy statistics, U.S. Energy Information Administration). Frequent remanufacturing and/or replacement of worn parts due to friction-, wear-, and scuffing-related degradations also consume significant amounts of energy and give rise to additional CO₂ emission. Overall, the total annual cost of friction- and wear-related energy and material losses is estimated to be rather significant (i.e., as much as 5% of the gross national products of highly industrialized nations). It is projected that more than half of the total friction- and wear-related energy losses can be recovered by developing and implementing advanced friction and wear control technologies. In transportation vehicles alone, 10% to 15% of the fuel energy is spent to overcome friction. If we can cut down the friction- and wear-related energy losses by half, then we can potentially save up to 1.5 million barrels of petroleum per day. Also, less friction and wear would mean less energy consumption as well as less carbon emissions and hazardous byproducts being generated and released to the environment. New and more robust anti-friction and -wear control technologies may thus have a significant positive impact on improving the efficiency and environmental cleanliness of the current legacy fleet and future transportation systems. Effective control of friction in other industrial sectors such as manufacturing, power generation, mining and oil exploration, and agricultural and earthmoving machinery may bring more energy savings. 
Therefore, this project was timely and responsive to the energy and

  8. Ohio Advanced Energy Manufacturing Center

    SciTech Connect

    Kimberly Gibson; Mark Norfolk

    2012-07-30

    The program goal of the Ohio Advanced Energy Manufacturing Center (OAEMC) is to support advanced energy manufacturing and to create responsive manufacturing clusters that will support the production of advanced energy and energy-efficient products to help ensure the nation's energy and environmental security. This goal cuts across a number of existing industry segments critical to the nation's future. Many of the advanced energy businesses are starting to make the transition from technology development to commercial production. Historically, this transition from laboratory prototypes through initial production for early adopters to full production for mass markets has taken several years. Developing and implementing manufacturing technology to enable production at a price point the market will accept is a key step. Since these start-up operations are configured to advance the technology readiness of the core energy technology, they have neither the expertise nor the resources to address manufacturing readiness issues they encounter as the technology advances toward market entry. Given the economic realities of today's business environment, finding ways to accelerate this transition can make the difference between success and failure for a new product or business. The advanced energy industry touches a wide range of industry segments that are not accustomed to working together in complex supply chains to serve large markets such as automotive and construction. During its first three years, the Center has catalyzed the communication between companies and industry groups that serve the wide range of advanced energy markets. The Center has also found areas of common concern, and worked to help companies address these concerns on a segment or industry basis rather than having each company work to solve common problems individually. 
EWI worked with three industries through public-private partnerships to sew together disparate segments helping to promote overall industry

  9. Advanced optical manufacturing digital integrated system

    NASA Astrophysics Data System (ADS)

    Tao, Yizheng; Li, Xinglan; Li, Wei; Tang, Dingyong

    2012-10-01

    The development of advanced optical manufacturing technology must keep pace with modern scientific and technological progress, in order to address the problems of low efficiency, low yield of finished product, and poor repeatability and consistency in manufacturing large, high-precision optical components. Applying a business-driven approach and the Rational Unified Process method, this paper studies the advanced optical manufacturing process flow and the requirements of an Advanced Optical Manufacturing Integrated System, and puts forward its architecture and key technologies. The optical-component core and the manufacturing-process-driven design of the Advanced Optical Manufacturing Digital Integrated System are described. The results show that the system performs well, realizing dynamic planning of the manufacturing process, integrating information, and improving manufacturing productivity.

  10. Advances in a framework to compare bio-dosimetry methods for triage in large-scale radiation events

    PubMed Central

    Flood, Ann Barry; Boyle, Holly K.; Du, Gaixin; Demidenko, Eugene; Nicolalde, Roberto J.; Williams, Benjamin B.; Swartz, Harold M.

    2014-01-01

    Planning and preparation for a large-scale nuclear event would be advanced by assessing the applicability of potentially available bio-dosimetry methods. Using an updated comparative framework, the performance of six bio-dosimetry methods was compared for five different population sizes (100–1 000 000) and two rates for initiating processing of the marker (15 or 15 000 people per hour), with four additional time windows. These updated factors are extrinsic to the bio-dosimetry methods themselves but directly affect each method's ability to begin processing individuals and the size of the population that can be accommodated. The results indicate that increased population size, along with severely compromised infrastructure, increases the time needed to triage, which decreases the usefulness of many time-intensive dosimetry methods. This framework and model for evaluating bio-dosimetry provide important information for policy-makers and response planners to facilitate evaluation of each method and should advance coordination of these methods into effective triage plans. PMID:24729594
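
    The framework's central throughput question can be illustrated with a back-of-envelope calculation; the startup delay and rates below are illustrative, not the paper's calibrated parameters.

```python
# Simple throughput model: time to triage = startup delay + population / rate.
# Numbers are invented for illustration, not taken from the study.

def time_to_triage(population, rate_per_hour, startup_hours=0.0):
    """Hours until an entire population is processed by one bio-dosimetry method."""
    return startup_hours + population / rate_per_hour

# Compare a slow marker (15 people/hour) against a fast one (15,000 people/hour)
# on a population of one million, each with a hypothetical 24 h startup delay.
slow = time_to_triage(1_000_000, 15, startup_hours=24)
fast = time_to_triage(1_000_000, 15_000, startup_hours=24)
```

    Even this crude model shows why processing rate dominates at large population sizes: the slow method needs years of continuous operation, the fast one under four days.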

  11. Design and analysis of drum lathe for manufacturing large-scale optical microstructured surface and load characteristics of aerostatic spindle

    NASA Astrophysics Data System (ADS)

    Wu, Dongxu; Qiao, Zheng; Wang, Bo; Wang, Huiming; Li, Guo

    2014-08-01

    In this paper, a four-axis ultra-precision lathe for machining large-scale drum moulds with microstructured surfaces is presented. First, because of the large dimensions and weight of the drum workpiece, as well as the high machining-accuracy requirements, the design guidelines and component parts of the drum lathe are introduced in detail, including the control system, moving and driving components, and position feedback system. Additionally, because the weight of the drum workpiece causes structural deformation of the lathe, this paper analyses the effect of that deformation on machining accuracy using ANSYS. The position change is approximately 16.9 nm in the X-direction (the sensitive direction), which is negligible. Finally, to study the impact of bearing parameters on the load characteristics of the aerostatic journal bearing, a series of simulations is carried out in the well-known computational fluid dynamics (CFD) software FLUENT. The results show that the aerostatic spindle has superior load capacity and stiffness; with aerostatic spindles in both the headstock and tailstock, the lathe can bear drum workpieces weighing up to 1000 kg.

  12. New techniques in large scale metrology toolset data mining to accelerate integrated chip technology development and increase manufacturing efficiencies

    NASA Astrophysics Data System (ADS)

    Solecky, Eric; Rana, Narender; Minns, Allan; Gustafson, Carol; Lindo, Patrick; Cornell, Roger; Llanos, Paul

    2014-04-01

    Today, metrology toolsets report more information than ever. This information covers not only process performance but also metrology toolset and recipe performance, through various diagnostic metrics. This is most evident on the Critical Dimension Scanning Electron Microscope (CD-SEM): state-of-the-art CD-SEMs report over 250 individual data points and several images per measurement. A state-of-the-art fab with numerous part numbers typically generates at least 20 TB of data per year on the CD-SEM fleet alone, pushing metrology toolsets into the big-data regime. Most of this comes from improvements in throughput, increased sampling, and new data outputs relative to previous generations of tools. These new data outputs are often useful in determining whether the process, metrology recipe, or tool is deviating from an ideal state; many issues would be missed by looking only at a key process-control metric such as the bottom critical dimension (CD), or at a small subset of the available information. By leveraging the entire data set, the mean time to detect issues and find their root cause can be significantly reduced. In this paper a new data mining system that achieves this goal is presented. Examples are shown with a focus on the benefits realized using the new system, which helps speed up development cycles of learning and reduce manufacturing cycle time. The paper concludes by discussing future directions to make this capability more effective.
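
    As a rough illustration of the paper's premise, the sketch below screens every reported metric with a z-score test rather than monitoring the bottom CD alone. The metric names, values, and threshold are invented for the example.

```python
import statistics

def flag_outliers(records, threshold=3.0):
    """Return indices of measurements where ANY metric deviates > threshold sigma."""
    flagged = set()
    for metric in records[0]:
        values = [r[metric] for r in records]
        mean, sd = statistics.fmean(values), statistics.pstdev(values)
        if sd == 0:
            continue  # constant metric carries no signal
        for i, v in enumerate(values):
            if abs(v - mean) / sd > threshold:
                flagged.add(i)
    return sorted(flagged)

# 50 nominal measurements plus one whose (hypothetical) focus score is degraded
# while its bottom CD looks perfectly normal -- invisible to CD-only monitoring.
records = [{"bottom_cd_nm": 32.0 + 0.01 * i, "focus_score": 0.95} for i in range(50)]
records.append({"bottom_cd_nm": 32.2, "focus_score": 0.40})
outliers = flag_outliers(records)
```

    A production system would of course use far richer models than a z-score, but the point stands: the deviating measurement is caught only because the full diagnostic output is examined.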

  13. Advanced Technology Composite Fuselage - Manufacturing

    NASA Technical Reports Server (NTRS)

    Wilden, K. S.; Harris, C. G.; Flynn, B. W.; Gessel, M. G.; Scholz, D. B.; Stawski, S.; Winston, V.

    1997-01-01

    The goal of Boeing's Advanced Technology Composite Aircraft Structures (ATCAS) program is to develop the technology required for cost- and weight-efficient use of composite materials in transport fuselage structure. Carbon fiber reinforced epoxy was chosen for fuselage skins and stiffening elements, and for passenger and cargo floor structures. The automated fiber placement (AFP) process was selected for fabrication of stringer-stiffened and sandwich skin panels. Circumferential and window frames were braided and resin transfer molded (RTM'd). Pultrusion was selected for fabrication of floor beams and constant-section stiffening elements. Drape forming was chosen for stringers and other stiffening elements cocured to skin structures. Significant process development efforts included AFP, braiding, RTM, autoclave cure, and core blanket fabrication for both sandwich and stiffened-skin structure. Outer-mold-line and inner-mold-line tooling was developed for sandwich structures and stiffened-skin structure. The effect of design details, process control, and tool design on repeatable, dimensionally stable structure for low-cost barrel assembly was assessed. Subcomponent panels representative of crown, keel, and side quadrant panels were fabricated to assess scale-up effects and manufacturing anomalies for full-scale structures. A manufacturing database including time studies, part quality, and manufacturing plans was generated to support the development of designs and analytical models to assess cost, structural performance, and dimensional tolerance.

  14. Research on advanced photovoltaic manufacturing technology

    SciTech Connect

    Jester, T.; Eberspacher, C.

    1991-11-01

    This report outlines opportunities for significantly advancing the scale and economy of high-volume manufacturing of high-efficiency photovoltaic (PV) modules. We propose to pursue a concurrent effort to advance existing crystalline silicon module manufacturing technology and to implement thin film CuInSe₂ (CIS) module manufacturing. This combination of commercial-scale manufacturing of high-efficiency crystalline silicon modules and of pilot-scale manufacturing of low-cost thin film CIS technology will support continued, rapid growth of the US PV industry.

  15. Manufacturing development of DC-10 advanced rudder

    NASA Technical Reports Server (NTRS)

    Cominsky, A.

    1979-01-01

    The design, manufacture, and ground test activities during development of production methods for an advanced composite rudder for the DC-10 transport aircraft are described. The advanced composite aft rudder is satisfactory for airline service and a cost saving in a full production manufacturing mode is anticipated.

  16. Advancing a distributed multi-scale computing framework for large-scale high-throughput discovery in materials science

    NASA Astrophysics Data System (ADS)

    Knap, J.; Spear, C. E.; Borodin, O.; Leiter, K. W.

    2015-10-01

    We describe the development of a large-scale high-throughput application for discovery in materials science. Our point of departure is a computational framework for distributed multi-scale computation. We augment the original framework with a specialized module whose role is to route evaluation requests needed by the high-throughput application to a collection of available computational resources. We evaluate the feasibility and performance of the resulting high-throughput computational framework by carrying out a high-throughput study of battery solvents. Our results indicate that distributed multi-scale computing, by virtue of its adaptive nature, is particularly well-suited for building high-throughput applications.
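
    A minimal sketch of such a routing module, under the assumption that evaluation requests can simply be farmed out to a pool of workers. Here a thread pool stands in for the framework's distributed computational resources, and the square-root "evaluation" is a placeholder for a lower-scale model (e.g. a solvent property calculation).

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(candidate):
    # Placeholder for an expensive lower-scale model evaluation.
    return candidate, candidate ** 0.5

def run_high_throughput(candidates, max_workers=4):
    """Route each evaluation request to whichever worker is free."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(evaluate, candidates))

results = run_high_throughput(range(100))
```

    The adaptive character the authors highlight comes from exactly this decoupling: the driver only issues requests, while the routing layer decides where and when each one runs.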

  17. Performance of powder-filled evacuated panel insulation in a manufactured home roof cavity: Tests in the Large Scale Climate Simulator

    SciTech Connect

    Petrie, T.W.; Kosny, J.; Childs, P.W.

    1996-03-01

    A full-scale section of half the top of a single-wide manufactured home has been studied in the Large Scale Climate Simulator (LSCS) at the Oak Ridge National Laboratory. A small roof cavity with little room for insulation at the eaves is often the case with single-wide units and limits practical ways to improve thermal performance. The purpose of the current tests was to obtain steady-state performance data for the roof cavity of the manufactured home test section when the roof cavity was insulated with fiberglass batts, blown-in rock wool insulation or combinations of these insulations and powder-filled evacuated panel (PEP) insulation. Four insulation configurations were tested: (A) a configuration with two layers of nominal R_US-7 h·ft²·°F/Btu (R_SI-1.2 m²·K/W) fiberglass batts; (B) a layer of PEPs and one layer of the fiberglass batts; (C) four layers of the fiberglass batts; and (D) an average 4.1 in. (10.4 cm) thick layer of blown-in rock wool at an average density of 2.4 lb/ft³ (38 kg/m³). Effects of additional sheathing were determined for Configurations B and C. With Configuration D over the ceiling, two layers of expanded polystyrene (EPS) boards, each about the same thickness as the PEPs, were installed over the trusses instead of the roof. Aluminum foils facing the attic and over the top layer of EPS were added. The top layer of EPS was then replaced by PEPs.
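
    For steady-state, one-dimensional heat flow, the R-values of stacked layers such as these simply add, and the overall U-factor is the reciprocal of the total. The snippet below illustrates the arithmetic with the nominal R-7 batts of Configuration A; the values are nominal ratings, not the measured LSCS results.

```python
def total_r(layers):
    """Sum R-values (h*ft^2*F/Btu) of insulation layers in series."""
    return sum(layers)

def u_factor(layers):
    """Overall heat transfer coefficient, Btu/(h*ft^2*F), for the stack."""
    return 1.0 / total_r(layers)

# Configuration A: two nominal R-7 fiberglass batts.
config_a_r = total_r([7.0, 7.0])   # nominal R-14
config_a_u = u_factor([7.0, 7.0])
```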

  18. Final Report on DOE Project entitled Dynamic Optimized Advanced Scheduling of Bandwidth Demands for Large-Scale Science Applications

    SciTech Connect

    Ramamurthy, Byravamurthy

    2014-05-05

    In this project, we developed scheduling frameworks and algorithms for the dynamic bandwidth demands of large-scale science applications. Apart from theoretical approaches such as Integer Linear Programming, Tabu Search, and genetic-algorithm heuristics, we utilized practical data from the ESnet OSCARS project (from our DOE lab partners) to conduct realistic simulations of our approaches. We disseminated our work through conference paper presentations, journal papers, and a book chapter. We addressed the problem of scheduling lightpaths over optical wavelength-division multiplexed (WDM) networks, publishing several conference and journal papers on this topic. We also addressed the joint allocation of computing, storage, and networking resources in Grid/Cloud networks and proposed energy-efficient mechanisms for operating optical WDM networks.
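
    A greedy admission rule gives a feel for the advance-reservation problem that the ILP, Tabu Search, and genetic-algorithm approaches tackle more rigorously. The slot granularity, link capacity, and requests below are invented for illustration.

```python
def schedule(requests, capacity, horizon):
    """Greedy admission of advance bandwidth reservations on one link.

    requests: list of (start_slot, end_slot, bandwidth), end slot exclusive.
    Returns the indices of accepted requests.
    """
    load = [0.0] * horizon
    accepted = []
    for i, (start, end, bw) in enumerate(requests):
        # Admit only if capacity is available over the whole time window.
        if all(load[t] + bw <= capacity for t in range(start, end)):
            for t in range(start, end):
                load[t] += bw
            accepted.append(i)
    return accepted

# Hypothetical 10 Gb/s link; the third request overlaps the first two
# at slot 3 and would exceed capacity, so it is rejected.
requests = [(0, 4, 6.0), (2, 6, 3.0), (3, 5, 2.0), (4, 8, 7.0)]
accepted = schedule(requests, capacity=10.0, horizon=8)
```

    A greedy rule like this is fast but order-dependent, which is exactly why the project explored ILP and metaheuristics for better global utilization.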

  19. EFG Technology and Diagnostic R&D for Large-Scale PV Manufacturing; Final Subcontract Report, 1 March 2002 - 31 March 2005

    SciTech Connect

    Kalejs, J.; Aurora, P.; Bathey, B.; Cao, J.; Doedderlein, J.; Gonsiorawski, R.; Heath, B.; Kubasti, J.; Mackintosh, B.; Ouellette, M.; Rosenblum, M.; Southimath, S.; Xavier, G.

    2005-10-01

    The objective of this subcontract was to carry out R&D to advance the technology, processes, and performance of RWE Schott-Solar's wafer, cell, and module manufacturing lines, and help configure these lines for scaling up of edge-defined, film-fed growth (EFG) ribbon technology to the 50-100 MW PV factory level. EFG ribbon manufacturing continued to expand during this subcontract period and now has reached a capacity of 40 MW. EFG wafer products were diversified over this time period. In addition to 10 cm x 10 cm and 10 cm x 15 cm wafer areas, which were the standard products at the beginning of this program, R&D has focused on new EFG technology to extend production to 12.5 cm x 12.5 cm EFG wafers. Cell and module production also has continued to expand in Billerica. A new 12-MW cell line was installed and brought on line in 2003. R&D on this subcontract improved cell yield and throughput, and optimized the cell performance, with special emphasis on work to speed up wafer transfer, hence enhancing throughput. Improvements of wafer transfer processes during this program have raised cell line capacity from 12 MW to over 18 MW. Optimization of module manufacturing processes was carried out on new equipment installed during a manufacturing upgrade in Billerica to a 12-MW capacity to improve yield and reliability of products.

  20. Isotope separation and advanced manufacturing technology

    NASA Astrophysics Data System (ADS)

    Carpenter, J.; Kan, T.

    This is the fourth issue of a semiannual report for the Isotope Separation and Advanced Materials Manufacturing (ISAM) Technology Program at Lawrence Livermore National Laboratory. Primary objectives include: (1) the Uranium Atomic Vapor Laser Isotope Separation (UAVLIS) process, which is being developed and prepared for deployment as an advanced uranium enrichment capability; (2) Advanced manufacturing technologies, which include industrial laser and E-beam material processing and new manufacturing technologies for uranium, plutonium, and other strategically important materials in support of DOE and other national applications. This report features progress in the ISAM Program from October 1993 through March 1994.

  1. Advanced manufacturing: Technology and international competitiveness

    SciTech Connect

    Tesar, A.

    1995-02-01

    Dramatic changes in the competitiveness of German and Japanese manufacturing have been most evident since 1988. Germany, Japan, and the United States now face similar challenges, and these challenges are clearly observed in human capital issues. Our comparison of human capital issues in German, Japanese, and US manufacturing leads us to the following key judgments: Manufacturing workforces are undergoing significant changes due to advanced manufacturing technologies. As companies are forced to develop and apply these technologies, the constituency of the manufacturing workforce (especially educational requirements, contingent labor, job content, and continuing knowledge development) is being dramatically and irreversibly altered. The new workforce requirements that result from advanced manufacturing demand a higher level of worker sophistication and responsibility.

  2. Recent advances in large-scale assembly of semiconducting inorganic nanowires and nanofibers for electronics, sensors and photovoltaics.

    PubMed

    Long, Yun-Ze; Yu, Miao; Sun, Bin; Gu, Chang-Zhi; Fan, Zhiyong

    2012-06-21

    Semiconducting inorganic nanowires (NWs), nanotubes and nanofibers have been extensively explored in recent years as potential building blocks for nanoscale electronics, optoelectronics, chemical/biological/optical sensing, and energy harvesting, storage and conversion, etc. Besides top-down approaches such as conventional lithography, nanowires are commonly grown by bottom-up approaches such as solution growth, template-guided synthesis, and the vapor-liquid-solid process at relatively low cost. Superior performance has been demonstrated using nanowire devices. However, most nanowire devices are limited to demonstrations of single devices, an initial step toward nanoelectronic circuits but not adequate for large-scale, low-cost production. Controlled and uniform assembly of nanowires with high scalability remains one of the major bottleneck challenges for materials and device integration in electronics. In this review, we aim to present recent progress toward nanowire device assembly technologies, including flow-assisted alignment, Langmuir-Blodgett assembly, the bubble-blown technique, electric/magnetic-field-directed assembly, contact/roll printing, planar growth, the bridging method, and electrospinning, etc. Their applications in high-performance, flexible electronics, sensors, photovoltaics, bioelectronic interfaces and nano-resonators are also presented. PMID:22573265

  3. NASA's National Center for Advanced Manufacturing

    NASA Technical Reports Server (NTRS)

    Vickers, John H.; Frazier, Michael K.; Munafo, Paul M. (Technical Monitor)

    2002-01-01

    An investment in the future, NASA has designated a new initiative where government, industry, and academia collaborate to meet the manufacturing needs of future space systems. The Marshall Space Flight Center in Huntsville, Alabama has the principal responsibility for implementation of the National Center for Advanced Manufacturing (NCAM). The mission of the NCAM is to build partnerships that will jointly conduct program planning and develop strategies to perform manufacturing research and technology development for critical national missions.

  4. The Advanced Manufacturing Laboratory at RPI.

    ERIC Educational Resources Information Center

    Desrochers, A.; DeRusso, P. M.

    1984-01-01

    An Advanced Manufacturing Laboratory (AML) has been established at Rensselaer Polytechnic Institute (RPI). AML courses, course objectives, instructional strategies, student experiences in design and manufacturing, and AML equipment are discussed. Overall recommendations based on student and instructor experiences are also presented. (JN)

  5. Advanced Manufacturing Training: Mobile Learning Labs

    ERIC Educational Resources Information Center

    Vukich, John C.; Ackerman, Amanda A.

    2010-01-01

    Across Colorado, manufacturing employers forecast an on-going need not only for workers who are interested in career opportunities but who are prepared to enter the advanced manufacturing industry with the necessary high-tech skills. Additionally, employers report concerns about replacing retiring workers that take with them decades of…

  6. Advancing manufacturing through computational chemistry

    SciTech Connect

    Noid, D.W.; Sumpter, B.G.; Tuzun, R.E.

    1995-12-31

    The capabilities of nanotechnology and computational chemistry are reaching a point of convergence. New computer hardware and novel computational methods have created opportunities to test proposed nanometer-scale devices, investigate molecular manufacturing, and model and predict properties of new materials. Experimental methods are also beginning to provide new capabilities that make the possibility of manufacturing various devices with atomic precision tangible. In this paper, we discuss some of the novel computational methods we have used in molecular dynamics simulations of polymer processes, neural network predictions of new materials, and simulations of proposed nano-bearings and fluid dynamics in nano-sized devices.

  7. Advancing Manufacturing Research Through Competitions

    SciTech Connect

    Balakirsky, Stephen; Madhavan, Raj

    2009-01-01

    Competitions provide a technique for building interest and collaboration in targeted research areas. This paper will present a new competition that aims to increase collaboration among universities, automation end-users, and automation manufacturers through a virtual competition. The virtual nature of the competition allows for reduced infrastructure requirements while maintaining realism in both the robotic equipment deployed and the scenarios. Details of the virtual environment as well as the competition's objectives, rules, and scoring metrics will be presented.

  8. Integration of Technology, Curriculum, and Professional Development for Advancing Middle School Mathematics: Three Large-Scale Studies

    ERIC Educational Resources Information Center

    Roschelle, Jeremy; Shechtman, Nicole; Tatar, Deborah; Hegedus, Stephen; Hopkins, Bill; Empson, Susan; Knudsen, Jennifer; Gallagher, Lawrence P.

    2010-01-01

    The authors present three studies (two randomized controlled experiments and one embedded quasi-experiment) designed to evaluate the impact of replacement units targeting student learning of advanced middle school mathematics. The studies evaluated the SimCalc approach, which integrates an interactive representational technology, paper curriculum,…

  9. Observations on computational methodologies for use in large-scale, gradient-based, multidisciplinary design incorporating advanced CFD codes

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Hou, G. J.-W.; Jones, H. E.; Taylor, A. C., III; Korivi, V. M.

    1992-01-01

    How a combination of various computational methodologies could reduce the enormous computational costs envisioned in using advanced CFD codes in gradient-based, optimized multidisciplinary design (MdD) procedures is briefly outlined. The implications of these MdD requirements for advanced CFD codes differ somewhat from those imposed by single-discipline design. A means for satisfying these MdD requirements for gradient information is presented which appears to permit: (1) some leeway in the CFD solution algorithms which can be used; (2) an extension to 3-D problems; and (3) straightforward use of other computational methodologies. Many of these observations have previously been discussed as possibilities for doing parts of the problem more efficiently; the contribution here is observing how they fit together in a mutually beneficial way.
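
    One baseline for the gradient information discussed above is forward finite differencing, whose cost of one extra flow solution per design variable is precisely what motivates cheaper alternatives. The quadratic "drag" function below is a stand-in for an expensive CFD-based objective; its name and coefficients are invented for illustration.

```python
def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient of scalar objective f at design point x."""
    f0 = f(x)  # one baseline solve
    grad = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        grad.append((f(xp) - f0) / h)  # one extra solve per design variable
    return grad

# Cheap stand-in for a CFD-based objective of two design variables.
def drag(x):
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

g = fd_gradient(drag, [0.0, 0.0])  # analytic gradient here is (-2, 2)
```

    With hundreds of design variables and 3-D CFD solves, this linear growth in solve count is prohibitive, which is why the paper examines methodologies that share work across disciplines and gradients.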

  10. Large-Scale PV Module Manufacturing Using Ultra-Thin Polycrystalline Silicon Solar Cells: Final Subcontract Report, 1 April 2002--28 February 2006

    SciTech Connect

    Wohlgemuth, J.; Narayanan, M.

    2006-07-01

    The major objectives of this program were to continue advances of BP Solar polycrystalline silicon manufacturing technology. The Program included work in the following areas. (1) Efforts in the casting area to increase ingot size, improve ingot material quality, and improve handling of silicon feedstock as it is loaded into the casting stations. (2) Developing wire saws to slice 100-µm-thick silicon wafers on 290-µm centers. (3) Developing equipment for demounting and subsequent handling of very thin silicon wafers. (4) Developing cell processes using 100-µm-thick silicon wafers that produce encapsulated cells with efficiencies of at least 15.4% at an overall yield exceeding 95%. (5) Expanding existing in-line manufacturing data reporting systems to provide active process control. (6) Establishing a 50-MW (annual nominal capacity) green-field Mega-plant factory model template based on this new thin polycrystalline silicon technology. (7) Facilitating an increase in the silicon feedstock industry's production capacity for lower-cost solar-grade silicon feedstock.

  11. USCAR LEP ESST Advanced Manufacturing

    SciTech Connect

    Lazarus, L.J.

    2000-09-25

    The objective of this task was to provide processing information data summaries on powder metallurgy (PM) alloys that meet the partner requirements for the production of low mass, highly accurate, near-net-shape powertrain components. This required modification to existing ISO machinability test procedures and development of a new drilling test procedure. These summaries could then be presented in a web page format. When combined with information generated from the USCAR CRADA this would allow chemical, metallurgical, and machining data on PM alloys to be available to all engineering and manufacturing personnel that have access to in-house networks. The web page format also allows for the additions of other wrought materials, making this a valuable tool to the technical staffs.

  12. Advanced Manufacturing of Superconducting Magnets

    NASA Technical Reports Server (NTRS)

    Senti, Mark W.

    1996-01-01

    The development of specialized materials, processes, and robotics technology allows for the rapid prototype and manufacture of superconducting and normal magnets which can be used for magnetic suspension applications. Presented are highlights of the Direct Conductor Placement System (DCPS) which enables automatic design and assembly of 3-dimensional coils and conductor patterns using LTS and HTS conductors. The system enables engineers to place conductors in complex patterns with greater efficiency and accuracy, and without the need for hard tooling. It may also allow researchers to create new types of coils and patterns which were never practical before the development of DCPS. The DCPS includes a custom designed eight-axis robot, patented end effector, CoilCAD(trademark) design software, RoboWire(trademark) control software, and automatic inspection.

  13. ‘Oorja’ in India: Assessing a large-scale commercial distribution of advanced biomass stoves to households

    PubMed Central

    Thurber, Mark C.; Phadke, Himani; Nagavarapu, Sriniketh; Shrimali, Gireesh; Zerriffi, Hisham

    2015-01-01

    Replacing traditional stoves with advanced alternatives that burn more cleanly has the potential to ameliorate major health problems associated with indoor air pollution in developing countries. With a few exceptions, large government and charitable programs to distribute advanced stoves have not had the desired impact. Commercially-based distributions that seek cost recovery and even profits might plausibly do better, both because they encourage distributors to supply and promote products that people want and because they are based around properly-incentivized supply chains that could be more scalable, sustainable, and replicable. The sale in India of over 400,000 “Oorja” stoves to households from 2006 onwards represents the largest commercially-based distribution of a gasification-type advanced biomass stove. BP's Emerging Consumer Markets (ECM) division and then successor company First Energy sold this stove and the pelletized biomass fuel on which it operates. We assess the success of this effort and the role its commercial aspect played in outcomes using a survey of 998 households in areas of Maharashtra and Karnataka where the stove was sold as well as detailed interviews with BP and First Energy staff. Statistical models based on this data indicate that Oorja purchase rates were significantly influenced by the intensity of Oorja marketing in a region as well as by pre-existing stove mix among households. The highest rate of adoption came from LPG-using households for which Oorja's pelletized biomass fuel reduced costs. Smoke- and health-related messages from Oorja marketing did not significantly influence the purchase decision, although they did appear to affect household perceptions about smoke. By the time of our survey, only 9% of households that purchased Oorja were still using the stove, the result in large part of difficulties First Energy encountered in developing a viable supply chain around low-cost procurement of “agricultural waste” to

  14. Parametric Evaluation of Large-Scale High-Temperature Electrolysis Hydrogen Production Using Different Advanced Nuclear Reactor Heat Sources

    SciTech Connect

    Edwin A. Harvego; Michael G. McKellar; James E. O'Brien; J. Stephen Herring

    2009-09-01

    High Temperature Electrolysis (HTE), when coupled to an advanced nuclear reactor capable of operating at reactor outlet temperatures of 800 °C to 950 °C, has the potential to efficiently produce the large quantities of hydrogen needed to meet future energy and transportation needs. To evaluate the potential benefits of nuclear-driven hydrogen production, the UniSim process analysis software was used to evaluate different reactor concepts coupled to a reference HTE process design concept. The reference HTE concept included an Intermediate Heat Exchanger and intermediate helium loop to separate the reactor primary system from the HTE process loops and additional heat exchangers to transfer reactor heat from the intermediate loop to the HTE process loops. The two process loops consisted of the water/steam loop feeding the cathode side of a HTE electrolysis stack, and the sweep gas loop used to remove oxygen from the anode side. The UniSim model of the process loops included pumps to circulate the working fluids and heat exchangers to recover heat from the oxygen and hydrogen product streams to improve the overall hydrogen production efficiencies. The reference HTE process loop model was coupled to separate UniSim models developed for three different advanced reactor concepts (a high-temperature helium cooled reactor concept and two different supercritical CO2 reactor concepts). Sensitivity studies were then performed to evaluate the effect of reactor outlet temperature on the power cycle efficiency and overall hydrogen production efficiency for each of the reactor power cycles. The results of these sensitivity studies showed that overall power cycle and hydrogen production efficiencies increased with reactor outlet temperature, but the power cycles producing the highest efficiencies varied depending on the temperature range considered.
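
    The "overall hydrogen production efficiency" tracked in the sensitivity studies reduces to a simple bookkeeping identity: hydrogen energy out (LHV basis) divided by reactor heat in, with electrical demand charged at the power-cycle efficiency. The function below is an illustrative reduction of that identity, not the UniSim flowsheet; the specific-energy figures in the example are placeholders.

    ```python
    def overall_h2_efficiency(eta_cycle, e_electric, q_thermal=0.0, lhv=33.3):
        """Overall hydrogen production efficiency on an LHV basis.

        eta_cycle  -- power-cycle (heat-to-electricity) efficiency
        e_electric -- electrolysis electrical demand, kWh per kg H2
        q_thermal  -- direct process heat demand, kWh per kg H2
        lhv        -- lower heating value of H2, ~33.3 kWh per kg

        Reactor heat consumed per kg H2 is e_electric/eta_cycle + q_thermal.
        """
        return lhv / (e_electric / eta_cycle + q_thermal)
    ```

    In this framing, raising reactor outlet temperature raises eta_cycle (and in HTE can also trim e_electric), which is why the abstract reports both efficiencies climbing with temperature.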

  15. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  16. Stitching Techniques Advance Optics Manufacturing

    NASA Technical Reports Server (NTRS)

    2010-01-01

    Because NASA depends on the fabrication and testing of large, high-quality aspheric (nonspherical) optics for applications like the James Webb Space Telescope, it sought an improved method for measuring large aspheres. Through Small Business Innovation Research (SBIR) awards from Goddard Space Flight Center, QED Technologies, of Rochester, New York, upgraded and enhanced its stitching technology for aspheres. QED developed the SSI-A, which earned the company an R&D 100 award, and also developed a breakthrough machine tool called the aspheric stitching interferometer. The equipment is applied to advanced optics in telescopes, microscopes, cameras, medical scopes, binoculars, and photolithography.

  17. Advances in recombinant antibody manufacturing.

    PubMed

    Kunert, Renate; Reinhart, David

    2016-04-01

    Since the first use of Chinese hamster ovary (CHO) cells for recombinant protein expression, production processes have steadily improved through numerous advances. In this review, we have highlighted several key milestones that have contributed to the success of CHO cells from the beginning of their use for monoclonal antibody (mAb) expression until today. The main factors influencing the yield of a production process are the time to accumulate a desired amount of biomass, the process duration, and the specific productivity. By comparing maximum cell densities and specific growth rates of various expression systems, we have emphasized the limiting parameters of different cellular systems and comprehensively described scientific approaches and techniques to improve host cell lines. Besides the quantitative evaluation of current systems, the quality-determining properties of a host cell line, namely post-translational modifications, were analyzed and compared to naturally occurring polyclonal immunoglobulin fractions from human plasma. In summary, numerous different expression systems for mAbs are available and also under scientific investigation. However, CHO cells are the most frequently investigated cell lines and remain the workhorse for mAb production until today. PMID:26936774

  18. Low-speed wind-tunnel tests of a large scale blended arrow advanced supersonic transport model having variable cycle engines and vectoring exhaust nozzles

    NASA Technical Reports Server (NTRS)

    Parlett, L. P.; Shivers, J. P.

    1976-01-01

    A low-speed wind-tunnel investigation was conducted in a full-scale tunnel to determine the performance and static stability and control characteristics of a large-scale model of a blended-arrow advanced supersonic transport configuration incorporating variable-cycle engines and vectoring exhaust nozzles. Configuration variables tested included: (1) engine mode (cruise or low-speed), (2) engine exit nozzle deflection, (3) leading-edge flap geometry, and (4) trailing-edge flap deflection. Test variables included values of the thrust coefficient Cμ from 0 to 0.38, angles of attack from -10 degrees to 30 degrees, angles of sideslip from -5 degrees to 5 degrees, and Reynolds numbers from 3.5 million to 6.8 million.

  19. Low-speed wind-tunnel investigation of a large scale advanced arrow-wing supersonic transport configuration with engines mounted above wing for upper-surface blowing

    NASA Technical Reports Server (NTRS)

    Shivers, J. P.; Mclemore, H. C.; Coe, P. L., Jr.

    1976-01-01

    Tests have been conducted in a full scale tunnel to determine the low speed aerodynamic characteristics of a large scale advanced arrow wing supersonic transport configuration with engines mounted above the wing for upper surface blowing. Tests were made over an angle of attack range of -10 deg to 32 deg, sideslip angles of + or - 5 deg, and a Reynolds number range of 3,530,000 to 7,330,000. Configuration variables included trailing edge flap deflection, engine jet nozzle angle, engine thrust coefficient, engine out operation, and asymmetrical trailing edge boundary layer control for providing roll trim. Downwash measurements at the tail were obtained for different thrust coefficients, tail heights, and at two fuselage stations.

  20. National Center for Advanced Manufacturing Overview

    NASA Technical Reports Server (NTRS)

    Vickers, John H.

    2000-01-01

    This paper presents a general overview of the National Center for Advanced Manufacturing, with an emphasis on Aerospace Materials, Processes and Environmental Technology. The topics include: 1) Background; 2) Mission; 3) Technology Development Approach; 4) Space Transportation Significance; 5) Partnering; 6) NCAM MAF Project; 7) NASA & Calhoun Community College; 8) Educational Development; and 9) Intelligent Synthesis Environment. This paper is presented in viewgraph form.

  1. Large Scale Computing

    NASA Astrophysics Data System (ADS)

    Capiluppi, Paolo

    2005-04-01

    Large Scale Computing is acquiring an important role in the field of data analysis and treatment for many Sciences and also for some Social activities. The present paper discusses the characteristics of Computing when it becomes "Large Scale" and the current state of the art for particular applications needing such large distributed resources and organization. High Energy Particle Physics (HEP) Experiments are discussed in this respect; in particular the Large Hadron Collider (LHC) Experiments are analyzed. The Computing Models of LHC Experiments represent the current prototype implementation of Large Scale Computing and describe the level of maturity of the possible deployment solutions. Some of the most recent results on the measurements of the performances and functionalities of the LHC Experiments' testing are discussed.

  2. Development of Advanced Ceramic Manufacturing Technology

    SciTech Connect

    Pujari, V.K.

    2001-04-05

    Advanced structural ceramics are enabling materials for new transportation engine systems that have the potential for significantly reducing energy consumption and pollution in automobiles and heavy vehicles. Ceramic component reliability and performance have been demonstrated in previous U.S. DOE initiatives, but high manufacturing cost was recognized as a major barrier to commercialization. Norton Advanced Ceramics (NAC), a division of Saint-Gobain Industrial Ceramics, Inc. (SGIC), was selected to perform a major Advanced Ceramics Manufacturing Technology (ACMT) Program. The overall objectives of NAC's program were to design, develop, and demonstrate advanced manufacturing technology for the production of ceramic exhaust valves for diesel engines. The specific objectives were (1) to reduce the manufacturing cost by an order of magnitude, (2) to develop and demonstrate process capability and reproducibility, and (3) to validate ceramic valve performance, durability, and reliability. The program was divided into four major tasks: Component Design and Specification, Component Manufacturing Technology Development, Inspection and Testing, and Process Demonstration. A high-power diesel engine valve for the DDC Series 149 engine was chosen as the demonstration part for this program. This was determined to be an ideal component type to demonstrate cost-effective process enhancements, the beneficial impact of advanced ceramics on transportation systems, and near-term commercialization potential. The baseline valve material was NAC's NT451 SiAlON. It was replaced, later in the program, by an alternate silicon nitride composition (NT551), which utilized a lower cost raw material and a simplified powder-processing approach. The material specifications were defined based on DDC's engine requirements, and the initial and final component design tasks were completed.

  3. Advanced Engineering Environments: Implications for Aerospace Manufacturing

    NASA Technical Reports Server (NTRS)

    Thomas, D.

    2001-01-01

    There are significant challenges facing today's aerospace industry. Global competition, more complex products, geographically-distributed design teams, demands for lower cost, higher reliability and safer vehicles, and the need to incorporate the latest technologies more quickly all face the developer of aerospace systems. New information technologies offer promising opportunities to develop advanced engineering environments (AEEs) to meet these challenges. Significant advances in the state-of-the-art of aerospace engineering practice are envisioned in the areas of engineering design and analytical tools, cost and risk tools, collaborative engineering, and high-fidelity simulations early in the development cycle. These advances will enable modeling and simulation of manufacturing methods, which will in turn allow manufacturing considerations to be included much earlier in the system development cycle. Significant cost savings, increased quality, and decreased manufacturing cycle time are expected to result. This paper will give an overview of NASA's Intelligent Synthesis Environment, the agency initiative to develop an AEE, with a focus on the anticipated benefits in aerospace manufacturing.

  4. Recent manufacturing advances for spiral bevel gears

    NASA Technical Reports Server (NTRS)

    Handschuh, Robert F.; Bill, Robert C.

    1991-01-01

    The U.S. Army Aviation Systems Command (AVSCOM), through the Propulsion Directorate at NASA LRC, has recently sponsored projects to advance the manufacturing process for spiral bevel gears. This type of gear is a critical component in rotary-wing propulsion systems. Two successfully completed contracted projects are described. The first project addresses the automated inspection of spiral bevel gears through the use of coordinate measuring machines. The second project entails the computer-numerical-control (CNC) conversion of a spiral bevel gear grinding machine that is used for all aerospace spiral bevel gears. The results of these projects are described with regard to the savings effected in manufacturing time.

  5. Recent manufacturing advances for spiral bevel gears

    NASA Technical Reports Server (NTRS)

    Handschuh, Robert F.; Bill, Robert C.

    1991-01-01

    The U.S. Army Aviation Systems Command (AVSCOM), through the Propulsion Directorate at NASA Lewis Research Center, has recently sponsored projects to advance the manufacturing process for spiral bevel gears. This type of gear is a critical component in rotary-wing propulsion systems. Two successfully completed contracted projects are described. The first project addresses the automated inspection of spiral bevel gears through the use of coordinate measuring machines. The second project entails the computer-numerical-control (CNC) conversion of a spiral bevel gear grinding machine that is used for all aerospace spiral bevel gears. The results of these projects are described with regard to the savings effected in manufacturing time.

  6. Large-Scale Disasters

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  7. National Center for Advanced Manufacturing Overview

    NASA Technical Reports Server (NTRS)

    Vickers, J.

    2001-01-01

    The National Center for Advanced Manufacturing (NCAM) is a strategy, organization, and partnership focused on long-term technology development. The NCAM initially will be a regional partnership, however the intent is national in scope. Benchmarking is needed to follow the concept to the finished project, not using trial and error. Significant progress has been made to date, and NCAM is setting the vision for the future.

  8. NASA's National Center for Advanced Manufacturing

    NASA Technical Reports Server (NTRS)

    Vickers, John

    2003-01-01

    NASA has designated the Principal Center Assignment to the Marshall Space Flight Center (MSFC) for implementation of the National Center for Advanced Manufacturing (NCAM). NCAM is NASA's leading resource for the aerospace manufacturing research, development, and innovation needs that are critical to the goals of the Agency. Through this initiative NCAM's people work together with government, industry, and academia to ensure the technology base and national infrastructure are available to develop innovative manufacturing technologies with broad application to NASA Enterprise programs, and U.S. industry. Educational enhancements are ever-present within the NCAM focus to promote research, to inspire participation and to support education and training in manufacturing. Many important accomplishments took place during 2002. Through NCAM, NASA was among five federal agencies involved in manufacturing research and development (R&D) to launch a major effort to exchange information and cooperate directly to enhance the payoffs from federal investments. The Government Agencies Technology Exchange in Manufacturing (GATE-M) is the only active effort to specifically and comprehensively address manufacturing R&D across the federal government. Participating agencies include the departments of Commerce (represented by the National Institute of Standards and Technology), Defense, and Energy, as well as the National Science Foundation and NASA. MSFC's ongoing partnership with the State of Louisiana, the University of New Orleans, and Lockheed Martin Corporation at the Michoud Assembly Facility (MAF) progressed significantly. Major capital investments were initiated for world-class equipment additions including a universal friction stir welding system, composite fiber placement machine, five-axis machining center, and ten-axis laser ultrasonic nondestructive test system. The NCAM consortium of five universities led by University of New Orleans with Mississippi State University

  9. Cooking practices, air quality, and the acceptability of advanced cookstoves in Haryana, India: an exploratory study to inform large-scale interventions

    PubMed Central

    Mukhopadhyay, Rupak; Sambandam, Sankar; Pillarisetti, Ajay; Jack, Darby; Mukhopadhyay, Krishnendu; Balakrishnan, Kalpana; Vaswani, Mayur; Bates, Michael N.; Kinney, Patrick L.; Arora, Narendra; Smith, Kirk R.

    2012-01-01

    Background In India, approximately 66% of households rely on dung or woody biomass as fuels for cooking. These fuels are burned under inefficient conditions, leading to household air pollution (HAP) and exposure to smoke containing toxic substances. Large-scale intervention efforts need to be informed by careful piloting to address multiple methodological and sociocultural issues. This exploratory study provides preliminary data for such an exercise from Palwal District, Haryana, India. Methods Traditional cooking practices were assessed through semi-structured interviews in participating households. Philips and Oorja, two brands of commercially available advanced cookstoves with small blowers to improve combustion, were deployed in these households. Concentrations of particulate matter (PM) with a diameter <2.5 μm (PM2.5) and carbon monoxide (CO) related to traditional stove use were measured using real-time and integrated personal, microenvironmental samplers for optimizing protocols to evaluate exposure reduction. Qualitative data on acceptability of advanced stoves and objective measures of stove usage were also collected. Results Twenty-eight of the thirty-two participating households had outdoor primary cooking spaces. Twenty households had liquefied petroleum gas (LPG) but preferred traditional stoves as the cost of LPG was higher and because meals cooked on traditional stoves were perceived to taste better. Kitchen area concentrations and kitchen personal concentrations assessed during cooking events were very high, with respective mean PM2.5 concentrations of 468 and 718 µg/m3. Twenty-four hour outdoor concentrations averaged 400 µg/m3. Twenty-four hour personal CO concentrations ranged between 0.82 and 5.27 ppm. The Philips stove was used more often and for more hours than the Oorja. Conclusions The high PM and CO concentrations reinforce the need for interventions that reduce HAP exposure in the aforementioned community. Of the two stoves tested

  10. Advanced Algorithms and High-Performance Testbed for Large-Scale Site Characterization and Subsurface Target Detecting Using Airborne Ground Penetrating SAR

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Collier, James B.; Citak, Ari

    1997-01-01

    A team of US Army Corps of Engineers, Omaha District and Engineering and Support Center, Huntsville, Jet Propulsion Laboratory (JPL), Stanford Research Institute (SRI), and Montgomery Watson is currently in the process of planning and conducting the largest ever survey at the Former Buckley Field (60,000 acres), in Colorado, by using SRI airborne, ground penetrating, Synthetic Aperture Radar (SAR). The purpose of this survey is the detection of surface and subsurface Unexploded Ordnance (UXO) and in a broader sense the site characterization for identification of contaminated as well as clear areas. In preparation for such a large-scale survey, JPL has been developing advanced algorithms and a high-performance testbed for processing of the massive amounts of expected SAR data from this site. Two key requirements of this project are the accuracy (in terms of UXO detection) and speed of SAR data processing. The first key feature of this testbed is a large degree of automation and a minimum degree of the need for human perception in the processing to achieve an acceptable processing rate of several hundred acres per day. For accurate UXO detection, novel algorithms have been developed and implemented. These algorithms analyze dual polarized (HH and VV) SAR data. They are based on the correlation of HH and VV SAR data and involve a rather large set of parameters for accurate detection of UXO. For each specific site, this set of parameters can be optimized by using ground truth data (i.e., known surface and subsurface UXOs). In this paper, we discuss these algorithms and their successful application for detection of surface and subsurface anti-tank mines by using a data set from Yuma Proving Ground, AZ, acquired by SRI SAR.

  11. Advanced algorithms and high-performance testbed for large-scale site characterization and subsurface target detection using airborne ground-penetrating SAR

    NASA Astrophysics Data System (ADS)

    Fijany, Amir; Collier, James B.; Citak, Ari

    1999-08-01

    A team of US Army Corps of Engineers, Omaha District and Engineering and Support Center, Huntsville, JPL, Stanford Research Institute (SRI), and Montgomery Watson is currently in the process of planning and conducting the largest ever survey at the Former Buckley Field, in Colorado, by using SRI airborne, ground penetrating, SAR. The purpose of this survey is the detection of surface and subsurface Unexploded Ordnance (UXO) and in a broader sense the site characterization for identification of contaminated as well as clear areas. In preparation for such a large-scale survey, JPL has been developing advanced algorithms and a high-performance testbed for processing of the massive amounts of expected SAR data from this site. Two key requirements of this project are the accuracy and speed of SAR data processing. The first key feature of this testbed is a large degree of automation and a minimum degree of the need for human perception in the processing to achieve an acceptable processing rate of several hundred acres per day. For accurate UXO detection, novel algorithms have been developed and implemented. These algorithms analyze dual polarized SAR data. They are based on the correlation of HH and VV SAR data and involve a rather large set of parameters for accurate detection of UXO. For each specific site, this set of parameters can be optimized by using ground truth data. In this paper, we discuss these algorithms and their successful application for detection of surface and subsurface anti-tank mines by using a data set from Yuma Proving Ground, AZ, acquired by SRI SAR.
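
    The HH/VV correlation screening described in these two abstracts can be illustrated with a small sketch: a sliding-window Pearson correlation of the two polarization channels. This is a generic stand-in, not the JPL implementation; the window size and the function name are illustrative assumptions.

    ```python
    import numpy as np

    def local_hh_vv_correlation(hh, vv, win=5):
        """Sliding-window Pearson correlation of dual-polarized (HH, VV)
        SAR magnitude images. High local correlation between the channels
        serves here as a simple screening statistic; border pixels closer
        than half a window to the edge are left at 0."""
        hh = np.asarray(hh, float)
        vv = np.asarray(vv, float)
        r = win // 2
        out = np.zeros_like(hh)
        for i in range(r, hh.shape[0] - r):
            for j in range(r, hh.shape[1] - r):
                a = hh[i - r:i + r + 1, j - r:j + r + 1].ravel()
                b = vv[i - r:i + r + 1, j - r:j + r + 1].ravel()
                a = a - a.mean()
                b = b - b.mean()
                denom = np.sqrt((a @ a) * (b @ b))
                out[i, j] = (a @ b) / denom if denom > 0 else 0.0
        return out
    ```

    Pixels whose statistic exceeds a site-tuned threshold would then be flagged; that threshold is one of the parameters the abstracts describe optimizing against ground-truth UXO locations.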

  12. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
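
    The combinatorial-explosion point above can be made concrete with a toy gated nearest-neighbor association step, the cheap alternative to a multi-hypothesis tracker: each track only considers detections inside a distance gate, so candidate pairings stay few. The function and gate value are illustrative, not taken from the report.

    ```python
    import math

    def gated_nearest_neighbor(tracks, detections, gate):
        """Greedy data association: each predicted track position (x, y)
        claims the nearest unclaimed detection within `gate`. A
        multi-hypothesis tracker would instead enumerate many joint
        track-to-detection assignments, which is what grows
        combinatorially with the number of potential targets."""
        assigned = []
        used = set()
        for t_idx, (tx, ty) in enumerate(tracks):
            best, best_d = None, gate
            for d_idx, (dx, dy) in enumerate(detections):
                if d_idx in used:
                    continue
                d = math.hypot(tx - dx, ty - dy)
                if d < best_d:
                    best, best_d = d_idx, d
            if best is not None:
                assigned.append((t_idx, best))
                used.add(best)
        return assigned
    ```

    For closely spaced objects, exactly the scenario the report calls most challenging, this greedy scheme starts making irreversible wrong pairings, which is why the heavier algorithms remain of interest.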

  13. Large scale traffic simulations

    SciTech Connect

    Nagel, K.; Barrett, C.L.; Rickert, M.

    1997-04-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual person's behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed of much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches are dependent both on the specific questions and on the prospective user community. The approaches reach from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for Statistical Physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.
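
    The vehicle-based "particle" update being counted here can be sketched with the classic single-lane cellular-automaton rule associated with the first author (the Nagel-Schreckenberg model); the road length, vmax, and slowdown probability below are illustrative defaults.

    ```python
    import random

    def nasch_step(road, vmax=5, p=0.25, rng=random):
        """One update of the Nagel-Schreckenberg rule on a circular road.
        road[i] is -1 for an empty cell, otherwise the velocity of the
        vehicle occupying cell i. Each vehicle accelerates, brakes to its
        gap, randomly slows with probability p, then advances."""
        n = len(road)
        cars = [i for i, v in enumerate(road) if v >= 0]
        nxt = [-1] * n
        for k, i in enumerate(cars):
            gap = (cars[(k + 1) % len(cars)] - i - 1) % n  # cells to car ahead
            v = min(road[i] + 1, vmax)   # accelerate
            v = min(v, gap)              # brake: never enter an occupied cell
            if v > 0 and rng.random() < p:
                v -= 1                   # random slowdown
            nxt[(i + v) % n] = v
        return nxt
    ```

    A city-scale real-time run is then just this update applied to on the order of a million vehicles several times per simulated second, which motivates the single-bit and parallel implementations the paper surveys.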

  14. Precision Casting via Advanced Simulation and Manufacturing

    NASA Technical Reports Server (NTRS)

    1997-01-01

    A two-year program was conducted to develop and commercially implement selected casting manufacturing technologies to enable significant reductions in the costs of castings, increase the complexity and dimensional accuracy of castings, and reduce the development times for delivery of high quality castings. The industry-led R&D project was cost shared with NASA's Aerospace Industry Technology Program (AITP). The Rocketdyne Division of Boeing North American, Inc. served as the team lead with participation from Lockheed Martin, Ford Motor Company, Howmet Corporation, PCC Airfoils, General Electric, UES, Inc., University of Alabama, Auburn University, Robinson, Inc., Aracor, and NASA-LeRC. The technical effort was organized into four distinct tasks, whose accomplishments are reported herein. Task 1.0 developed advanced simulation technology for core molding. Ford headed up this task. On this program, a specialized core machine was designed and built. Task 2.0 focused on intelligent process control for precision core molding. Howmet led this effort. The primary focus of these experimental efforts was to characterize the process parameters that have a strong impact on dimensional control issues of injection molded cores during their fabrication. Task 3.0 developed and applied rapid prototyping to produce near net shape castings. Rocketdyne was responsible for this task. CAD files were generated using reverse engineering, rapid prototype patterns were fabricated using SLS and SLA, and castings produced and evaluated. Task 4.0 was aimed at developing technology transfer. Rocketdyne coordinated this task. Casting related technology, explored and evaluated in the first three tasks of this program, was implemented into manufacturing processes.

  15. Measurement of Unsteady Blade Surface Pressure on a Single Rotation Large Scale Advanced Prop-fan with Angular and Wake Inflow at Mach Numbers from 0.02 to 0.70

    NASA Technical Reports Server (NTRS)

    Bushnell, P.; Gruber, M.; Parzych, D.

    1988-01-01

    Unsteady blade surface pressure data for the Large-Scale Advanced Prop-Fan (LAP) blade operating with angular inflow, wake inflow, and uniform flow over a range of inflow Mach numbers from 0.02 to 0.70 are provided. The data are presented as Fourier coefficients for the first 35 harmonics of shaft rotational frequency. Also presented is a brief discussion of the unsteady blade response observed at takeoff and cruise conditions with angular and wake inflow.
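
    Reducing a periodic blade-pressure record to shaft-order Fourier coefficients, the form in which both this data set and the PTA flight data below are tabulated, can be sketched as follows. The sampling assumption (uniform samples over exactly one revolution) and the function name are illustrative, not the LAP data-reduction procedure.

    ```python
    import numpy as np

    def shaft_order_coefficients(pressure, n_harmonics=35):
        """Complex Fourier coefficients at harmonics 1..n_harmonics of
        shaft rotational frequency, from uniformly spaced pressure samples
        covering exactly one revolution. The peak amplitude of harmonic k
        is 2 * abs(c[k - 1]); the steady (non-varying) component is simply
        the mean of the record."""
        x = np.asarray(pressure, float)
        c = np.fft.rfft(x) / len(x)  # one-sided spectrum, normalized
        return c[1:n_harmonics + 1]
    ```

    A record sampled at N points per revolution resolves at most N // 2 harmonics, so capturing 35 shaft orders requires at least 70 samples per revolution.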

  16. Organizational Considerations for Advanced Manufacturing Technology

    ERIC Educational Resources Information Center

    DeRuntz, Bruce D.; Turner, Roger M.

    2003-01-01

    In the last several decades, the United States has experienced a decline in productivity, while the world has seen a maturation of the global marketplace. Nations have moved manufacturing strategy and process technology issues to the top of management priority lists. The issues surrounding manufacturing technologies and their implementations have…

  17. In flight measurement of steady and unsteady blade surface pressure of a single rotation large scale advanced prop-fan installed on the PTA aircraft

    NASA Technical Reports Server (NTRS)

    Parzych, D.; Boyd, L.; Meissner, W.; Wyrostek, A.

    1991-01-01

    An experiment was performed by Hamilton Standard, Division of United Technologies Corporation, under contract to LeRC, to measure the blade surface pressure of a large scale, 8 blade model prop-fan in flight. The test bed was the Gulfstream 2 Prop-Fan Test Assessment (PTA) aircraft. The objective of the test was to measure the steady and periodic blade surface pressure resulting from three different Prop-Fan air inflow angles at various takeoff and cruise conditions. The inflow angles were obtained by varying the nacelle tilt angles, which ranged from -3 to +2 degrees. A range of power loadings, tip speeds, and altitudes were tested at each nacelle tilt angle over the flight Mach number range of 0.30 to 0.80. Unsteady blade pressure data tabulated as Fourier coefficients for the first 35 harmonics of shaft rotational frequency and the steady (non-varying) pressure component are presented.

  18. Manufacture of radiopharmaceuticals-recent advances

    SciTech Connect

    Krieger, J.K.

    1996-12-31

    Trends in radiopharmaceutical manufacturing have been influenced by the demands of the regulatory agencies, the demands of the customers, and the ever-increasing complexity of new products. Process improvements resulting from automation in the production of radionuclides for diagnostic imaging products, ⁹⁹ᵐTc generators, ⁶⁷Ga, and ²⁰¹Tl have been introduced to enhance compliance with current good manufacturing practices and to improve worker safety, both by reducing dose in accord with as-low-as-reasonably-achievable levels of radiation and by providing an ergonomically sound environment. Tighter process control has resulted in less lot-to-lot variability and ensures reliability of supply. Reduced manufacturing lapse time for ⁹⁹ᵐTc generators minimizes decay and conserves the supply of ⁹⁹Mo. Automation has resulted in an even greater degree of remote operation and has led to reductions in dose, improved process control, and faster throughput in the manufacture of radionuclides.

  19. Advances in solid dosage form manufacturing technology.

    PubMed

    Andrews, Gavin P

    2007-12-15

    Currently, the pharmaceutical and healthcare industries are moving through a period of unparalleled change. Major multinational pharmaceutical companies are restructuring, consolidating, merging and, more importantly, critically assessing their competitiveness to ensure constant growth in an ever-more demanding market where the cost of developing novel products is continuously increasing. The pharmaceutical manufacturing processes currently in existence for the production of solid oral dosage forms are associated with significant disadvantages and in many instances present serious processing problems. Therefore, it is well accepted that there is an increasing need for alternative processes to dramatically improve powder processing, and more importantly to ensure that acceptable, reproducible solid dosage forms can be manufactured. Consequently, pharmaceutical companies are beginning to invest in innovative processes capable of producing solid dosage forms that better meet the needs of the patient while providing efficient manufacturing operations. This article discusses two emerging solid dosage form manufacturing technologies, namely hot-melt extrusion and fluidized hot-melt granulation. PMID:17855217

  20. Advances in gamma titanium aluminides and their manufacturing techniques

    NASA Astrophysics Data System (ADS)

    Kothari, Kunal; Radhakrishnan, Ramachandran; Wereley, Norman M.

    2012-11-01

    Gamma titanium aluminides display attractive properties for high temperature applications. For over a decade in the 1990s, the attractive properties of titanium aluminides were outweighed by difficulties encountered in processing and machining at room temperature. But advances in manufacturing technologies, a deeper understanding of titanium aluminide microstructure and deformation mechanisms, and advances in micro-alloying have led to the production of gamma titanium aluminide sheets. An in-depth review of key advances in gamma titanium aluminides is presented, including microstructure, deformation mechanisms, and alloy development. Traditional manufacturing techniques such as ingot metallurgy and investment casting are reviewed and advances via powder metallurgy based manufacturing techniques are discussed. Finally, manufacturing challenges facing gamma titanium aluminides, as well as avenues to overcome them, are discussed.

  1. Advanced Blade Manufacturing Project - Final Report

    SciTech Connect

    POORE, ROBERT Z.

    1999-08-01

    The original scope of the project was to research improvements to the processes and materials used in the manufacture of wood-epoxy blades, conduct tests to qualify any new material or processes for use in blade design and subsequently build and test six blades using the improved processes and materials. In particular, ABM was interested in reducing blade cost and improving quality. In addition, ABM needed to find a replacement material for the mature Douglas fir used in the manufacturing process. The use of mature Douglas fir is commercially unacceptable because of its limited supply and environmental concerns associated with the use of mature timber. Unfortunately, the bankruptcy of FloWind in June 1997 and a dramatic reduction in AWT sales made it impossible for ABM to complete the full scope of work. However, sufficient research and testing were completed to identify several promising changes in the blade manufacturing process and develop a preliminary design incorporating these changes.

  2. Evaluation of advanced polymers for additive manufacturing

    SciTech Connect

    Rios, Orlando; Morrison, Crystal

    2015-09-01

    The goal of this Manufacturing Demonstration Facility (MDF) technical collaboration project between Oak Ridge National Laboratory (ORNL) and PPG Industries, Inc. was to evaluate the feasibility of using conventional coatings chemistry and technology to build up material layer-by-layer. The PPG-ORNL study successfully demonstrated that polymeric coatings formulations may overcome many limitations of common thermoplastics used in additive manufacturing (AM), allow lightweight nozzle design for material deposition and increase build rate. The materials effort focused on layer-by-layer deposition of coatings with each layer fusing together. The combination of materials and deposition results in an additively manufactured build that has sufficient mechanical properties to bear the load of additional layers, yet is capable of bonding across the z-layers to improve build direction strength. The formulation properties were tuned to enable a novel, high-throughput deposition method that is highly scalable, compatible with high loading of reinforcing fillers, and is inherently low-cost.

  3. Advanced manufacturing technologies on color plasma displays

    NASA Astrophysics Data System (ADS)

    Betsui, Keiichi

    2000-06-01

    The mass production of color plasma displays started in 1996. However, since the panels are still expensive, PDPs are not in widespread use at home. It is necessary to develop new, low-cost manufacturing technologies to reduce the price of the panel. This paper describes some features of new fabrication technologies for PDPs.

  4. Energy intensity, electricity consumption, and advanced manufacturing-technology usage

    SciTech Connect

    Doms, M.E.; Dunne, T.

    1995-07-01

    This article reports on the relationship between the usage of advanced manufacturing technologies (AMTs) and energy consumption patterns in manufacturing plants. Using data from the Survey of Manufacturing Technology and the 1987 Census of Manufactures, we model the energy intensity and the electricity intensity of plants as functions of AMT usage and plant age. The main findings are that plants that utilize AMTs are less-energy intensive than plants not using AMTs, but consume proportionately more electricity as a fuel source. Additionally, older plants are generally more energy intensive and rely on fossil fuels to a greater extent than younger plants. 25 refs., 3 tabs.
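
    The modeling approach described (energy or electricity intensity as a function of AMT usage and plant age) can be sketched as an ordinary least-squares regression. The data below are synthetic, constructed only to mirror the direction of the paper's findings, not drawn from the Survey of Manufacturing Technology.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    amt = rng.integers(0, 2, n)      # 1 = plant uses AMTs
    age = rng.uniform(1, 40, n)      # plant age in years
    # Synthetic outcome built to mimic the reported pattern: AMT plants are
    # less energy intensive; older plants are more energy intensive.
    log_energy = 2.0 - 0.3 * amt + 0.01 * age + rng.normal(0, 0.2, n)

    # OLS fit of log intensity on an intercept, the AMT dummy, and age.
    X = np.column_stack([np.ones(n), amt, age])
    beta, *_ = np.linalg.lstsq(X, log_energy, rcond=None)
    print(np.round(beta, 3))  # intercept, AMT effect (negative), age effect (positive)
    ```
    
    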

  5. A review of advanced manufacturing technology

    NASA Astrophysics Data System (ADS)

    Broughton, T.

    1981-03-01

    Joining techniques, hot forming technology, forging technology, investment casting, small cooling hole manufacturing, combustor technology, quality assurance, and chip forming machining of gas turbine engine components are discussed. Electron and laser beam welding; laser hard-facing techniques; automatic TIG and plasma welding; diffusion brazing of titanium and nickel alloys; heated die forming; blow forming; superplastic forming; fan and compressor blade forging; and wheel and disk forging from powder superalloys are described.

  6. Open architecture controllers for advanced manufacturing

    SciTech Connect

    Gore, R.A.

    1994-03-01

    The application of intelligent control systems to the real world of machining and manufacturing will benefit from the presence of open architecture control systems on the machines or the processes. The ability to modify the control system as the process or product changes can be essential to the success of the application of neural net or fuzzy logic controllers. The effort at Los Alamos to obtain a commercially available open architecture machine tool controller is described.

  7. Intelligent multi-agent coordination system for advanced manufacturing

    NASA Astrophysics Data System (ADS)

    Maturana, Francisco P.; Balasubramanian, Sivaram; Norrie, Douglas H.

    1997-12-01

    Global competition and rapidly changing customer requirements are forcing major changes in the production styles and configuration of manufacturing organizations. Agent-based systems are showing considerable potential as a new paradigm for agile manufacturing systems. With this approach, centralized and sequential manufacturing planning, scheduling, and control systems may be replaced by distributed intelligent systems to facilitate flexible and rapid response to changing production styles and variations in product requirements. In this paper, the characteristics and components of such a multi-agent architecture for advanced manufacturing are described. This architecture addresses agility in terms of the ability of the manufacturing system to solve manufacturing tasks using virtual enterprise mechanisms while maintaining concurrent information processing and control.

  8. Galaxy clustering on large scales.

    PubMed

    Efstathiou, G

    1993-06-01

    I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real-space and redshift-space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳ 20 h⁻¹ Mpc, where the Hubble constant H₀ = 100h km·s⁻¹·Mpc⁻¹; 1 pc = 3.09 × 10¹⁶ m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe. PMID:11607400
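
    The correlation functions discussed above are estimated from pair counts. A minimal sketch of the natural estimator ξ(r) = DD/RR − 1, run here on synthetic unclustered points (purely illustrative, not survey data, so ξ should come out near zero):

    ```python
    import numpy as np

    def pair_counts(points, edges):
        """Histogram of pairwise separations for a set of 3-D positions."""
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        d = d[np.triu_indices(len(points), k=1)]  # unique pairs only
        return np.histogram(d, bins=edges)[0]

    rng = np.random.default_rng(1)
    box, n = 100.0, 400
    data = rng.uniform(0, box, (n, 3))   # stand-in "galaxy" catalogue
    rand = rng.uniform(0, box, (n, 3))   # unclustered comparison catalogue
    edges = np.linspace(5, 50, 10)       # separation bins

    dd = pair_counts(data, edges)
    rr = pair_counts(rand, edges)
    xi = dd / rr - 1.0                   # natural estimator DD/RR - 1
    print(np.round(xi, 2))               # scatters around 0 for unclustered points
    ```

    Production analyses typically use the Landy-Szalay estimator and tree-based pair counting rather than this O(N²) broadcast, but the logic is the same.
    
    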

  9. Very Large Scale Integration (VLSI).

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  10. Large-scale planar lightwave circuits

    NASA Astrophysics Data System (ADS)

    Bidnyk, Serge; Zhang, Hua; Pearson, Matt; Balakrishnan, Ashok

    2011-01-01

    By leveraging advanced wafer processing and flip-chip bonding techniques, we have succeeded in hybrid integrating a myriad of active optical components, including photodetectors and laser diodes, with our planar lightwave circuit (PLC) platform. We have combined hybrid integration of active components with monolithic integration of other critical functions, such as diffraction gratings, on-chip mirrors, mode-converters, and thermo-optic elements. Further process development has led to the integration of polarization controlling functionality. Most recently, all these technological advancements have been combined to create large-scale planar lightwave circuits that comprise hundreds of optical elements integrated on chips less than a square inch in size.

  11. Materials/manufacturing element of the Advanced Turbine Systems Program

    SciTech Connect

    Karnitz, M.A.; Holcomb, R.S.; Wright, I.G.

    1995-10-01

    The technology based portion of the Advanced Turbine Systems Program (ATS) contains several subelements which address generic technology issues for land-based gas-turbine systems. One subelement is the Materials/Manufacturing Technology Program which is coordinated by DOE-Oak Ridge Operations and Oak Ridge National Laboratory (ORNL). The work in this subelement is being performed predominantly by industry with assistance from universities and the national laboratories. Projects in this subelement are aimed toward hastening the incorporation of new materials and components in gas turbines. A materials/manufacturing plan was developed in FY 1994 with input from gas turbine manufacturers, materials suppliers, universities, and government laboratories. The plan outlines seven major subelements which focus on materials issues and manufacturing processes. Work is currently under way in four of the seven major subelements. There are now major projects on coatings and process development, scale-up of single crystal airfoil manufacturing technology, materials characterization, and technology information exchange.

  12. 76 FR 43983 - Request for Information on How To Structure Proposed New Program: Advanced Manufacturing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-22

    ... Program: Advanced Manufacturing Technology Consortia (AMTech) AGENCY: National Institute of Standards and... structure a new public-private partnership program, the Advanced Manufacturing Technology Consortia (AMTech... or newly created industry-led consortia to develop precompetitive enabling manufacturing...

  13. Evaluation of targeted therapies in advanced breast cancer: the need for large-scale molecular screening and transformative clinical trial designs.

    PubMed

    Fadoukhair, Z; Zardavas, D; Chad, M A; Goulioti, T; Aftimos, P; Piccart, M

    2016-04-01

    Breast cancer (BC) has been classified into four intrinsic subtypes through seminal studies employing gene expression profiling analysis of primary tumours, namely the luminal A and B subtypes, the human epidermal growth factor receptor 2-like subtype and the basal-like subtype. More recently, the emergence of high-throughput genomic sequencing techniques, such as next-generation or massive parallel sequencing has expanded our understanding of the complex genomic landscapes of BC, with marked intertumour heterogeneity seen among different patients. In addition, increasing evidence indicates intratumour heterogeneity, with molecular differences observed within one patient, both spatially and longitudinally. These phenomena have an impact on the clinical development of molecularly targeted agents, with the classical paradigm of population-based clinical trials being no longer efficient. In the era of genomically driven oncology, three complementary tools can accelerate the clinical development of targeted agents for advanced BC as follows: (i) the implementation of molecular profiling of metastatic tumour lesions, as exemplified by the AURORA (Aiming to Understand the Molecular Aberrations in Metastatic Breast Cancer) programme; (ii) serial assessments of circulating tumour DNA, allowing a more thorough molecular interrogation of metastatic tumour burden; and (iii) new innovative clinical trial designs able to address the challenges of the increasing molecular fragmentation of BC. PMID:26119941

  14. Soft computing in design and manufacturing of advanced materials

    NASA Technical Reports Server (NTRS)

    Cios, Krzysztof J.; Baaklini, George Y; Vary, Alex

    1993-01-01

    The potential of fuzzy sets and neural networks, often referred to as soft computing, for aiding in all aspects of manufacturing of advanced materials like ceramics is addressed. In design and manufacturing of advanced materials, it is desirable to find which of the many processing variables contribute most to the desired properties of the material. There is also interest in real time quality control of parameters that govern material properties during processing stages. The concepts of fuzzy sets and neural networks are briefly introduced and it is shown how they can be used in the design and manufacturing processes. These two computational methods are alternatives to other methods such as the Taguchi method. The two methods are demonstrated by using data collected at NASA Lewis Research Center. Future research directions are also discussed.
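
    As a toy illustration of the fuzzy-set side of this approach, a triangular membership function can grade a processing variable against overlapping linguistic categories. The temperature ranges below are hypothetical, not from the NASA Lewis data:

    ```python
    def triangular(x, a, b, c):
        """Triangular fuzzy membership: 0 at a and c, rising to 1 at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # Hypothetical fuzzy sets for a ceramic sintering temperature (deg C).
    temp = 1480
    low = triangular(temp, 1300, 1400, 1500)
    optimal = triangular(temp, 1400, 1500, 1600)
    high = triangular(temp, 1500, 1600, 1700)
    print(low, optimal, high)  # 1480 C is partly "low", mostly "optimal"
    ```

    A fuzzy controller would combine such memberships through if-then rules and defuzzify the result into a setpoint adjustment.
    
    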

  15. Sulfur-free cleaning strategy for advanced mask manufacturing

    NASA Astrophysics Data System (ADS)

    Kindt, Louis; Watts, Andrew; Burnham, Jay; Aaskov, William

    2006-10-01

    Existing cleaning technology using sulfuric acid based chemistry has served the mask industry quite well over the years. However, residue on mask surfaces is becoming more and more of a problem at the high-energy wavelengths used in lithography tools for wafer manufacturing. This is evidenced by the emergence of sub-pellicle defect growth and backside hazing issues. A large source of residual contamination on the surface of masks is the mask manufacturing process, particularly the cleaning portion involving sulfuric acid. Cleaning strategies can be developed that eliminate the use of sulfuric acid in the cleaning process for advanced photomasks, and alternative processes can be used for cleaning masks at various stages of the manufacturing process. Implementation of these new technologies into manufacturing will be discussed, as will the resulting improvements, advantages, and disadvantages over pre-existing mask cleaning processes.

  16. Composite intermediate case manufacturing scale-up for advanced engines

    NASA Technical Reports Server (NTRS)

    Ecklund, Rowena H.

    1992-01-01

    This Manufacturing Technology for Propulsion Program developed a process to produce a composite intermediate case for advanced gas turbine engines. The method selected to manufacture this large, complex part uses hard tooling for surfaces in the airflow path and trapped rubber to force the composite against the mold. Subelements were manufactured and tested to verify the selected design, tools, and processes. The most significant subelement produced was a half-scale version of a composite intermediate case. The half-scale subelement maintained the geometry and key dimensions of the full-scale case, allowing relevant process development and structural verification testing to be performed on the subelement before manufacturing the first full-scale case.

  17. Microfluidic large-scale integration.

    PubMed

    Thorsen, Todd; Maerkl, Sebastian J; Quake, Stephen R

    2002-10-18

    We developed high-density microfluidic chips that contain plumbing networks with thousands of micromechanical valves and hundreds of individually addressable chambers. These fluidic devices are analogous to electronic integrated circuits fabricated using large-scale integration. A key component of these networks is the fluidic multiplexor, which is a combinatorial array of binary valve patterns that exponentially increases the processing power of a network by allowing complex fluid manipulations with a minimal number of inputs. We used these integrated microfluidic networks to construct the microfluidic analog of a comparator array and a microfluidic memory storage device whose behavior resembles random-access memory. PMID:12351675
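
    The multiplexor logic can be sketched in a few lines: with complementary control-line pairs, 2·log₂(N) control lines suffice to address N flow channels, which is the exponential gain described above. The addressing scheme below is a schematic model, not the fabrication layout:

    ```python
    from math import log2

    def control_pattern(channel, n_bits):
        """Which control lines to pressurize to select one flow channel.

        Each address bit has a complementary pair of control lines;
        pressurizing line (bit, 0) closes every channel whose bit is 0, and
        (bit, 1) closes every channel whose bit is 1. Selecting a channel
        means closing all the others, so for each bit we pressurize the line
        matching the complement of the target channel's bit.
        """
        return [(bit, 1 - ((channel >> bit) & 1)) for bit in range(n_bits)]

    n_channels = 1024
    n_bits = int(log2(n_channels))
    lines = control_pattern(channel=613, n_bits=n_bits)
    print(2 * n_bits, "control lines address", n_channels, "channels")
    ```
    
    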

  18. The Effect of the Implementation of Advanced Manufacturing Technologies on Training in the Manufacturing Sector

    ERIC Educational Resources Information Center

    Castrillon, Isabel Dieguez; Cantorna, Ana I. Sinde

    2005-01-01

    Purpose: The aim of this article is to gain insight into some of the factors that determine personnel-training efforts in companies introducing advanced manufacturing technologies (AMTs). The study provides empirical evidence from a sector with high rates of technological modernisation. Design/methodology/approach: "Ad hoc" survey of 90 firms in…

  19. Advanced Initiation Systems Manufacturing Level 2 Milestone Completion Summary

    SciTech Connect

    Chow, R; Schmidt, M

    2009-10-01

    Milestone Description - Advanced Initiation Systems Detonator Design and Prototype. Milestone Grading Criteria - Design a new-generation chip slapper detonator and manufacture a prototype using advanced manufacturing processes, such as all-dry chip metallization and solvent-less flyer coatings. The advanced processes have been developed for manufacturing detonators with high material compatibility and reliability to support future LEPs, e.g. the B61, and new weapons systems. Perform velocimetry measurements to determine slapper velocity as a function of flight distance. A prototype detonator assembly and stripline was designed for low-energy chip slappers. Pictures of the prototype detonator and stripline are shown. All-dry manufacturing processes were used to address compatibility issues. KCP metallized the chips in a physical vapor deposition system through precision-aligned shadow masks. LLNL deposited a solvent-less polyimide flyer with a process called SLIP, which stands for solvent-less vapor deposition followed by in-situ polymerization. LANL manufactured the high-surface-area (HSA) high explosive (HE) pellets. Test fires of two chip slapper designs, radius and bowtie, were performed at LLNL in the High Explosives Application Facility (HEAF). Test fires with HE were conducted to establish the threshold firing voltages. Pictures of the chip slappers before and after test fires are shown. Velocimetry tests were then performed to obtain slapper velocities at or above the threshold firing voltages. Figure 5 shows the slapper velocity as a function of distance and time at the threshold voltage, for both radius and bowtie bridge designs. Both designs were successful at initiating the HE at low energy levels. Summary of Accomplishments are: (1) All-dry process for chip manufacture developed; (2) Solventless process for slapper materials developed; (3) High-surface area explosive pellets developed; (4) High performance chip slappers developed; (5) Low-energy chip

  20. Process development status report for advanced manufacturing projects

    SciTech Connect

    Brinkman, J.R.; Homan, D.A.

    1990-03-30

    This is the final status report for the approved Advanced Manufacturing Projects for FY 1989. Five of the projects were begun in FY 1987, one in FY 1988, and one in FY 1989. The approved projects cover technology areas in welding, explosive material processing and evaluation, ion implantation, and automated manufacturing. It is expected that the successful completion of these projects will result in improved quality and/or reduced cost for components produced by Mound. Those projects not brought to completion will be continued under process development in FY 1990.

  1. Materials/manufacturing element of the Advanced Turbine System Program

    SciTech Connect

    Karnitz, M.A.; Devan, J.H.; Holcomb, R.S.; Ferber, M.K.; Harrison, R.W.

    1994-08-01

    One of the supporting elements of the Advanced Turbine Systems (ATS) Program is the materials/manufacturing technologies task. The objective of this element is to address critical materials issues for both industrial and utility gas turbines. DOE Oak Ridge Operations Office (ORO) will manage this element of the program, and a team from DOE-ORO and Oak Ridge National Laboratory is coordinating the planning for the materials/manufacturing effort. This paper describes that planning activity which is in the early stages.

  2. Large scale topography of Io

    NASA Technical Reports Server (NTRS)

    Gaskell, R. W.; Synnott, S. P.

    1987-01-01

    To investigate the large scale topography of the Jovian satellite Io, both limb observations and stereographic techniques applied to landmarks are used. The raw data for this study consists of Voyager 1 images of Io, 800x800 arrays of picture elements each of which can take on 256 possible brightness values. In analyzing this data it was necessary to identify and locate landmarks and limb points on the raw images, remove the image distortions caused by the camera electronics and translate the corrected locations into positions relative to a reference geoid. Minimizing the uncertainty in the corrected locations is crucial to the success of this project. In the highest resolution frames, an error of a tenth of a pixel in image space location can lead to a 300 m error in true location. In the lowest resolution frames, the same error can lead to an uncertainty of several km.
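
    The quoted sensitivity (a 0.1-pixel location error mapping to roughly 300 m on the surface) follows from the camera's angular resolution and the spacecraft range. A back-of-the-envelope check, using an assumed Voyager narrow-angle field of view and an assumed spacecraft-to-Io range (both illustrative, not from the paper):

    ```python
    import math

    # Assumed numbers: a ~0.424 deg field of view imaged onto an
    # 800-pixel-wide frame, viewed from an assumed range of 325,000 km.
    fov_rad = math.radians(0.424)
    ifov = fov_rad / 800              # angular size of one pixel, radians
    range_km = 325_000

    ground_scale_m = range_km * 1e3 * ifov  # metres on Io per pixel
    err_m = 0.1 * ground_scale_m            # error from a 0.1-pixel mislocation
    print(round(ground_scale_m), "m/pixel ->", round(err_m), "m per 0.1 pixel")
    ```

    Under these assumptions the 0.1-pixel error indeed lands near the 300 m figure cited for the highest-resolution frames.
    
    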

  3. Challenges for Large Scale Simulations

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    2010-03-01

    With computational approaches becoming ubiquitous the growing impact of large scale computing on research influences both theoretical and experimental work. I will review a few examples in condensed matter physics and quantum optics, including the impact of computer simulations in the search for supersolidity, thermometry in ultracold quantum gases, and the challenging search for novel phases in strongly correlated electron systems. While only a decade ago such simulations needed the fastest supercomputers, many simulations can now be performed on small workstation clusters or even a laptop: what was previously restricted to a few experts can now potentially be used by many. Only part of the gain in computational capabilities is due to Moore's law and improvement in hardware. Equally impressive is the performance gain due to new algorithms - as I will illustrate using some recently developed algorithms. At the same time modern peta-scale supercomputers offer unprecedented computational power and allow us to tackle new problems and address questions that were impossible to solve numerically only a few years ago. While there is a roadmap for future hardware developments to exascale and beyond, the main challenges are on the algorithmic and software infrastructure side. Among the problems that face the computational physicist are: the development of new algorithms that scale to thousands of cores and beyond, a software infrastructure that lifts code development to a higher level and speeds up the development of new simulation programs for large scale computing machines, tools to analyze the large volume of data obtained from such simulations, and as an emerging field provenance-aware software that aims for reproducibility of the complete computational workflow from model parameters to the final figures. 
Interdisciplinary collaborations and collective efforts will be required, in contrast to the cottage-industry culture currently present in many areas of computational
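
    The scaling challenge sketched above is often framed with Amdahl's law: any residual serial fraction caps the speedup attainable on thousands of cores, which is why algorithmic work matters as much as hardware. A minimal sketch:

    ```python
    def amdahl_speedup(serial_fraction, cores):
        """Ideal strong-scaling speedup with a fixed serial fraction."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

    # Even 1% serial work caps the speedup near 100x, no matter how
    # many cores a peta-scale machine provides.
    for cores in (8, 64, 1024, 65536):
        print(cores, round(amdahl_speedup(0.01, cores), 1))
    ```
    
    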

  4. Integration of Advanced Simulation and Visualization for Manufacturing Process Optimization

    NASA Astrophysics Data System (ADS)

    Zhou, Chenn; Wang, Jichao; Tang, Guangwu; Moreland, John; Fu, Dong; Wu, Bin

    2016-05-01

    The integration of simulation and visualization can provide a cost-effective tool for process optimization, design, scale-up and troubleshooting. The Center for Innovation through Visualization and Simulation (CIVS) at Purdue University Northwest has developed methodologies for such integration with applications in various manufacturing processes. The methodologies have proven to be useful for virtual design and virtual training to provide solutions addressing issues on energy, environment, productivity, safety, and quality in steel and other industries. In collaboration with its industrial partners, CIVS has provided solutions to companies, saving over US$38 million. CIVS is currently working with the steel industry to establish an industry-led Steel Manufacturing Simulation and Visualization Consortium through the support of a National Institute of Standards and Technology AMTech Planning Grant. The consortium focuses on supporting development and implementation of simulation and visualization technologies to advance steel manufacturing across the value chain.

  5. Advanced composite aileron for L-1011 transport aircraft: Aileron manufacture

    NASA Technical Reports Server (NTRS)

    Dunning, E. G.; Cobbs, W. L.; Legg, R. L.

    1981-01-01

    The fabrication activities of the Advanced Composite Aileron (ACA) program are discussed. These activities included detail fabrication, manufacturing development, assembly, repair and quality assurance. Five ship sets of ailerons were manufactured. The detail fabrication effort of ribs, spar and covers was accomplished on male tools to a common cure cycle. Graphite epoxy tape and fabric and syntactic epoxy materials were utilized in the fabrication. The ribs and spar were net cured and required no post cure trim. Material inconsistencies resulted in manufacturing development of the front spar during the production effort. The assembly effort was accomplished in subassembly and assembly fixtures. The manual drilling system utilized a dagger type drill in a hydraulic feed control hand drill. Coupon testing for each detail was done.

  6. Spacesuit glove manufacturing enhancements through the use of advanced technologies

    NASA Technical Reports Server (NTRS)

    Cadogan, David; Bradley, David; Kosmo, Joseph

    1993-01-01

    The success of astronauts performing extravehicular activity (EVA) on orbit is highly dependent upon the performance of their spacesuit gloves. A study has recently been conducted to advance the development and manufacture of spacesuit gloves. The process replaces the manual techniques of spacesuit glove manufacture by utilizing emerging technologies such as laser scanning, Computer Aided Design (CAD), computer-generated two-dimensional patterns from three-dimensional surfaces, rapid prototyping technology, and laser cutting of materials, to manufacture the new gloves. Results of the program indicate that the baseline process will not increase the cost of the gloves as compared to the existing styles, and in production may reduce the cost of the gloves. Perhaps the most important outcome of the Laserscan process is that greater accuracy and design control can be realized. Greater accuracy was achieved in the baseline anthropometric measurement and CAD data measurement, which subsequently improved the design features. This effectively enhances glove performance through better fit and comfort.

  7. Spacesuit glove manufacturing enhancements through the use of advanced technologies

    NASA Astrophysics Data System (ADS)

    Cadogan, David; Bradley, David; Kosmo, Joseph

    The success of astronauts performing extravehicular activity (EVA) on orbit is highly dependent upon the performance of their spacesuit gloves. A study has recently been conducted to advance the development and manufacture of spacesuit gloves. The process replaces the manual techniques of spacesuit glove manufacture by utilizing emerging technologies such as laser scanning, Computer Aided Design (CAD), computer-generated two-dimensional patterns from three-dimensional surfaces, rapid prototyping technology, and laser cutting of materials, to manufacture the new gloves. Results of the program indicate that the baseline process will not increase the cost of the gloves as compared to the existing styles, and in production may reduce the cost of the gloves. Perhaps the most important outcome of the Laserscan process is that greater accuracy and design control can be realized. Greater accuracy was achieved in the baseline anthropometric measurement and CAD data measurement, which subsequently improved the design features. This effectively enhances glove performance through better fit and comfort.

  8. Measurement of the steady surface pressure distribution on a single rotation large scale advanced prop-fan blade at Mach numbers from 0.03 to 0.78

    NASA Technical Reports Server (NTRS)

    Bushnell, Peter

    1988-01-01

    The aerodynamic pressure distribution was determined on a rotating Prop-Fan blade at the S1-MA wind tunnel facility operated by the Office National D'Etudes et de Recherches Aerospatiale (ONERA) in Modane, France. The pressure distributions were measured at thirteen radial stations on a single rotation Large Scale Advanced Prop-Fan (LAP/SR7) blade, for a sequence of operating conditions including inflow Mach numbers ranging from 0.03 to 0.78. Pressure distributions for more than one power coefficient and/or advance ratio setting were measured for most of the inflow Mach numbers investigated. Due to facility power limitations, the Prop-Fan test installation was a two-bladed version of the eight-bladed design configuration. The power coefficient range investigated was therefore selected to cover the typical power loading per blade conditions which occur within the Prop-Fan operating envelope. The experimental results provide an extensive source of information on the aerodynamic behavior of the swept Prop-Fan blade, including details which are not captured by current computational models and do not appear in the two-dimensional airfoil data.
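The power coefficient and advance ratio mentioned above are the standard propeller nondimensional parameters. The sketch below shows their textbook definitions; every operating-point number in it is an illustrative assumption, not a value from the report.

```python
# Standard propeller nondimensional parameters (textbook definitions,
# not taken from the report); all operating-point values are assumed.
import math

def advance_ratio(v_inf, n_rps, diameter):
    """J = V / (n D): forward travel per revolution, in diameters."""
    return v_inf / (n_rps * diameter)

def power_coefficient(power, rho, n_rps, diameter):
    """C_P = P / (rho n^3 D^5)."""
    return power / (rho * n_rps**3 * diameter**5)

# Hypothetical high-speed operating point for a 2.74 m rotor:
v_inf = 230.0     # m/s, roughly Mach 0.78 at assumed altitude conditions
n_rps = 28.0      # rotational speed, rev/s (assumed)
d = 2.74          # rotor diameter, m (assumed)
rho = 0.46        # air density, kg/m^3 (assumed altitude condition)
p_shaft = 1.2e6   # shaft power, W (assumed)

J = advance_ratio(v_inf, n_rps, d)
CP = power_coefficient(p_shaft, rho, n_rps, d)
print(f"J = {J:.2f}, C_P = {CP:.3f}")
```

Holding power loading per blade constant while halving the blade count, as the two-bladed test installation did, keeps each blade near its design operating condition at a lower total power coefficient.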

  9. Spectrophotometric Procedure for Fast Reactor Advanced Coolant Manufacture Control

    NASA Astrophysics Data System (ADS)

    Andrienko, O. S.; Egorov, N. B.; Zherin, I. I.; Indyk, D. V.

    2016-01-01

    The paper describes a spectrophotometric procedure for fast reactor advanced coolant manufacture control. The molar absorption coefficient of dimethyllead dibromide with dithizone was determined to be 68864 ± 795 l·mol⁻¹·cm⁻¹, and the limit of detection to be 0.583·10⁻⁶ g/ml. The applicable range of the spectrophotometric procedure was found to be 37.88-196.3 g of dimethyllead dibromide in the sample. The procedure was used within the framework of developing the method of synthesis of the advanced coolant for fast reactors.
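The reported molar absorption coefficient converts a measured absorbance to a concentration via the Beer-Lambert law, A = ε·c·l. The sketch below assumes a 1 cm path length and an approximate molar mass for dimethyllead dibromide, neither of which is stated in the abstract.

```python
# Beer-Lambert concentration estimate using the molar absorption
# coefficient reported in the abstract. The path length and molar mass
# are assumptions for illustration.

EPSILON = 68864.0   # l·mol^-1·cm^-1 (from the abstract)
PATH_CM = 1.0       # standard 1 cm cuvette (assumed)
M_DMPB = 397.1      # g/mol for (CH3)2PbBr2 (approximate)

def concentration_g_per_ml(absorbance, epsilon=EPSILON,
                           path_cm=PATH_CM, molar_mass=M_DMPB):
    """c = A / (epsilon * l), converted from mol/l to g/ml."""
    mol_per_l = absorbance / (epsilon * path_cm)
    return mol_per_l * molar_mass / 1000.0  # g/ml

print(f"{concentration_g_per_ml(0.1):.3e} g/ml")
```

Under these assumptions an absorbance of about 0.1 corresponds to a concentration of the same order as the reported 0.583·10⁻⁶ g/ml limit of detection.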

  10. The implementation and control of advanced manufacturing systems

    NASA Astrophysics Data System (ADS)

    Anstiss, P.

    An account is given of the development and control of a flexible manufacturing system for small machined parts which can prepare raw materials for fixturing, assemble all necessary resources, then process 'nests' of components through machining, inspection, and secondary operations to produce finished parts ready for surface treatment or painting. The system employs automated stores, transport and machine tools, local area network communications, advanced computer control systems for all automatic and manual functions, and comprehensive tool storage, handling and preparation facilities.

  11. Needs, opportunities, and options for large scale systems research

    SciTech Connect

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of large scale systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering, and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference, with three experts in each area as panel members, to discuss the general area of large scale systems research. The conference was held on March 26-27, 1984, in Pittsburgh with nine panel members and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  12. Advanced Manufacturing for a U.S. Clean Energy Economy (Fact Sheet)

    SciTech Connect

    Not Available

    2012-03-01

    This fact sheet is an overview of the U.S. Department of Energy's Advanced Manufacturing Office. Manufacturing is central to our economy, culture, and history. The industrial sector produces 11% of U.S. gross domestic product (GDP), employs 12 million people, and generates 57% of U.S. export value. However, U.S. industry consumes about one-third of all energy produced in the United States, and significant cost-effective energy efficiency and advanced manufacturing opportunities remain unexploited. As a critical component of the National Innovation Policy for Advanced Manufacturing, the U.S. Department of Energy's (DOE's) Advanced Manufacturing Office (AMO) is focused on creating a fertile environment for advanced manufacturing innovation, enabling vigorous domestic development of transformative manufacturing technologies, promoting coordinated public and private investment in precompetitive advanced manufacturing technology infrastructure, and facilitating the rapid scale-up and market penetration of advanced manufacturing technologies.

  13. Large Scale Homing in Honeybees

    PubMed Central

    Pahl, Mario; Zhu, Hong; Tautz, Jürgen; Zhang, Shaowu

    2011-01-01

    Honeybee foragers frequently fly several kilometres to and from vital resources, and communicate those locations to their nest mates by a symbolic dance language. Research has shown that they achieve this feat by memorizing landmarks and the skyline panorama, using the sun and polarized skylight as compasses and by integrating their outbound flight paths. In order to investigate the capacity of the honeybees' homing abilities, we artificially displaced foragers to novel release spots at various distances up to 13 km in the four cardinal directions. Returning bees were individually registered by a radio frequency identification (RFID) system at the hive entrance. We found that homing rate, homing speed and the maximum homing distance depend on the release direction. Bees released in the east were more likely to find their way back home, and returned faster than bees released in any other direction, due to the familiarity of global landmarks seen from the hive. Our findings suggest that such large scale homing is facilitated by global landmarks acting as beacons, and possibly the entire skyline panorama. PMID:21602920

  14. 78 FR 34346 - Proposed Information Collection; Comment Request; NIST MEP Advanced Manufacturing Jobs and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-07

    ... Advanced Manufacturing Jobs and Innovation Accelerator Challenge (AMJIAC) Client Impact Survey AGENCY... information collection. The purpose of the Advanced Manufacturing Jobs and Innovation Accelerator Challenge... to support job creation, encourage economic development, and enhance the competitiveness of...

  15. National Center for Advanced Information Components Manufacturing. Program summary report, Volume 1

    SciTech Connect

    1996-10-01

    The National Center for Advanced Information Components Manufacturing focused on manufacturing research and development for flat panel displays, advanced lithography, microelectronics, and optoelectronics. This report provides an overview of the program, summaries of the technical projects, and key program accomplishments.

  16. Large Scale Nanolaminate Deformable Mirror

    SciTech Connect

    Papavasiliou, A; Olivier, S; Barbee, T; Miles, R; Chang, K

    2005-11-30

    This work concerns the development of a technology that uses nanolaminate foils to form lightweight, deformable mirrors that are scalable over a wide range of mirror sizes. While MEMS-based deformable mirrors and spatial light modulators have considerably reduced the cost and increased the capabilities of adaptive optics systems, there has not been a way to utilize the advantages of lithography and batch fabrication to produce large-scale deformable mirrors. This technology is made scalable by using fabrication techniques and lithography that are not limited to the sizes of conventional MEMS devices. Like many MEMS devices, these mirrors use parallel-plate electrostatic actuators. This technology replicates that functionality by suspending a horizontal piece of nanolaminate foil over an electrode on electroplated nickel posts. This actuator is attached, with another post, to another nanolaminate foil that acts as the mirror surface. Most MEMS devices are produced with integrated-circuit lithography techniques that are capable of very small line widths but are not scalable to large sizes. This technology is very tolerant of lithography errors and can use coarser, printed-circuit-board lithography techniques that can be scaled to very large sizes. These mirrors use small, lithographically defined actuators and thin nanolaminate foils, allowing them to produce deformations over a large area while minimizing weight. This paper describes a staged program to develop this technology. First-principles models were developed to determine design parameters. Three stages of fabrication are described, starting with a 3 x 3 device using conventional metal foils and epoxy and ending with a 10-across all-metal device with nanolaminate mirror surfaces.
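Parallel-plate electrostatic actuators like those described above have a well-known pull-in limit: beyond about one third of the gap, the electrostatic force overwhelms the restoring spring and the plate snaps down. The sketch below uses the standard MEMS pull-in formula with assumed stiffness, gap, and plate dimensions; none of these figures come from the paper.

```python
# Classic parallel-plate electrostatic pull-in estimate (standard MEMS
# result, not from the paper); all device parameters are assumptions.
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pull_in_voltage(k, gap, area):
    """V_pi = sqrt(8 k g^3 / (27 eps0 A)): voltage at which the plate
    snaps down, reached after ~g/3 of stable travel."""
    return math.sqrt(8.0 * k * gap**3 / (27.0 * EPS0 * area))

k = 50.0            # N/m, effective suspension stiffness (assumed)
gap = 5e-6          # m, electrode gap (assumed)
area = (500e-6)**2  # m^2, 500 um square actuator plate (assumed)

print(f"pull-in ~ {pull_in_voltage(k, gap, area):.1f} V")
```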

  17. Large-Scale Information Systems

    SciTech Connect

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  18. A manufacturing database of advanced materials used in spacecraft structures

    NASA Technical Reports Server (NTRS)

    Bao, Han P.

    1994-01-01

    Cost savings opportunities over the life cycle of a product are highest in the early exploratory phase, when different design alternatives are evaluated not only for their performance characteristics but also for their methods of fabrication, which ultimately control the manufacturing costs of the product. In the past, Design-To-Cost methodologies for spacecraft design concentrated on sizing and weight issues more than anything else at the early, so-called 'Vehicle Level' (Ref: DOD/NASA Advanced Composites Design Guide). Given the impact of manufacturing cost, the objective of this study is to identify the principal cost drivers for each materials technology and propose a quantitative approach to incorporating these cost drivers into the family of optimization tools used by the Vehicle Analysis Branch of NASA LaRC to assess various conceptual vehicle designs. The advanced materials being considered include aluminum-lithium alloys, thermoplastic graphite-polyetheretherketone composites, graphite-bismaleimide composites, graphite-polyimide composites, and carbon-carbon composites. Two conventional materials are added to the study to serve as baseline materials against which the other materials are compared: aircraft aluminum alloys of the 2000 and 7000 series, and graphite-epoxy composite T-300/934. The following information is available in the database. For each material type, the mechanical, physical, thermal, and environmental properties are first listed. Next, the principal manufacturing processes are described. Whenever possible, guidelines for optimum processing conditions for specific applications are provided. Finally, six categories of cost drivers are discussed: design features affecting processing, tooling, materials, fabrication, joining/assembly, and quality assurance issues. It should be emphasized that this is not an exhaustive database. Its primary use is to make the vehicle designer

  19. Feature-based tolerancing for advanced manufacturing applications

    SciTech Connect

    Brown, C.W.; Kirk, W.J. III; Simons, W.R.; Ward, R.C.; Brooks, S.L.

    1994-11-01

    A primary requirement for the successful deployment of advanced manufacturing applications is the need for a complete and accessible definition of the product. This product definition must not only provide an unambiguous description of a product's nominal shape but must also contain complete tolerance specification and general property attributes. Likewise, the product definition's geometry, topology, tolerance data, and modeler manipulative routines must be fully accessible through a robust application programmer interface. This paper describes a tolerancing capability using features that complements a geometric solid model with a representation of conventional and geometric tolerances and non-shape property attributes. This capability guarantees a complete and unambiguous definition of tolerances for manufacturing applications. An object-oriented analysis and design of the feature-based tolerance domain was performed. The design represents and relates tolerance features, tolerances, and datum reference frames. The design also incorporates operations that verify correctness and check for the completeness of the overall tolerance definition. The checking algorithm is based upon the notion of satisfying all of a feature's toleranceable aspects. Benefits from the feature-based tolerance modeler include: advancing complete product definition initiatives, incorporating tolerances in product data exchange, and supplying computer-integrated manufacturing applications with tolerance information.
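The completeness check described above, satisfying all of a feature's toleranceable aspects, can be sketched as follows. The class and attribute names are hypothetical illustrations, not the paper's actual object-oriented design.

```python
# Minimal sketch (hypothetical names, not the paper's design) of the
# completeness idea: a tolerance definition is complete when every
# toleranceable aspect of every feature is covered by some tolerance.
from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    aspects: set                                     # e.g. {"size", "location"}
    tolerances: dict = field(default_factory=dict)   # aspect -> spec string

    def uncovered(self):
        """Toleranceable aspects not yet satisfied by any tolerance."""
        return self.aspects - self.tolerances.keys()

def is_complete(features):
    return all(not f.uncovered() for f in features)

hole = Feature("hole1", {"size", "location"})
hole.tolerances["size"] = "D10 +/-0.05"
print(is_complete([hole]), hole.uncovered())  # incomplete: location missing
```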

  20. Large scale cryogenic fluid systems testing

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA Lewis Research Center's Cryogenic Fluid Systems Branch (CFSB) within the Space Propulsion Technology Division (SPTD) has the ultimate goal of enabling the long term storage and in-space fueling/resupply operations for spacecraft and reusable vehicles in support of space exploration. Using analytical modeling, ground based testing, and on-orbit experimentation, the CFSB is studying three primary categories of fluid technology: storage, supply, and transfer. The CFSB is also investigating fluid handling, advanced instrumentation, and tank structures and materials. Ground based testing of large-scale systems is done using liquid hydrogen as a test fluid at the Cryogenic Propellant Tank Facility (K-site) at Lewis' Plum Brook Station in Sandusky, Ohio. A general overview of tests involving liquid transfer, thermal control, pressure control, and pressurization is given.

  1. Batteries for Large Scale Energy Storage

    SciTech Connect

    Soloveichik, Grigorii L.

    2011-07-15

    In recent years, with the deployment of renewable energy sources, advances in electrified transportation, and development in smart grids, the markets for large-scale stationary energy storage have grown rapidly. Electrochemical energy storage methods are strong candidate solutions due to their high energy density, flexibility, and scalability. This review provides an overview of mature and emerging technologies for secondary and redox flow batteries. New developments in the chemistry of secondary and flow batteries as well as regenerative fuel cells are also considered. Advantages and disadvantages of current and prospective electrochemical energy storage options are discussed. The most promising technologies in the short term are high-temperature sodium batteries with β″-alumina electrolyte, lithium-ion batteries, and flow batteries. Regenerative fuel cells and lithium metal batteries with high energy density require further research to become practical.

  2. Large-Scale Astrophysical Visualization on Smartphones

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  3. Advances in the manufacturing, types, and applications of biosensors

    NASA Astrophysics Data System (ADS)

    Ravindra, Nuggehalli M.; Prodan, Camelia; Fnu, Shanmugamurthy; Padronl, Ivan; Sikha, Sushil K.

    2007-12-01

    In recent years, there have been significant technological advancements in the manufacturing, types, and applications of biosensors. Applications include clinical and non-clinical diagnostics for home, bio-defense, bio-remediation, environment, agriculture, and the food industry. Biosensors have progressed beyond the detection of biological threats such as anthrax and are finding use in a number of non-biological applications. Emerging biosensor technologies such as lab-on-a-chip have revolutionized integration approaches for a very flexible, innovative, and user-friendly platform. An overview of the fundamentals, types, applications, and manufacturers, as well as the market trends of biosensors, is presented here. Two case studies are discussed: one focused on a characterization technique, patch clamping and dielectric spectroscopy as a biological sensor, and the other about lithium phthalocyanine, a material that is being developed for in-vivo oximetry.

  4. Advanced manufacturing technologies for the BeCOAT telescope

    NASA Astrophysics Data System (ADS)

    Sweeney, Michael N.; Rajic, Slobodan; Seals, Roland D.

    1994-02-01

    The beryllium cryogenic off-axis telescope (BeCOAT) uses a two-mirror, non-re-imaging, off-axis Ritchey-Chretien design with all-beryllium optics, structures, and baffles. The purpose of this telescope is the system-level demonstration of advanced manufacturing technologies for optics, optical benches, and baffle assemblies. The key issues addressed are single-point diamond turning of beryllium optics, survivable fastening techniques, minimum beryllium utilization, and technologies leading to self-aligning, all-beryllium optical systems.

  5. Advanced computational research in materials processing for design and manufacturing

    SciTech Connect

    Zacharia, T.

    1994-12-31

    The computational requirements for the design and manufacture of automotive components have increased dramatically with the drive to produce automobiles with three times the mileage. Automotive component design systems are becoming increasingly reliant on structural analysis, requiring larger and more complex analyses, more three-dimensional analyses, larger model sizes, and routine consideration of transient and nonlinear effects. Such analyses must be performed rapidly to minimize delays in the design and development process, which drives the need for parallel computing. This paper briefly describes advanced computational research in superplastic forming and automotive crashworthiness.

  6. Survey of decentralized control methods. [for large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1975-01-01

    An overview is presented of the types of problems that are being considered by control theorists in the area of dynamic large scale systems with emphasis on decentralized control strategies. Approaches that deal directly with decentralized decision making for large scale systems are discussed. It is shown that future advances in decentralized system theory are intimately connected with advances in the stochastic control problem with nonclassical information pattern. The basic assumptions and mathematical tools associated with the latter are summarized, and recommendations concerning future research are presented.

  7. Advanced Manufacturing Processes Laboratory Building 878 hazards assessment document

    SciTech Connect

    Wood, C.; Thornton, W.; Swihart, A.; Gilman, T.

    1994-07-01

    The purpose of the hazards assessment process is to document the impact of the release of hazards at the Advanced Manufacturing Processes Laboratory (AMPL) that are significant enough to warrant consideration in Sandia National Laboratories' operational emergency management program. This hazards assessment is prepared in accordance with the Department of Energy Order 5500.3A requirement that facility-specific hazards assessments be prepared, maintained, and used for emergency planning purposes. This hazards assessment provides an analysis of the potential airborne release of chemicals associated with the operations and processes at the AMPL. This research and development laboratory develops advanced manufacturing technologies, practices, and unique equipment and provides the fabrication of prototype hardware to meet the needs of Sandia National Laboratories, Albuquerque, New Mexico (SNL/NM). The focus of the hazards assessment is the airborne release of materials, because this requires the most rapid, coordinated emergency response on the part of the AMPL, SNL/NM, collocated facilities, and the surrounding jurisdiction to protect workers, the public, and the environment.

  8. Influence of Manufacturing Processes and Microstructures on the Performance and Manufacturability of Advanced High Strength Steels

    SciTech Connect

    Choi, Kyoo Sil; Liu, Wenning N.; Sun, Xin; Khaleel, Mohammad A.

    2009-10-01

    Advanced high strength steels (AHSS) are performance-based steel grades whose global material properties can be achieved with various steel chemistries and manufacturing processes, leading to various microstructures. In this paper, we investigate the influence of supplier variation, and the resulting microstructure differences, on the overall mechanical properties as well as the local formability behaviors of AHSS. For this purpose, we first examined the basic material properties and the transformation kinetics of TRansformation Induced Plasticity (TRIP) 800 steels from three different suppliers under different testing temperatures. The experimental results show that the mechanical and microstructural properties of TRIP 800 steel depend significantly on the supplier (i.e., on the manufacturing process). Next, we examined the local formability of two commercial Dual Phase (DP) 980 steels during the stamping process. The two commercial DP 980 steels also exhibit noticeably different formability during stamping, in the sense that one of them shows a severe tendency toward shear fracture. Microstructure-based finite element analyses were then carried out to simulate the localized deformation process with the two DP 980 microstructures, and the results suggest that the possible reason for the difference in formability lies in the morphology of the hard martensite phase in the DP microstructure.

  9. 10 CFR 611.202 - Advanced Technology Vehicle Manufacturing Facility Award Program.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Vehicle Manufacturing Facility Award Program. DOE may issue, under the Advanced Technology Vehicle Manufacturing Facility Award Program, 10 CFR part 611, subpart C, awards for eligible projects. ... 10 Energy 4 2014-01-01 2014-01-01 false Advanced Technology Vehicle Manufacturing Facility...

  10. 10 CFR 611.202 - Advanced Technology Vehicle Manufacturing Facility Award Program.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Vehicle Manufacturing Facility Award Program. DOE may issue, under the Advanced Technology Vehicle Manufacturing Facility Award Program, 10 CFR part 611, subpart C, awards for eligible projects. ... 10 Energy 4 2011-01-01 2011-01-01 false Advanced Technology Vehicle Manufacturing Facility...

  11. 10 CFR 611.202 - Advanced Technology Vehicle Manufacturing Facility Award Program.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Vehicle Manufacturing Facility Award Program. DOE may issue, under the Advanced Technology Vehicle Manufacturing Facility Award Program, 10 CFR part 611, subpart C, awards for eligible projects. ... 10 Energy 4 2013-01-01 2013-01-01 false Advanced Technology Vehicle Manufacturing Facility...

  12. 10 CFR 611.202 - Advanced Technology Vehicle Manufacturing Facility Award Program.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Vehicle Manufacturing Facility Award Program. DOE may issue, under the Advanced Technology Vehicle Manufacturing Facility Award Program, 10 CFR part 611, subpart C, awards for eligible projects. ... 10 Energy 4 2012-01-01 2012-01-01 false Advanced Technology Vehicle Manufacturing Facility...

  13. 10 CFR 611.202 - Advanced Technology Vehicle Manufacturing Facility Award Program.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Manufacturing Facility Award Program, 10 CFR part 611, subpart C, awards for eligible projects. ... 10 Energy 4 2010-01-01 2010-01-01 false Advanced Technology Vehicle Manufacturing Facility Award... TECHNOLOGY VEHICLES MANUFACTURER ASSISTANCE PROGRAM Facility/Funding Awards § 611.202 Advanced...

  14. Lessons from Large-Scale Renewable Energy Integration Studies: Preprint

    SciTech Connect

    Bird, L.; Milligan, M.

    2012-06-01

    In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

  15. Advanced manufacturing by spray forming: Aluminum strip and microelectromechanical systems

    SciTech Connect

    McHugh, K.M.

    1994-12-31

    Spray forming is an advanced materials processing technology that converts a bulk liquid metal to a near-net-shape solid by depositing atomized droplets onto a suitably shaped substrate. By combining rapid solidification processing with product shape control, spray forming can reduce manufacturing costs while improving product quality. INEL is developing a unique spray-forming method based on de Laval (converging/diverging) nozzle designs to produce near-net-shape solids and coatings of metals, polymers, and composite materials. Properties of the spray-formed material are tailored by controlling the characteristics of the spray plume and substrate. Two examples are described: high-volume production of aluminum alloy strip, and the replication of micron-scale features in micropatterned polymers during the production of microelectromechanical systems.
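The de Laval (converging/diverging) nozzle at the heart of this spray-forming method is governed by the standard isentropic area-Mach relation, a compressible-flow textbook result rather than anything specific to the INEL design: the flow chokes at the throat (M = 1), and the required area ratio A/A* grows on either side of it.

```python
# Isentropic area-Mach relation for a de Laval nozzle (standard
# compressible-flow result for a perfect gas, not from the report).

def area_ratio(mach, gamma=1.4):
    """A/A* for isentropic flow; gamma = 1.4 models air or N2."""
    term = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * mach**2)
    return term ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / mach

for m in (0.5, 1.0, 2.0):
    print(f"M = {m}: A/A* = {area_ratio(m):.4f}")
```

At the throat area_ratio(1.0) is exactly 1; for a supersonic atomizing jet at M = 2 the exit must be about 1.69 times the throat area.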

  16. Precision manufacturing using advanced optical interference lithography. Final report

    SciTech Connect

    Britten, J.A.; Hawryluk, A.M.

    1997-04-03

    The goal was to develop interference lithography (IL) as a reliable process for patterning large-area, deep-submicron-scale field emission arrays for field emission display (FED) applications. We have developed a system based on IL which can easily produce an array of 0.2-0.5 micron emitters over large areas (up to 400 sq. in. to date) with better than 5% height and spacing uniformity. The process development achieved in this LDRD project represents a significant advance over the current state of the art for FED manufacturing and is applicable to all types of FEDs, independent of the emitter material. The ability of IL to pattern such structures simultaneously and uniformly on a large format has application to other technology areas, such as dynamic random access memory (DRAM) production and magnetic media recording.
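The feature pitch achievable with two-beam interference lithography follows the standard fringe-spacing relation, period = λ / (2 sin θ). The sketch below uses an illustrative laser wavelength and half-angles; these are textbook values, not the parameters of the system described.

```python
# Fringe period for two-beam laser interference lithography (standard
# two-beam interference result; wavelength and angles are illustrative).
import math

def fringe_period_nm(wavelength_nm, half_angle_deg):
    """Period = lambda / (2 sin(theta)) for two beams at +/- theta."""
    return wavelength_nm / (2.0 * math.sin(math.radians(half_angle_deg)))

# Assumed 413 nm source; larger interbeam angles give finer pitch.
for theta in (15.0, 30.0, 60.0):
    print(f"theta = {theta:4.1f} deg -> period = "
          f"{fringe_period_nm(413.0, theta):6.1f} nm")
```

Since the period is set only by wavelength and beam geometry, it is uniform across however large an area the two beams overlap, which is why IL scales to large formats.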

  17. Large-Scale Reform Comes of Age

    ERIC Educational Resources Information Center

    Fullan, Michael

    2009-01-01

    This article reviews the history of large-scale education reform and makes the case that large-scale or whole-system reform policies and strategies are becoming increasingly evident. The review briefly addresses the pre-1997 period, concluding that while the pressure for reform was mounting, there were very few examples of deliberate or…

  18. Large-scale infrared scene projectors

    NASA Astrophysics Data System (ADS)

    Murray, Darin A.

    1999-07-01

    Large-scale infrared scene projectors typically have unique opto-mechanical characteristics associated with their application. This paper outlines two large-scale zoom lens assemblies with different environmental and package constraints. Various challenges and their respective solutions are discussed.

  19. Developing novel 3D antennas using advanced additive manufacturing technology

    NASA Astrophysics Data System (ADS)

    Mirzaee, Milad

    In today's world of wireless communication systems, antenna engineering is rapidly advancing as wireless services continue to expand in support of emerging commercial applications. Antennas play a key role in the performance of advanced transceiver systems, where they serve to convert electric power to electromagnetic waves and vice versa. Researchers have shown significant interest in developing this crucial component for wireless communication systems by employing a variety of design techniques. In the past few years, demand for electrically small antennas has continued to increase, particularly among portable and mobile wireless devices, medical electronics, and aerospace systems. This trend toward smaller electronic devices makes three-dimensional (3D) antennas very appealing, since they can be designed to use every available space inside the device. Additive Manufacturing (AM) methods could provide solutions for antenna design for the next generation of wireless communication systems. In this thesis, the design and fabrication of 3D printed antennas using AM technology are studied. To demonstrate this application of AM, different types of antenna structures have been designed and fabricated using various manufacturing processes. This thesis studies, for the first time, embedded conductive 3D printed antennas using PolyLactic Acid (PLA) and Acrylonitrile Butadiene Styrene (ABS) for the substrate parts and high-temperature carbon paste for the conductive parts, which is a good candidate for overcoming the limitations of direct printing on 3D surfaces, the most popular method of fabricating the conductive parts of antennas. This thesis also studies, for the first time, the fabrication of antennas with 3D printed conductive parts, which can contribute to a new generation of 3D printed antennas.

  20. Large-scale wind turbine structures

    NASA Technical Reports Server (NTRS)

    Spera, David A.

    1988-01-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and design of innovative structural response. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  1. Large-scale wind turbine structures

    NASA Astrophysics Data System (ADS)

    Spera, David A.

    1988-05-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and design of innovative structural response. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  2. Large-Scale Statistics for Cu Electromigration

    NASA Astrophysics Data System (ADS)

    Hauschildt, M.; Gall, M.; Hernandez, R.

    2009-06-01

    Even after the successful introduction of Cu-based metallization, the electromigration failure risk has remained one of the important reliability concerns for advanced process technologies. The observation of strong bimodality for the electron up-flow direction in dual-inlaid Cu interconnects has added complexity, but is now widely accepted. The failure voids can occur both within the via ("early" mode) or within the trench ("late" mode). More recently, bimodality has been reported also in down-flow electromigration, leading to very short lifetimes due to small, slit-shaped voids under vias. For a more thorough investigation of these early failure phenomena, specific test structures were designed based on the Wheatstone Bridge technique. The use of these structures enabled an increase of the tested sample size close to 675,000, allowing a direct analysis of electromigration failure mechanisms at the single-digit ppm regime. Results indicate that down-flow electromigration exhibits bimodality at very small percentage levels, not readily identifiable with standard testing methods. The activation energy for the down-flow early failure mechanism was determined to be 0.83±0.02 eV. Within the small error bounds of this large-scale statistical experiment, this value is deemed to be significantly lower than the usually reported activation energy of 0.90 eV for electromigration-induced diffusion along Cu/SiCN interfaces. Due to the advantages of the Wheatstone Bridge technique, we were also able to expand the experimental temperature range down to 150 °C, coming quite close to typical operating conditions up to 125 °C. As a result of the lowered activation energy, we conclude that the down-flow early failure mode may control the chip lifetime at operating conditions. The slit-like character of the early failure void morphology also raises concerns about the validity of the Blech effect for this mechanism. A very small amount of Cu depletion may cause failure even before a
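The reported activation energy determines how lifetimes measured at stress temperature extrapolate toward operating temperature. A minimal sketch of that extrapolation, using only the Arrhenius temperature term of Black's equation with the values quoted in the abstract (the current-density term is omitted here):

```python
import math

# Arrhenius acceleration factor between a stress temperature and a use
# temperature, using the activation energy reported in the abstract.
EA_EV = 0.83            # activation energy, eV (down-flow early failure mode)
K_B = 8.617e-5          # Boltzmann constant, eV/K

def accel_factor(t_stress_c: float, t_use_c: float, ea_ev: float = EA_EV) -> float:
    """Lifetime ratio t_use / t_stress from the Arrhenius term of Black's equation."""
    t_stress = t_stress_c + 273.15
    t_use = t_use_c + 273.15
    return math.exp(ea_ev / K_B * (1.0 / t_use - 1.0 / t_stress))

# Extrapolating from the lowest test temperature (150 C) to the quoted
# operating condition (125 C): lifetimes are only about 4x longer at use.
print(round(accel_factor(150.0, 125.0), 2))
```

The modest gap between 150 °C and 125 °C is why testing near operating temperature, as the Wheatstone Bridge structures allow, reduces extrapolation risk.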

  3. Synthesis of small and large scale dynamos

    NASA Astrophysics Data System (ADS)

    Subramanian, Kandaswamy

    Using a closure model for the evolution of magnetic correlations, we uncover an interesting plausible saturated state of the small-scale fluctuation dynamo (SSD) and a novel analogy between quantum mechanical tunnelling and the generation of large-scale fields. Large scale fields develop via the α-effect, but as magnetic helicity can only change on a resistive timescale, the time it takes to organize the field into large scales increases with magnetic Reynolds number. This is very similar to the results obtained from simulations using the full MHD equations.

  4. Advanced composites structural concepts and materials technologies for primary aircraft structures: Design/manufacturing concept assessment

    NASA Technical Reports Server (NTRS)

    Chu, Robert L.; Bayha, Tom D.; Davis, H. U.; Ingram, J. Ed; Shukla, Jay G.

    1992-01-01

    Composite Wing and Fuselage Structural Design/Manufacturing Concepts have been developed and evaluated. Trade studies were performed to determine how well the concepts satisfy the program goals of 25 percent cost savings, 40 percent weight savings with aircraft resizing, and 50 percent part count reduction as compared to the aluminum Lockheed L-1011 baseline. The concepts developed using emerging technologies such as large scale resin transfer molding (RTM), automated tow placement (ATP), braiding, out-of-autoclave and automated manufacturing processes for both thermoset and thermoplastic materials were evaluated for possible application in the design concepts. Trade studies were used to determine which concepts to carry into the detailed design development subtask.

  5. Large-scale regions of antimatter

    SciTech Connect

    Grobov, A. V. Rubin, S. G.

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  6. Decomposition and coordination of large-scale operations optimization

    NASA Astrophysics Data System (ADS)

    Cheng, Ruoyu

    Nowadays, highly integrated manufacturing has resulted in more and more large-scale industrial operations. As one of the most effective strategies to ensure high-level operations in modern industry, large-scale engineering optimization has garnered a great amount of interest from academic scholars and industrial practitioners. Large-scale optimization problems frequently occur in industrial applications, and many of them naturally present special structure or can be transformed to take on special structure. Some decomposition and coordination methods have the potential to solve these problems at a reasonable speed. This thesis focuses on three classes of large-scale optimization problems: linear programming, quadratic programming, and mixed-integer programming problems. The main contributions include the design of structural complexity analysis for investigating scaling behavior and computational efficiency of decomposition strategies, novel coordination techniques and algorithms to improve the convergence behavior of decomposition and coordination methods, as well as the development of a decentralized optimization framework which embeds the decomposition strategies in a distributed computing environment. The complexity study can provide fundamental guidelines for practical applications of the decomposition and coordination methods. In this thesis, several case studies suggest the viability of the proposed decentralized optimization techniques for real industrial applications. A pulp mill benchmark problem is used to investigate the applicability of the LP/QP decentralized optimization strategies, while a truck allocation problem in the decision support of mining operations is used to study the MILP decentralized optimization strategies.
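Decomposition-and-coordination schemes of the kind studied here split a coupled problem into independent subproblems plus a price-update (coordination) step. A minimal sketch on a hypothetical two-block quadratic problem with one coupling constraint, solved by dual decomposition with a subgradient price update (illustrative only, not a problem from the thesis):

```python
# Dual decomposition on: minimize (x1 - 1)^2 + (x2 - 3)^2  s.t.  x1 + x2 = 2.
# Each subproblem minimizes its own term plus a price on the shared resource;
# a coordinator adjusts the price (dual variable) until the coupling holds.

def solve_subproblem(target: float, price: float) -> float:
    # argmin_x (x - target)^2 + price * x  =>  x = target - price / 2
    return target - price / 2.0

price = 0.0
step = 0.5
for _ in range(200):
    x1 = solve_subproblem(1.0, price)   # block 1, solvable independently
    x2 = solve_subproblem(3.0, price)   # block 2, solvable independently
    violation = (x1 + x2) - 2.0         # subgradient of the dual function
    price += step * violation           # coordinator raises price if over-used

print(x1, x2, price)  # converges to x1=0, x2=2, price=2
```

The subproblems never see each other's variables, which is what makes the scheme suitable for the distributed computing environment the thesis describes.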

  7. Prosperity Game: Advanced Manufacturing Day, May 17, 1994

    SciTech Connect

    Berman, M.

    1994-12-01

    Prosperity Games are an outgrowth and adaptation of move/countermove and seminar War Games. Prosperity Games are simulations that explore complex issues in a variety of areas including economics, politics, sociology, environment, education and research. These issues can be examined from a variety of perspectives ranging from a global, macroeconomic and geopolitical viewpoint down to the details of customer/supplier/market interactions in specific industries. All Prosperity Games are unique in that both the game format and the player contributions vary from game to game. This report documents a 90-minute Prosperity Game conducted as part of Advanced Manufacturing Day on May 17, 1994. This was the fourth game conducted under the direction of the Center for National Industrial Alliances at Sandia. Although previous games lasted from one to two days, this abbreviated game produced interesting and important results. Most of the strategies proposed in previous games were reiterated here. These included policy changes in international trade, tax laws, the legal system, and the educational system. Government support of new technologies was encouraged as well as government-industry partnerships. The importance of language in international trade was an original contribution of this game. The deliberations and recommendations of these teams provide valuable insights as to the views of this diverse group of decision makers concerning policy changes, foreign competition, and the development, delivery and commercialization of new technologies.

  8. Impacts of advanced manufacturing technology on parametric estimating

    NASA Astrophysics Data System (ADS)

    Hough, Paul G.

    1989-12-01

    The introduction of advanced manufacturing technology in the aerospace industry poses serious challenges for government cost analysts. Traditionally, analysts have relied on parametric estimating techniques for both planning and budgeting. Despite its problems, this approach has proven to be a remarkably useful and robust tool for estimating new weapon system costs. However, rapid improvements in both product and process technology could exacerbate current difficulties and diminish the utility of the parametric approach. This paper reviews some weaknesses associated with parametrics, examines how specific aspects of the factory of the future may further affect parametric estimating, and suggests avenues of research for their resolution. This paper is an extended version of "Cost Estimating for the Factory of the Future." Parametric estimating is a method by which aggregated costs are derived as a function of high-level product characteristics or parameters. The resulting equations are known as cost estimating relationships (CERs). Such equations are particularly useful when detailed technical specifications are not available.
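A CER of the common power-law form cost = a · weight^b can be fit by ordinary least squares in log space. A brief sketch with synthetic data (the coefficients, units, and data points are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical historical data: empty weight (klb) vs. unit cost ($M),
# generated noise-free from a known CER, a = 2.5 and b = 0.8.
weight = np.array([10.0, 20.0, 40.0, 80.0])
cost = 2.5 * weight ** 0.8

# Fit log(cost) = log(a) + b * log(weight) by least squares.
b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
a = np.exp(log_a)

print(a, b)  # recovers a = 2.5, b = 0.8 on this noise-free data

# The fitted CER then predicts cost for a new design from its weight alone:
print(a * 60.0 ** b)
```

The paper's concern is precisely that advanced manufacturing can shift a and b between the historical programs used for fitting and the program being estimated.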

  9. Towards manufacturing of advanced logic devices by double-patterning

    NASA Astrophysics Data System (ADS)

    Koay, Chiew-seng; Halle, Scott; Holmes, Steven; Petrillo, Karen; Colburn, Matthew; van Dommelen, Youri; Jiang, Aiqin; Crouse, Michael; Dunn, Shannon; Hetzer, David; Kawakami, Shinichiro; Cantone, Jason; Huli, Lior; Rodgers, Martin; Martinick, Brian

    2011-04-01

    As reported previously, the IBM Alliance has established a DETO (Double-Expose-Track-Optimized) baseline, in collaboration with ASML, TEL, and CNSE, to evaluate commercially available DETO photoresist systems for the manufacturing of advanced logic devices. Although EUV lithography is the baseline strategy for <2x nm logic nodes, alternative techniques are still being pursued. The DETO technique produces pitch-split patterns capable of supporting 16 nm and 11 nm node semiconductor devices. We present the long-term monitoring performance of CD uniformity (CDU), overlay, and defectivity of our DETO process. CDU and overlay performance for controlled experiments is also presented. Two alignment schemes in DETO are compared experimentally for their effects on inter-level and intra-level overlay, and on space CDU. We also experimented with methods for improving CDU, in which the CD-Optimizer(TM) and DoseMapper(TM) were evaluated separately and in tandem. Overlay improvements using the Correction Per Exposure (CPE) and the intra-field High-Order Process Correction (i-HOPC) were compared against the usual linear correction method. The effects of the exposure field size are also compared between a small field and the full field. Throughout, we also compare the performance derived from stack-integrated wafers and bare-Si wafers.

  10. Large Scale Flame Spread Environmental Characterization Testing

    NASA Technical Reports Server (NTRS)

    Clayman, Lauren K.; Olson, Sandra L.; Gokoglu, Suleyman A.; Brooker, John E.; Ferkul, Paul V.; Kacher, Henry F.

    2013-01-01

    Under the Advanced Exploration Systems (AES) Spacecraft Fire Safety Demonstration Project (SFSDP), as a risk mitigation activity in support of the development of a large-scale fire demonstration experiment in microgravity, flame-spread tests were conducted in normal gravity on thin, cellulose-based fuels in a sealed chamber. The primary objective of the tests was to measure pressure rise in the chamber as sample material, burning direction (upward/downward), total heat release, heat release rate, and heat loss mechanisms were varied between tests. A Design of Experiments (DOE) method was applied to produce an array of tests from a fixed set of constraints, and a coupled response model was developed. Supplementary tests were run outside the experimental design to vary select parameters such as initial chamber pressure. The starting chamber pressure for each test was set below atmospheric to prevent chamber overpressure. Bottom ignition, or upward propagating burns, produced rapid, accelerating, turbulent flame spread. Pressure rise in the chamber increases as the amount of fuel burned increases, mainly because of the larger amount of heat generated and, to a much smaller extent, because of the increase in the number of moles of gas. Top ignition, or downward propagating burns, produced a steady flame spread with a very small flat flame across the burning edge. Steady-state pressure is achieved during downward flame spread as the pressure rises and plateaus. This indicates that the heat generation by the flame matches the heat loss to the surroundings during the longer, slower downward burns. One heat loss mechanism included mounting a heat exchanger directly above the burning sample in the path of the plume to act as a heat sink and more efficiently dissipate the heat due to the combustion event. This proved an effective means of chamber overpressure mitigation for the tests producing the most total heat release and thus was determined to be a feasible mitigation
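The link between heat release and chamber pressure rise can be sketched with a constant-volume ideal-gas estimate (illustrative numbers; this deliberately ignores the mole-count change and the heat losses that the tests were designed to measure):

```python
# Constant-volume ideal gas: adding energy Q raises pressure by
# dP = (gamma - 1) * Q / V, from dU = Q = n*cv*dT together with P*V = n*R*T.
GAMMA = 1.4            # ratio of specific heats, air

def pressure_rise_pa(q_joules: float, volume_m3: float) -> float:
    """Adiabatic, constant-volume pressure rise from heat addition Q."""
    return (GAMMA - 1.0) * q_joules / volume_m3

# e.g. 10 kJ of heat released into a hypothetical sealed 0.1 m^3 chamber:
print(pressure_rise_pa(10e3, 0.1) / 1e3, "kPa")  # 40.0 kPa
```

This upper-bound estimate is why total heat release was the dominant factor in the measured pressure rise, and why a heat-sink in the plume is effective at reducing it.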

  11. Advanced Manufacturing Technologies (AMT): Additive Manufactured Hot Fire Planning and Testing in GRC Cell 32 Project

    NASA Technical Reports Server (NTRS)

    Fikes, John C.

    2014-01-01

    The objective of this project is to hot-fire test an additively manufactured thrust chamber assembly (TCA), consisting of an injector and thrust chamber. GRC will install the additively manufactured Inconel 625 injector, two additively manufactured (SLM) water-cooled Cu-Cr thrust chamber barrels, and one additively manufactured (SLM) water-cooled Cu-Cr thrust chamber nozzle on the test stand in Cell 32 and perform hot-fire testing of the integrated TCA.

  12. National Center for Advanced Information Components Manufacturing. Program summary report, Volume II

    SciTech Connect

    1996-10-01

    The National Center for Advanced Information Components Manufacturing focused on manufacturing research and development for flat panel displays, advanced lithography, microelectronics, and optoelectronics. This report provides an overview of the program, program history, summaries of the technical projects, and key program accomplishments.

  13. Large-scale inhomogeneities and galaxy statistics

    NASA Technical Reports Server (NTRS)

    Schaeffer, R.; Silk, J.

    1984-01-01

    The density fluctuations associated with the formation of large-scale cosmic pancake-like and filamentary structures are evaluated using the Zel'dovich approximation for the evolution of nonlinear inhomogeneities in the expanding universe. It is shown that the large-scale nonlinear density fluctuations in the galaxy distribution due to pancakes modify the standard scale-invariant correlation function xi(r) at scales comparable to the coherence length of adiabatic fluctuations. The typical contribution of pancakes and filaments to the J3 integral, and more generally to the moments of galaxy counts in a volume of approximately (15-40 h^-1 Mpc)^3, provides a statistical test for the existence of large-scale inhomogeneities. An application to several recent three-dimensional data sets shows that, despite large observational uncertainties over the relevant scales, characteristic features may be present that can be attributed to pancakes in most, but not all, of the various galaxy samples.

  14. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    The problems inherent in large-scale systems such as power networks, communication networks, and economic or ecological systems were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large-scale systems, and tools specific to this class of systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification made of the different existing approaches to dealing with large-scale systems. A very similar classification is used here, even though the papers surveyed differ somewhat from those reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems such as large space structures. Some recent developments are added to this survey.

  15. The large-scale distribution of galaxies

    NASA Technical Reports Server (NTRS)

    Geller, Margaret J.

    1989-01-01

    The spatial distribution of galaxies in the universe is characterized on the basis of the six completed strips of the Harvard-Smithsonian Center for Astrophysics redshift-survey extension. The design of the survey is briefly reviewed, and the results are presented graphically. Vast low-density voids similar to the void in Bootes are found, almost completely surrounded by thin sheets of galaxies. Also discussed are the implications of the results for the survey sampling problem, the two-point correlation function of the galaxy distribution, the possibility of detecting large-scale coherent flows, theoretical models of large-scale structure, and the identification of groups and clusters of galaxies.

  16. Training Welders in Advanced Manufacturing Philosophies Nets Employability

    ERIC Educational Resources Information Center

    Wilson, Kristin

    2011-01-01

    As of September 2010, the U.S. manufacturing sector grew for the 14th consecutive month, leading some economists to speculate that, as with the Great Depression, American manufacturing will lead the economy out of the recession. It is a little bit of good news in a long stream of depressing employment reports. Career and technical educators…

  17. Multiscale and Multiphysics Modeling of Additive Manufacturing of Advanced Materials

    NASA Technical Reports Server (NTRS)

    Liou, Frank; Newkirk, Joseph; Fan, Zhiqiang; Sparks, Todd; Chen, Xueyang; Fletcher, Kenneth; Zhang, Jingwei; Zhang, Yunlu; Kumar, Kannan Suresh; Karnati, Sreekar

    2015-01-01

    The objective of this proposed project is to research and develop a prediction tool for advanced additive manufacturing (AAM) processes for advanced materials and develop experimental methods to provide fundamental properties and establish validation data. Aircraft structures and engines demand materials that are stronger, useable at much higher temperatures, provide less acoustic transmission, and enable more aeroelastic tailoring than those currently used. Significant improvements in properties can only be achieved by processing the materials under nonequilibrium conditions, such as AAM processes. AAM processes encompass a class of processes that use a focused heat source to create a melt pool on a substrate. Examples include Electron Beam Freeform Fabrication and Direct Metal Deposition. These types of additive processes enable fabrication of parts directly from CAD drawings. To achieve the desired material properties and geometries of the final structure, assessing the impact of process parameters and predicting optimized conditions with numerical modeling as an effective prediction tool is necessary. The targets for the processing are multiple and at different spatial scales, and the physical phenomena associated occur in multiphysics and multiscale. In this project, the research work has been developed to model AAM processes in a multiscale and multiphysics approach. A macroscale model was developed to investigate the residual stresses and distortion in AAM processes. A sequentially coupled, thermomechanical, finite element model was developed and validated experimentally. The results showed the temperature distribution, residual stress, and deformation within the formed deposits and substrates. A mesoscale model was developed to include heat transfer, phase change with mushy zone, incompressible free surface flow, solute redistribution, and surface tension. Because of excessive computing time needed, a parallel computing approach was also tested. 

  18. Large Scale Commodity Clusters for Lattice QCD

    SciTech Connect

    A. Pochinsky; W. Akers; R. Brower; J. Chen; P. Dreher; R. Edwards; S. Gottlieb; D. Holmgren; P. Mackenzie; J. Negele; D. Richards; J. Simone; W. Watson

    2002-06-01

    We describe the construction of large scale clusters for lattice QCD computing being developed under the umbrella of the U.S. DoE SciDAC initiative. We discuss the study of floating point and network performance that drove the design of the cluster, and present our plans for future multi-Terascale facilities.

  19. Management of large-scale technology

    NASA Technical Reports Server (NTRS)

    Levine, A.

    1985-01-01

    Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high-technology agency was managed through a decade marked by a rapid expansion of funds and manpower in the first half and an almost as rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.

  20. A Large Scale Computer Terminal Output Controller.

    ERIC Educational Resources Information Center

    Tucker, Paul Thomas

    This paper describes the design and implementation of a large scale computer terminal output controller which supervises the transfer of information from a Control Data 6400 Computer to a PLATO IV data network. It discusses the cost considerations leading to the selection of educational television channels rather than telephone lines for…

  1. Large-scale CFB combustion demonstration project

    SciTech Connect

    Nielsen, P.T.; Hebb, J.L.; Aquino, R.

    1998-07-01

    The Jacksonville Electric Authority's large-scale CFB demonstration project is described. Given the early stage of project development, the paper focuses on the project organizational structure, its role within the Department of Energy's Clean Coal Technology Demonstration Program, and the projected environmental performance. A description of the CFB combustion process is included.

  2. Large-scale CFB combustion demonstration project

    SciTech Connect

    Nielsen, P.T.; Hebb, J.L.; Aquino, R.

    1998-04-01

    The Jacksonville Electric Authority's large-scale CFB demonstration project is described. Given the early stage of project development, the paper focuses on the project organizational structure, its role within the Department of Energy's Clean Coal Technology Demonstration Program, and the projected environmental performance. A description of the CFB combustion process is included.

  3. Evaluating Large-Scale Interactive Radio Programmes

    ERIC Educational Resources Information Center

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  4. ARPACK: Solving large scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Lehoucq, Rich; Maschhoff, Kristi; Sorensen, Danny; Yang, Chao

    2013-11-01

    ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems. The package is designed to compute a few eigenvalues and corresponding eigenvectors of a general n by n matrix A. It is most appropriate for large sparse or structured matrices A where structured means that a matrix-vector product w
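ARPACK's implicitly restarted Arnoldi/Lanczos routines are exposed in Python through SciPy, which gives a quick way to try the "few eigenvalues of a large sparse matrix" workflow the package targets. A minimal sketch (the test matrix, a 1-D discrete Laplacian, is my choice for illustration; `eigsh` is SciPy's wrapper around ARPACK's symmetric driver):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh   # SciPy's interface to ARPACK

# A large sparse symmetric matrix: 1-D discrete Laplacian, 2000 x 2000,
# stored tridiagonally so it is never formed densely.
n = 2000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

# Ask ARPACK for the 4 eigenvalues of largest magnitude; the algorithm
# only ever needs matrix-vector products with A.
vals = eigsh(A, k=4, which="LM", return_eigenvectors=False)

# Analytically the eigenvalues are 2 - 2*cos(k*pi/(n+1)), so the largest
# approach 4 from below.
print(np.sort(vals))
```

Because only matrix-vector products are required, the same call scales to matrices far too large for dense eigensolvers.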

  5. Space Technology Mission Directorate Game Changing Development Program FY2015 Annual Program Review: Advanced Manufacturing Technology

    NASA Technical Reports Server (NTRS)

    Vickers, John; Fikes, John

    2015-01-01

    The Advanced Manufacturing Technology (AMT) Project supports multiple activities within the Administration's National Manufacturing Initiative. A key component of the Initiative is the Advanced Manufacturing National Program Office (AMNPO), which includes participation from all federal agencies involved in U.S. manufacturing. In support of the AMNPO, the AMT Project supports building and growing the National Network for Manufacturing Innovation through a public-private partnership designed to help the industrial community accelerate manufacturing innovation. Integration with other projects/programs and partnerships: STMD (Space Technology Mission Directorate), HEOMD, other Centers; Industry, Academia; OGAs (e.g., DOD, DOE, DOC, USDA, NASA, NSF); Office of Science and Technology Policy, NIST Advanced Manufacturing Program Office; Generate insight within NASA and cross-agency for technology development priorities and investments. Technology Infusion Plan: PC; Potential customer infusion (TDM, HEOMD, SMD, OGA, Industry); Leverage; Collaborate with other Agencies, Industry and Academia; NASA roadmap. Initiatives include: Advanced Near Net Shape Technology Integrally Stiffened Cylinder Process Development (launch vehicles, sounding rockets); Materials Genome; Low Cost Upper Stage-Class Propulsion; Additive Construction with Mobile Emplacement (ACME); National Center for Advanced Manufacturing.

  6. Advanced modeling and simulation to design and manufacture high performance and reliable advanced microelectronics and microsystems.

    SciTech Connect

    Nettleship, Ian (University of Pittsburgh, Pittsburgh, PA); Hinklin, Thomas; Holcomb, David Joseph; Tandon, Rajan; Arguello, Jose Guadalupe, Jr.; Dempsey, James Franklin; Ewsuk, Kevin Gregory; Neilsen, Michael K.; Lanagan, Michael (Pennsylvania State University, University Park, PA)

    2007-07-01

    An interdisciplinary team of scientists and engineers having broad expertise in materials processing and properties, materials characterization, and computational mechanics was assembled to develop science-based modeling/simulation technology to design and reproducibly manufacture high performance and reliable, complex microelectronics and microsystems. The team's efforts focused on defining and developing a science-based infrastructure to enable predictive compaction, sintering, stress, and thermomechanical modeling in ''real systems'', including: (1) developing techniques to determine the materials properties and constitutive behavior required for modeling; (2) developing new, improved/updated models and modeling capabilities; (3) ensuring that models are representative of the physical phenomena being simulated; and (4) assessing existing modeling capabilities to identify advances necessary to facilitate the practical application of Sandia's predictive modeling technology.

  7. Fractals and cosmological large-scale structure

    NASA Technical Reports Server (NTRS)

    Luo, Xiaochun; Schramm, David N.

    1992-01-01

    Observations of galaxy-galaxy and cluster-cluster correlations as well as other large-scale structure can be fit with a 'limited' fractal with dimension D of about 1.2. This is not a 'pure' fractal out to the horizon: the distribution shifts from power law to random behavior at some large scale. If the observed patterns and structures are formed through an aggregation growth process, the fractal dimension D can serve as an interesting constraint on the properties of the stochastic motion responsible for limiting the fractal structure. In particular, it is found that the observed fractal should have grown from two-dimensional sheetlike objects such as pancakes, domain walls, or string wakes. This result is generic and does not depend on the details of the growth process.
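The fractal (correlation) dimension D quoted here is typically estimated from the scaling of the correlation integral, C(r) ∝ r^D. A small sketch on synthetic data (points on a line segment embedded in 3-D, so the recovered dimension should be close to 1; purely illustrative, not the galaxy catalogs):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic point set with known fractal dimension: points distributed on a
# 1-D segment embedded in 3-D space (correlation dimension D = 1).
t = rng.uniform(0.0, 1.0, size=1500)
points = np.outer(t, np.array([1.0, 2.0, 2.0])) / 3.0   # unit-length segment

# All pairwise distances (upper triangle only, to count each pair once).
diff = points[:, None, :] - points[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))
d = dist[np.triu_indices(len(points), k=1)]

# Correlation integral C(r): fraction of pairs closer than r. The dimension
# D is the slope of log C(r) vs log r over the scaling range.
radii = np.logspace(-2, -1, 10)          # r from 0.01 to 0.1
c = np.array([(d < r).mean() for r in radii])
D, _ = np.polyfit(np.log(radii), np.log(c), 1)

print(round(D, 2))   # close to 1 for a line-like set
```

The same estimator applied to sheetlike (pancake) distributions would give D near 2, which is the kind of discrimination underlying the paper's argument.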

  8. Condition Monitoring of Large-Scale Facilities

    NASA Technical Reports Server (NTRS)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  9. The ergonomics of computer aided design within advanced manufacturing technology.

    PubMed

    John, P A

    1988-03-01

    Many manufacturing companies have now awakened to the significance of computer aided design (CAD), although the majority of them have only been able to purchase computerised draughting systems of which only a subset produce direct manufacturing data. Such companies are moving steadily towards the concept of computer integrated manufacture (CIM), and this demands CAD to address more than draughting. CAD architects are thus having to rethink the basic specification of such systems, although they typically suffer from an insufficient understanding of the design task and have consequently been working with inadequate specifications. It is at this fundamental level that ergonomics has much to offer, making its contribution by encouraging user-centred design. The discussion considers the relationships between CAD and: the design task; the organisation and people; creativity; and artificial intelligence. It finishes with a summary of the contribution of ergonomics. PMID:15676646

  10. Transfer of advanced manufacturing technologies to eastern Kentucky industries

    SciTech Connect

    Gillies, J.A.; Kruzich, R.

    1988-05-01

    This study concludes that there are opportunities to provide assistance in the adoption of manufacturing technologies for small- and medium-sized firms in eastern Kentucky. However, the new markets created by Toyota are not adequate to justify a directed technology transfer program targeting the auto supply industry in eastern Kentucky because supplier markets have been determined for some time, and manufacturers in eastern Kentucky were not competitive in this early selection process. The results of the study strongly reinforce a reorientation of state business-assistance programs. The study also concludes that the quality and quantity of available labor is a pervasive problem in eastern Kentucky and has particular relevance as the economy changes. The study also investigated what type of technology-transfer programs would be appropriate to assist manufacturing firms in eastern Kentucky and if there were a critical number of firms to make such a program feasible.

  11. Cost analysis of advanced turbine blade manufacturing processes

    NASA Technical Reports Server (NTRS)

    Barth, C. F.; Blake, D. E.; Stelson, T. S.

    1977-01-01

    A rigorous analysis was conducted to estimate relative manufacturing costs for high technology gas turbine blades prepared by three candidate materials process systems. The manufacturing costs for the same turbine blade configuration of directionally solidified eutectic alloy, an oxide dispersion strengthened superalloy, and a fiber reinforced superalloy were compared on a relative basis to the costs of the same blade currently in production utilizing the directional solidification process. An analytical process cost model was developed to quantitatively perform the cost comparisons. The impact of individual process yield factors on costs was also assessed as well as effects of process parameters, raw materials, labor rates and consumable items.
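    The abstract's point about yield factors can be made concrete with a toy roll-up: a part scrapped late in the process wastes everything spent on it so far, so per-step yields compound into the cost of each good blade. The step names, costs, and yields below are hypothetical, not values from the study.

```python
# Illustrative process-cost roll-up with yield factors (all numbers assumed).
steps = [  # (step name, cost added at this step, yield of this step)
    ("casting",   100.0, 0.80),
    ("machining",  60.0, 0.95),
    ("coating",    40.0, 0.90),
]

total_cost_per_start = 0.0  # expected spend per blade started
survival = 1.0              # fraction of blades reaching the current step
for name, cost, step_yield in steps:
    total_cost_per_start += cost * survival  # only surviving parts incur this cost
    survival *= step_yield                   # some parts are scrapped here

# Dividing by the end-to-end yield gives the cost per *good* blade.
cost_per_good_blade = total_cost_per_start / survival
print(round(cost_per_good_blade, 2))
```

    Lowering the first-step yield is the most expensive kind of loss in this model, which is why the study's sensitivity analysis of individual process yield factors matters.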

  12. Impact of Parallel Computing on Large Scale Aeroelastic Computations

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    Aeroelasticity is computationally one of the most intensive fields in aerospace engineering. Though over the last three decades the computational speeds of supercomputers have substantially increased, they are still inadequate for large scale aeroelastic computations using high fidelity flow and structural equations. In addition to computational speed reaching a saturation because of changes in economics, computer manufacturers are ceasing production of mainframe type supercomputers. This has led computational aeroelasticians to face the gigantic task of finding alternate approaches for fulfilling their needs. The alternate path to overcome the speed and availability limitations of mainframe type supercomputers is to use parallel computers. During this decade several different architectures have evolved. In FY92 the US Government started the High Performance Computing and Communication (HPCC) program. As a participant in this program NASA developed several parallel computational tools for aeroelastic applications. This talk describes the impact of those application tools on high fidelity based multidisciplinary analysis.

  13. Large scale processes in the solar nebula.

    NASA Astrophysics Data System (ADS)

    Boss, A. P.

    Most proposed chondrule formation mechanisms involve processes occurring inside the solar nebula, so the large scale (roughly 1 to 10 AU) structure of the nebula is of general interest for any chondrule-forming mechanism. Chondrules and Ca, Al-rich inclusions (CAIs) might also have been formed as a direct result of the large scale structure of the nebula, such as passage of material through high temperature regions. While recent nebula models do predict the existence of relatively hot regions, the maximum temperatures in the inner planet region may not be high enough to account for chondrule or CAI thermal processing, unless the disk mass is considerably greater than the minimum mass necessary to restore the planets to solar composition. Furthermore, it does not seem to be possible to achieve both rapid heating and rapid cooling of grain assemblages in such a large scale furnace. However, if the accretion flow onto the nebula surface is clumpy, as suggested by observations of variability in young stars, then clump-disk impacts might be energetic enough to launch shock waves which could propagate through the nebula to the midplane, thermally processing any grain aggregates they encounter, and leaving behind a trail of chondrules.

  14. Large-scale extraction of proteins.

    PubMed

    Cunha, Teresa; Aires-Barros, Raquel

    2002-01-01

    The production of foreign proteins using selected hosts with the necessary posttranslational modifications is one of the key successes of modern biotechnology. This methodology allows the industrial production of proteins that otherwise are produced in small quantities. However, the separation and purification of these proteins from the fermentation media constitutes a major bottleneck for the widespread commercialization of recombinant proteins. The major production costs (50-90%) for a typical biological product reside in the purification strategy. There is a need for efficient, effective, and economic large-scale bioseparation techniques to achieve high purity and high recovery while maintaining the biological activity of the molecule. Aqueous two-phase systems (ATPS) allow process integration, as simultaneous separation and concentration of the target protein is achieved, with subsequent removal and recycling of the polymer. The ease of scale-up combined with the high partition coefficients obtained allows their potential application in large-scale downstream processing of proteins produced by fermentation. The equipment and methodology for aqueous two-phase extraction of proteins on a large scale using mixer-settler and column contactors are described. The operation of the columns, either stagewise or differential, is summarized. A brief description of the methods used to determine the mass transfer coefficients, the hydrodynamic parameters of hold-up, drop size, and velocity, back mixing in the phases, and the flooding performance required for column design is also provided. PMID:11876297
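    The partition coefficient mentioned above drives single-stage recovery directly. With K = C_top/C_bottom and phase volumes V_top and V_bot, a mass balance gives the fraction of protein recovered in the top (polymer) phase as Y_top = 1 / (1 + V_bot / (K V_top)). The numbers below are illustrative, not from the chapter.

```python
# Single-stage aqueous two-phase extraction recovery (hypothetical values).
K = 8.0      # partition coefficient C_top / C_bottom for the target protein
V_top = 1.0  # top-phase volume (arbitrary units)
V_bot = 1.0  # bottom-phase volume

# Mass balance: Y_top = K*V_top*C_bot / (K*V_top*C_bot + V_bot*C_bot)
Y_top = 1.0 / (1.0 + V_bot / (K * V_top))
print(round(Y_top, 3))  # fraction of protein recovered in one stage
```

    A high K recovers most of the product in one stage; when K is modest, the stagewise column contactors described above are used to repeat the equilibration and raise overall recovery.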

  15. Regional Advanced Manufacturing Academy: An Agent of Change

    ERIC Educational Resources Information Center

    Schmeling, Daniel M.; Rose, Kevin

    2010-01-01

    Three Northeast Texas community colleges put aside service delivery areas and matters of "turf" to create Centers of Excellence that provided training throughout a nine-county area. This consortium, along with 14 manufacturers, seven economic development corporations, and the regional workforce board, led the change in training a highly skilled…

  16. Innovation Training within the Australian Advanced Manufacturing Industry

    ERIC Educational Resources Information Center

    Donovan, Jerome Denis; Maritz, Alex; McLellan, Andrew

    2013-01-01

    Innovation has emerged as a core driver for the future profitability and success of the manufacturing sector, and increasingly both governments and the private sector are examining ways to support the development of innovation capabilities within organisations. In this research, we have evaluated a government-funded innovation training course…

  17. Advanced excimer laser technologies enable green semiconductor manufacturing

    NASA Astrophysics Data System (ADS)

    Fukuda, Hitomi; Yoo, Youngsun; Minegishi, Yuji; Hisanaga, Naoto; Enami, Tatsuo

    2014-03-01

    "Green" has fast become an important and pervasive topic throughout many industries worldwide. Many companies, especially in the manufacturing industries, have taken steps to integrate green initiatives into their high-level corporate strategies. Governments have also been active in implementing various initiatives designed to increase corporate responsibility and accountability towards environmental issues. In the semiconductor manufacturing industry, there are growing concerns over future environmental impact as enormous fabs expand and new generation of equipments become larger and more powerful. To address these concerns, Gigaphoton has implemented various green initiatives for many years under the EcoPhoton™ program. The objective of this program is to drive innovations in technology and services that enable manufacturers to significantly reduce both the financial and environmental "green cost" of laser operations in high-volume manufacturing environment (HVM) - primarily focusing on electricity, gas and heat management costs. One example of such innovation is Gigaphoton's Injection-Lock system, which reduces electricity and gas utilization costs of the laser by up to 50%. Furthermore, to support the industry's transition from 300mm to the next generation 450mm wafers, technologies are being developed to create lasers that offer double the output power from 60W to 120W, but reducing electricity and gas consumption by another 50%. This means that the efficiency of lasers can be improve by up to 4 times in 450mm wafer production environments. Other future innovations include the introduction of totally Heliumfree Excimer lasers that utilize Nitrogen gas as its replacement for optical module purging. This paper discusses these and other innovations by Gigaphoton to enable green manufacturing.

  18. Large-scale data mining pilot project in human genome

    SciTech Connect

    Musick, R.; Fidelis, R.; Slezak, T.

    1997-05-01

    This whitepaper briefly describes a new, aggressive effort in large-scale data mining at Livermore National Labs. The implications of `large-scale` will be clarified in a later section. In the short term, this effort will focus on several mission-critical questions of the Genome project. We will adapt current data mining techniques to the Genome domain, quantify the accuracy of inference results, and lay the groundwork for a more extensive effort in large-scale data mining. A major aspect of the approach is that we will build on a fully-staffed data warehousing effort in the human Genome area. The long term goal is a strong applications-oriented research program in large-scale data mining. The tools and skill set gained will be directly applicable to a wide spectrum of tasks involving a need for large spatial and multidimensional data. This includes applications in ensuring non-proliferation, stockpile stewardship, enabling Global Ecology (Materials Database Industrial Ecology), advancing the Biosciences (Human Genome Project), and supporting data for others (Battlefield Management, Health Care).

  19. How Large Scale Flows May Influence Solar Activity

    NASA Technical Reports Server (NTRS)

    Hathaway, D. H.

    2004-01-01

    Large scale flows within the solar convection zone are the primary drivers of the Sun's magnetic activity cycle and play important roles in shaping the Sun's magnetic field. Differential rotation amplifies the magnetic field through its shearing action and converts poloidal field into toroidal field. Poleward meridional flow near the surface carries magnetic flux that reverses the magnetic poles at about the time of solar maximum. The deeper, equatorward meridional flow can carry magnetic flux back toward the lower latitudes where it erupts through the surface to form tilted active regions that convert toroidal fields into oppositely directed poloidal fields. These axisymmetric flows are themselves driven by large scale convective motions. The effects of the Sun's rotation on convection produce velocity correlations that can maintain both the differential rotation and the meridional circulation. These convective motions can also influence solar activity directly by shaping the magnetic field pattern. While considerable theoretical advances have been made toward understanding these large scale flows, outstanding problems in matching theory to observations still remain.

  20. Advances in infrastructure support for flat panel display manufacturing

    NASA Astrophysics Data System (ADS)

    Bardsley, James N.; Ciesinski, Michael F.; Pinnel, M. Robert

    1997-07-01

    The success of the US display industry, both in providing high-performance displays for the US Department of Defense at reasonable cost and in capturing a significant share of the global civilian market, depends on maintaining technological leadership and on building efficient manufacturing capabilities. The US Display Consortium (USDC) was set up in 1993 by the US Government and private industry to guide the development of the infrastructure needed to support the manufacturing of flat panel displays. This mainly involves the supply of equipment and materials, but also includes the formation of partnerships and the training of a skilled labor force. Examples are given of successful development projects, some involving USDC participation, others through independent efforts of its member companies. These examples show that US-based companies can achieve leadership positions in this young and rapidly growing global market.

  1. Advanced manufacturing of SIMOX for low power electronics

    NASA Astrophysics Data System (ADS)

    Alles, Michael; Krull, Wade

    1996-04-01

    Silicon-on-insulator (SOI) has emerged as a key technology for low power electronics. The merits of SOI technology have been demonstrated, and are gaining acceptance in the semiconductor industry. In order for the SOI approach to be viable, several factors must converge, including the availability of SOI substrates in sufficient quantity, of acceptable quality, and at a competitive price. This work describes developments in SIMOX manufacturing technology and summarizes progress in each of these areas.

  2. Strategic methodology for advancing food manufacturing waste management paradigms

    NASA Astrophysics Data System (ADS)

    Rosentrater, Kurt A.

    2004-12-01

    As manufacturing industries become more cognizant of the ecological effects that their firms have on the surrounding environment, their waste streams are increasingly becoming viewed not as materials in need of disposal, but rather as resources that can be reused, recycled, or reprocessed into valuable products. Within the food processing sector there are many examples of value-added use of processing residues, although many of these focus solely on utilization as livestock feed ingredients. In addition to livestock feed, though, many other potential avenues exist for food processing waste streams, including food grade as well as industrial products. Unfortunately, the challenge to food processors is actually conducting the byproduct development work. In fact, no clear delineation exists that describes necessary components for an effective byproduct development program. This paper describes one such strategic methodology that could help fill this void. It consists of identifying, quantifying, characterizing, developing, analyzing, optimizing, and modeling the waste stream of interest. This approach to byproduct development represents an inclusive strategy that can be used to more effectively implement value-added utilization programs. Not only is this methodology applicable to food processing operations, but any industrial or manufacturing firm could benefit from instituting the formal components described here. Thus, this methodology, if implemented by a manufacturer, could hold the potential for increasing the probability of meeting the goals of industrial ecology, namely, that of developing and operating sustainable systems.

  3. Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect

    Willcox, Karen; Marzouk, Youssef

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their computational gains.

  4. A novel precision face grinder for advanced optic manufacture

    NASA Astrophysics Data System (ADS)

    Guo, Y.; Peng, Y.; Wang, Z.; Yang, W.; Bi, G.; Ke, X.; Lin, X.

    2010-10-01

    In this paper, a large-scale NC precision face grinding machine is developed. This grinding machine can be used for the precision machining of brittle materials. The base and the machine body are independent, and the whole structure is configured as a "T" type. The vertical column is seated onto the machine body at the middle center part through a pair of precision guide rails. The grinding wheel is driven by a hybrid hydrodynamic/hydrostatic spindle. The worktable is supported on novel split thin-film throttle hydrostatic guide rails. Each motion axis of the grinding machine is equipped with a Heidenhain absolute linear encoder, forming a closed-loop feedback control system with the adopted Fanuc 0i-MD NC system. The machine is capable of machining extremely flat surfaces on workpieces up to 800mm x 600mm. The maximum load capacity of the worktable is 620kg. Furthermore, the roughness of the machined surfaces should be smooth (Ra<50nm-100nm), and the form accuracy less than 2μm (+/-1μm)/200x200mm. After the assembly and debugging of the surface grinding machine, the worktable surface was self-ground with a 60# grinding wheel, achieving a form accuracy of 3μm/600mm×800mm. Grinding experiments were then conducted on a BK7 flat optical glass element (400mm x 250mm) and a ceramic disc (Φ100mm) with a 60# grinding wheel; the measured surface roughness and form accuracy of the optical glass element are 0.07μm and 1.56μm/200x200mm, and those of the ceramic disc are 0.52μm and 1.28μm, respectively.

  5. Large-Scale PV Integration Study

    SciTech Connect

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  6. Neutrinos and large-scale structure

    SciTech Connect

    Eisenstein, Daniel J.

    2015-07-15

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments measuring the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we have now securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  7. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  8. Colloquium: Large scale simulations on GPU clusters

    NASA Astrophysics Data System (ADS)

    Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano

    2015-06-01

    Graphics processing units (GPU) are currently used as a cost-effective platform for computer simulations and big-data processing. Large scale applications require that multiple GPUs work together, but the efficiency obtained with clusters of GPUs is, at times, sub-optimal because the GPU features are not exploited at their best. We describe how it is possible to achieve an excellent efficiency for applications in statistical mechanics, particle dynamics and network analysis by using suitable memory access patterns and mechanisms like CUDA streams, profiling tools, etc. Similar concepts and techniques may also be applied to other problems, such as the solution of partial differential equations.

  9. Nonthermal Components in the Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Miniati, Francesco

    2004-12-01

    I address the issue of nonthermal processes in the large scale structure of the universe. After reviewing the properties of cosmic shocks and their role as particle accelerators, I discuss the main observational results, from radio to γ-ray, and describe the processes that are thought to be responsible for the observed nonthermal emissions. Finally, I emphasize the important role of γ-ray astronomy for progress in the field. Non-detections at these photon energies have already allowed us to draw important conclusions. Future observations will tell us more about the physics of the intracluster medium, shock dissipation and CR acceleration.

  10. Large scale phononic metamaterials for seismic isolation

    SciTech Connect

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could exhibit band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, suggesting that they could be serious candidates for seismic isolation structures. Different, easy-to-fabricate structures made from construction materials such as concrete and steel were examined. The well-known finite difference time domain method is used in our calculations to compute the band structures of the proposed metamaterials.

  11. Advanced computational research in materials processing for design and manufacturing

    SciTech Connect

    Zacharia, T.

    1995-04-01

    Advanced mathematical techniques and computer simulation play a major role in providing enhanced understanding of conventional and advanced materials processing operations. Development and application of mathematical models and computer simulation techniques can provide a quantitative understanding of materials processes and will minimize the need for expensive and time consuming trial-and-error-based product development. As computer simulations and materials databases grow in complexity, high performance computing and simulation are expected to play a key role in supporting the improvements required in advanced material syntheses and processing by lessening the dependence on expensive prototyping and re-tooling. Many of these numerical models are highly compute-intensive. It is not unusual for an analysis to require several hours of computational time on current supercomputers despite the simplicity of the models being studied. For example, to accurately simulate the heat transfer in a 1-m{sup 3} block using a simple computational method requires 10{sup 12} arithmetic operations per second of simulated time. For a computer to do the simulation in real time would require a sustained computation rate 1000 times faster than that achievable by current supercomputers. Massively parallel computer systems, which combine several thousand processors able to operate concurrently on a problem, are expected to provide orders of magnitude increase in performance. This paper briefly describes advanced computational research in materials processing at ORNL. Continued development of computational techniques and algorithms utilizing the massively parallel computers will allow the simulation of conventional and advanced materials processes in sufficient generality.
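    The operation count quoted above can be reproduced with a back-of-envelope calculation. The grid resolution, per-cell operation count, and time step below are assumptions (the abstract does not state a discretization), chosen so the reader can see how quickly a simple explicit stencil scheme reaches 10^12 operations per simulated second.

```python
# Back-of-envelope operation count for explicit heat-transfer simulation
# of a 1 m^3 block (discretization parameters are assumed, not from the text).
cells = (1.0 / 1e-3) ** 3          # 1 m^3 at 1 mm resolution -> 1e9 cells
ops_per_cell_update = 10           # a simple explicit stencil update
steps_per_simulated_second = 100   # assumed stable time step of 10 ms

ops_per_simulated_second = cells * ops_per_cell_update * steps_per_simulated_second
print(f"{ops_per_simulated_second:.0e}")  # 1e+12
```

    Running this in real time would need a sustained 10^12 operations per second, which is the factor-of-1000 gap over the supercomputers of the time that the text describes.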

  12. Part A - Advanced turbine systems. Part B - Materials/manufacturing element of the Advanced Turbine Systems Program

    SciTech Connect

    Karnitz, M.A.

    1996-06-01

    The DOE Offices of Fossil Energy and Energy Efficiency and Renewable Energy have initiated a program to develop advanced turbine systems for power generation. The objective of the Advanced Turbine Systems (ATS) Program is to develop ultra-high efficiency, environmentally superior, and cost competitive gas turbine systems for utility and industrial applications. One of the supporting elements of the ATS Program is the Materials/Manufacturing Technologies Task. The objective of this element is to address the critical materials and manufacturing issues for both industrial and utility gas turbines.

  13. Analysis of the influence of advanced materials for aerospace products R&D and manufacturing cost

    NASA Astrophysics Data System (ADS)

    Shen, A. W.; Guo, J. L.; Wang, Z. J.

    2015-12-01

    In this paper, we point out the deficiencies of traditional cost estimation models for aerospace product Research & Development (R&D) and manufacturing, based on an analysis of the widespread use of advanced materials in aviation products. We then propose estimating formulas for cost factors that represent the influence of advanced materials on the labor cost rate and the manufacturing materials cost rate. Value ranges for common advanced materials such as composite materials and titanium alloy are presented for both the labor and materials aspects. Finally, we estimate the R&D and manufacturing costs of the F/A-18, F/A-22, B-1B and B-2 aircraft using both the common DAPCA IV model and the modified model proposed in this paper. The calculation results show that estimation accuracy is greatly improved by the proposed method, which accounts for advanced materials, indicating that the method is sound and reasonable.
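    The general idea of a material cost factor can be sketched as follows. This is not the paper's actual formula: the linear form, the factor value, and all numbers are hypothetical, illustrating only how a baseline cost-model output might be scaled by the advanced-material fraction of an airframe.

```python
# Hypothetical material cost factor (assumed linear form, assumed values).
def material_cost_factor(frac_advanced, factor_at_full=1.8):
    """1.0 for an all-conventional airframe, factor_at_full when the
    airframe is 100% advanced materials (linear interpolation)."""
    return 1.0 + (factor_at_full - 1.0) * frac_advanced

base_labor_cost = 100.0e6  # baseline cost-model output, $ (illustrative)
frac_composite = 0.25      # e.g. 25% advanced materials by weight (assumed)

adjusted = base_labor_cost * material_cost_factor(frac_composite)
print(round(adjusted / 1e6, 1))  # adjusted labor cost, $M
```

    The paper's contribution is essentially calibrating such factors, separately for labor rate and materials rate, against aircraft with known advanced-material content.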

  14. Manufacturing Aspects of Advanced Polymer Composites for Automotive Applications

    NASA Astrophysics Data System (ADS)

    Friedrich, Klaus; Almajid, Abdulhakim A.

    2013-04-01

    Composite materials, in most cases fiber reinforced polymers, are nowadays used in many applications in which light weight and high specific modulus and strength are critical issues. The constituents of these materials and their special advantages relative to traditional materials are described in this paper. Further details are outlined regarding the present markets of polymer composites in Europe, and their special application in the automotive industry. In particular, the manufacturing of parts from thermoplastic as well as thermosetting, short and continuous fiber reinforced composites is emphasized.

  15. Large-scale Globally Propagating Coronal Waves

    NASA Astrophysics Data System (ADS)

    Warmuth, Alexander

    2015-09-01

    Large-scale, globally propagating wave-like disturbances have been observed in the solar chromosphere and by inference in the corona since the 1960s. However, detailed analysis of these phenomena has only been conducted since the late 1990s. This was prompted by the availability of high-cadence coronal imaging data from numerous spaced-based instruments, which routinely show spectacular globally propagating bright fronts. Coronal waves, as these perturbations are usually referred to, have now been observed in a wide range of spectral channels, yielding a wealth of information. Many findings have supported the "classical" interpretation of the disturbances: fast-mode MHD waves or shocks that are propagating in the solar corona. However, observations that seemed inconsistent with this picture have stimulated the development of alternative models in which "pseudo waves" are generated by magnetic reconfiguration in the framework of an expanding coronal mass ejection. This has resulted in a vigorous debate on the physical nature of these disturbances. This review focuses on demonstrating how the numerous observational findings of the last one and a half decades can be used to constrain our models of large-scale coronal waves, and how a coherent physical understanding of these disturbances is finally emerging.

  16. Local gravity and large-scale structure

    NASA Technical Reports Server (NTRS)

    Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.

    1990-01-01

    The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.
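    The convergence measures discussed above rest on the linear-theory dipole estimator: the Local Group peculiar velocity v(R) is proportional to the vector sum of r̂/r² over galaxies within radius R. The following is a minimal sketch of that sum; the toy catalog, units, and omitted prefactor are illustrative assumptions, not the IRAS data used in the paper.

```python
import math

# Linear-theory dipole estimator: v(R) is proportional to the vector sum of
# r_hat / r^2 over galaxies within radius R of the observer.  Catalog and
# units here are illustrative assumptions only.
def dipole_sum(galaxies, R):
    """Return the vector sum of r_hat / r^2 for galaxies with 0 < r < R."""
    D = [0.0, 0.0, 0.0]
    for gx, gy, gz in galaxies:
        r = math.sqrt(gx * gx + gy * gy + gz * gz)
        if 0.0 < r < R:
            w = 1.0 / r ** 3  # r_hat / r^2 rewritten as (position vector) / r^3
            D[0] += w * gx
            D[1] += w * gy
            D[2] += w * gz
    return D

# A single "galaxy" at distance 10 along x pulls the observer toward +x.
D = dipole_sum([(10.0, 0.0, 0.0)], R=100.0)
```

    Convergence of v(R) is then probed by recomputing the sum for increasing R and checking whether its direction and amplitude stabilize, which is the test the paper proposes for deep enough samples.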

  17. Large-scale smart passive system for civil engineering applications

    NASA Astrophysics Data System (ADS)

    Jung, Hyung-Jo; Jang, Dong-Doo; Lee, Heon-Jae; Cho, Sang-Won

    2008-03-01

    The smart passive system consisting of a magnetorheological (MR) damper and an electromagnetic induction (EMI) part has recently been proposed. An EMI part can generate the input current for an MR damper from the vibration of a structure according to Faraday's law of electromagnetic induction. The control performance of the smart passive system has been demonstrated mainly by numerical simulations. It was verified from the numerical results that the system could be effective in reducing the structural responses of civil engineering structures such as buildings and bridges. On the other hand, the system has not yet been sufficiently validated experimentally. In this paper, the feasibility of applying the smart passive system to real-scale structures is investigated. To do this, a large-scale smart passive system is designed, manufactured, and tested. The system consists of a large-capacity MR damper, which has a maximum force level of approximately +/-10,000 N, a maximum stroke level of +/-35 mm and a maximum current level of 3 A, and a large-scale EMI part, which is designed to generate sufficient induced current for the damper. The applicability of the smart passive system to large real-scale structures is examined through a series of shaking table tests. The magnitudes of the induced current of the EMI part under various sinusoidal excitation inputs are measured. According to the test results, the large-scale EMI part shows that it can generate sufficient current or power to change the damping characteristics of the large-capacity MR damper.

  18. Foundational perspectives on causality in large-scale brain networks.

    PubMed

    Mannino, Michael; Bressler, Steven L

    2015-12-01

    likelihood that a change in the activity of one neuronal population affects the activity in another. We argue that these measures access the inherently probabilistic nature of causal influences in the brain, and are thus better suited for large-scale brain network analysis than are DC-based measures. Our work is consistent with recent advances in the philosophical study of probabilistic causality, which originated from inherent conceptual problems with deterministic regularity theories. It also resonates with concepts of stochasticity that were involved in establishing modern physics. In summary, we argue that probabilistic causality is a conceptually appropriate foundation for describing neural causality in the brain. PMID:26429630

  19. Foundational perspectives on causality in large-scale brain networks

    NASA Astrophysics Data System (ADS)

    Mannino, Michael; Bressler, Steven L.

    2015-12-01

    likelihood that a change in the activity of one neuronal population affects the activity in another. We argue that these measures access the inherently probabilistic nature of causal influences in the brain, and are thus better suited for large-scale brain network analysis than are DC-based measures. Our work is consistent with recent advances in the philosophical study of probabilistic causality, which originated from inherent conceptual problems with deterministic regularity theories. It also resonates with concepts of stochasticity that were involved in establishing modern physics. In summary, we argue that probabilistic causality is a conceptually appropriate foundation for describing neural causality in the brain.

  20. Large scale water lens for solar concentration.

    PubMed

    Mondol, A S; Vogel, B; Bastian, G

    2015-06-01

    Properties of large scale water lenses for solar concentration were investigated. These lenses were built from readily available materials: normal tap water and hyper-elastic linear low-density polyethylene foil. With the lenses exposed to sunlight, the focal lengths and light intensities in the focal spot were measured and calculated. Their optical properties were modeled with raytracing software based on the lens shape. We achieved a good match between experimental and theoretical data by considering the wavelength-dependent concentration factor, absorption and focal length. The change in light concentration as a function of water volume was examined via the resulting load on the foil and the corresponding change of shape. The latter was extracted from images and modeled by a finite element simulation. PMID:26072893
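    To first order, the focal length of such a lens follows from the thin-lens approximation. A minimal sketch for an idealized plano-convex water lens is given below; the refractive index and radius of curvature are assumed values, and the paper itself derives the actual lens shape from the foil mechanics and models it with raytracing.

```python
# Thin-lens estimate for an idealized plano-convex water lens: with one flat
# surface, the lensmaker's equation reduces to 1/f = (n - 1) / R.  The
# refractive index and radius values are assumptions for illustration; the
# paper models the measured foil shape with raytracing and finite elements.
def focal_length_plano_convex(R_curvature, n_water=1.333):
    """Focal length of a thin plano-convex lens with curvature radius R."""
    return R_curvature / (n_water - 1.0)

# Example: a 2 m radius of curvature gives a focal length of about 6 m,
# since water's n - 1 is roughly 1/3.
f = focal_length_plano_convex(2.0)
```

    The weak refraction of water (n − 1 ≈ 0.33 versus ≈ 0.5 for common glass) is why these lenses need large curvature radii, and hence large water volumes, to reach usable focal lengths.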

  1. Large Scale Quantum Simulations of Nuclear Pasta

    NASA Astrophysics Data System (ADS)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

    Complex and exotic nuclear geometries collectively referred to as "nuclear pasta" are expected to naturally exist in the crust of neutron stars and in supernova matter. Using a set of self-consistent microscopic nuclear energy density functionals, we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 < ρ < 0.10 fm^-3, proton fractions 0.05

  2. Large-scale simulations of reionization

    SciTech Connect

    Kohler, Katharina; Gnedin, Nickolay Y.; Hamilton, Andrew J.S.; /JILA, Boulder

    2005-11-01

    We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range--from galaxies to rare bright quasars--we need to be able to cover a significant volume of the universe in our simulation without losing the important small-scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small-scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280 h^-1 Mpc with 10 h^-1 Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for HII region sizes, the presence of the Gunn-Peterson trough, and the Lyman-α forest.
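    The sub-cell statistic this approach relies on can be stated compactly: the clumping factor C = ⟨n²⟩/⟨n⟩², measured in a high-resolution run, boosts the effective recombination rate assigned to a coarse radiative-transfer cell. A minimal sketch of the statistic itself follows; the density values are made up for illustration, whereas the paper derives C from dedicated small-scale simulations.

```python
# Clumping factor C = <n^2> / <n>^2, the sub-cell statistic used to fold
# unresolved small-scale structure into coarse radiative-transfer cells.
# (Minimal sketch; the actual values come from high-resolution simulations.)
def clumping_factor(densities):
    """Return <n^2>/<n>^2 for a list of sampled densities within one cell."""
    n = len(densities)
    mean = sum(densities) / n
    mean_sq = sum(d * d for d in densities) / n
    return mean_sq / (mean * mean)

# A uniform cell has C = 1; any clumpiness raises it, and with it the
# effective recombination rate, since recombinations scale as n^2.
uniform = clumping_factor([1.0, 1.0, 1.0, 1.0])
clumpy = clumping_factor([0.1, 0.1, 0.1, 3.7])
```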

  3. Large-scale databases of proper names.

    PubMed

    Conley, P; Burgess, C; Hage, D

    1999-05-01

    Few tools for research in proper names have been available--specifically, there is no large-scale corpus of proper names. Two corpora of proper names were constructed, one based on U.S. phone book listings, the other derived from a database of Usenet text. Name frequencies from both corpora were compared with human subjects' reaction times (RTs) to the proper names in a naming task. Regression analysis showed that the Usenet frequencies contributed to predictions of human RT, whereas phone book frequencies did not. In addition, semantic neighborhood density measures derived from the HAL corpus were compared with the subjects' RTs and found to be a better predictor of RT than was frequency in either corpus. These new corpora are freely available online for download. Potential uses for these corpora range from using the names as stimuli in experiments to using the corpus data in software applications. PMID:10495803

  4. Estimation of large-scale dimension densities.

    PubMed

    Raab, C; Kurths, J

    2001-07-01

    We propose a technique to calculate large-scale dimension densities in both higher-dimensional spatio-temporal systems and low-dimensional systems from only a few data points, where known methods usually have an unsatisfactory scaling behavior. This is mainly due to boundary and finite-size effects. With our rather simple method, we normalize boundary effects and get a significant correction of the dimension estimate. This straightforward approach is based on rather general assumptions. So even weak coherent structures obtained from small spatial couplings can be detected with this method, which is impossible by using the Lyapunov-dimension density. We demonstrate the efficiency of our technique for coupled logistic maps, coupled tent maps, the Lorenz attractor, and the Roessler attractor. PMID:11461376
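    For context, the baseline quantity that such methods refine is the correlation-sum dimension estimate in the style of Grassberger and Procaccia; the paper's contribution is correcting its boundary and finite-size biases. Below is a minimal sketch of the uncorrected estimator only; the scaling radii and sample are illustrative choices, not the paper's method.

```python
import math
import random

# Baseline correlation-sum dimension estimate (Grassberger-Procaccia style).
# The method in the abstract corrects this kind of estimate for boundary and
# finite-size effects; radii and the sample below are illustrative choices.
def correlation_sum(points, r):
    """Fraction of point pairs separated by less than r (Euclidean)."""
    n, close = len(points), 0
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) < r:
                close += 1
    return 2.0 * close / (n * (n - 1))

def dimension_estimate(points, r1, r2):
    """Slope of log C(r) versus log r across one pair of scaling radii."""
    c1, c2 = correlation_sum(points, r1), correlation_sum(points, r2)
    return (math.log(c2) - math.log(c1)) / (math.log(r2) - math.log(r1))

# Points filling the unit square should give an estimate near 2; boundary
# effects -- the bias the paper targets -- pull it slightly below.
random.seed(0)
pts = [(random.random(), random.random()) for _ in range(300)]
d = dimension_estimate(pts, 0.05, 0.2)
```

    With only a few hundred points the raw estimate already shows the boundary-induced downward bias, which is exactly the regime where the proposed normalization matters.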

  5. The challenge of large-scale structure

    NASA Astrophysics Data System (ADS)

    Gregory, S. A.

    1996-03-01

    The tasks that I have assumed for myself in this presentation include three separate parts. The first, appropriate to the particular setting of this meeting, is to review the basic work of the founding of this field; the appropriateness comes from the fact that W. G. Tifft made immense contributions that are not often realized by the astronomical community. The second task is to outline the general tone of the observational evidence for large scale structures. (Here, in particular, I cannot claim to be complete. I beg forgiveness from any workers who are left out by my oversight for lack of space and time.) The third task is to point out some of the major aspects of the field that may represent the clues by which some brilliant sleuth will ultimately figure out how galaxies formed.

  6. Engineering management of large scale systems

    NASA Technical Reports Server (NTRS)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long range perspective. Long range planning has great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of the management of large scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  7. Grid sensitivity capability for large scale structures

    NASA Technical Reports Server (NTRS)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  8. The XMM Large Scale Structure Survey

    NASA Astrophysics Data System (ADS)

    Pierre, Marguerite

    2005-10-01

    We propose to complete, by an additional 5 deg^2, the XMM-LSS Survey region overlying the Spitzer/SWIRE field. This field already has CFHTLS and Integral coverage, and will encompass about 10 deg^2. The resulting multi-wavelength medium-depth survey, which complements XMM and Chandra deep surveys, will provide a unique view of large-scale structure over a wide range of redshift, and will show active galaxies in the full range of environments. The complete coverage by optical and IR surveys provides high-quality photometric redshifts, so that cosmological results can quickly be extracted. In the spirit of a Legacy survey, we will make the raw X-ray data immediately public. Multi-band catalogues and images will also be made available on short time scales.

  9. Estimation of large-scale dimension densities

    NASA Astrophysics Data System (ADS)

    Raab, Corinna; Kurths, Jürgen

    2001-07-01

    We propose a technique to calculate large-scale dimension densities in both higher-dimensional spatio-temporal systems and low-dimensional systems from only a few data points, where known methods usually have an unsatisfactory scaling behavior. This is mainly due to boundary and finite-size effects. With our rather simple method, we normalize boundary effects and get a significant correction of the dimension estimate. This straightforward approach is based on rather general assumptions. So even weak coherent structures obtained from small spatial couplings can be detected with this method, which is impossible by using the Lyapunov-dimension density. We demonstrate the efficiency of our technique for coupled logistic maps, coupled tent maps, the Lorenz attractor, and the Roessler attractor.

  10. Large-Scale Organization of Glycosylation Networks

    NASA Astrophysics Data System (ADS)

    Kim, Pan-Jun; Lee, Dong-Yup; Jeong, Hawoong

    2009-03-01

    Glycosylation is a highly complex process to produce a diverse repertoire of cellular glycans that are frequently attached to proteins and lipids. Glycans participate in fundamental biological processes including molecular trafficking and clearance, cell proliferation and apoptosis, developmental biology, immune response, and pathogenesis. N-linked glycans found on proteins are formed by sequential attachments of monosaccharides with the help of a relatively small number of enzymes. Many of these enzymes can accept multiple N-linked glycans as substrates, thus generating a large number of glycan intermediates and their intermingled pathways. Motivated by the quantitative methods developed in complex network research, we investigate the large-scale organization of such N-glycosylation pathways in a mammalian cell. The uncovered results give the experimentally-testable predictions for glycosylation process, and can be applied to the engineering of therapeutic glycoproteins.

  11. Large-scale sequential quadratic programming algorithms

    SciTech Connect

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
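    The core of each SQP iteration is the KKT system of the local quadratic subproblem. The following is a minimal sketch on a two-variable toy problem, using the exact Hessian of the Lagrangian rather than the quasi-Newton reduced-Hessian approximation the abstract describes; the problem, starting point, and iteration count are illustrative choices.

```python
# One SQP loop on a toy problem:  minimize x^2 + y^2  subject to  x*y = 1.
# Each iteration solves the KKT system of the local quadratic subproblem.
# This sketch uses the exact Hessian of the Lagrangian; the algorithm in the
# abstract instead maintains a quasi-Newton approximation to the *reduced*
# Hessian, which is what makes it viable at large scale.

def solve_linear(M, b):
    """Gaussian elimination with partial pivoting for tiny dense systems."""
    n = len(b)
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def sqp(x, y, lam=0.0, iters=20):
    """SQP (equivalently, Newton on the KKT conditions) for the toy problem."""
    for _ in range(iters):
        g = [2.0 * x, 2.0 * y]              # gradient of the objective
        c = x * y - 1.0                     # constraint residual
        J = [y, x]                          # constraint Jacobian
        H = [[2.0, lam], [lam, 2.0]]        # Hessian of L = f + lam * c
        # KKT system:  [H J^T; J 0] [p; lam_new] = [-g; -c]
        K = [[H[0][0], H[0][1], J[0]],
             [H[1][0], H[1][1], J[1]],
             [J[0],    J[1],    0.0]]
        p0, p1, lam = solve_linear(K, [-g[0], -g[1], -c])
        x, y = x + p0, y + p1
    return x, y

# Converges to the constrained minimizer x = y = 1.
x_opt, y_opt = sqp(2.0, 0.5)
```

    At large scale the 3x3 dense solve above is replaced by the sparse, reduced-gradient machinery the abstract enumerates, but the structure of the iteration is the same.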

  12. Supporting large-scale computational science

    SciTech Connect

    Musick, R

    1998-10-01

    A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: (1) several commercial DBMS systems have demonstrated storage and ad-hoc query access to Terabyte data sets; (2) several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme; (3) several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data; and (4) in some cases, performance is a moot issue. This is true in particular if the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.

  13. Supporting large-scale computational science

    SciTech Connect

    Musick, R., LLNL

    1998-02-19

    Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad-hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of the paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.

  14. Introducing Large-Scale Innovation in Schools

    NASA Astrophysics Data System (ADS)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  15. Introducing Large-Scale Innovation in Schools

    NASA Astrophysics Data System (ADS)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-02-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  16. Emerging technology: A key enabler for modernizing pharmaceutical manufacturing and advancing product quality.

    PubMed

    O'Connor, Thomas F; Yu, Lawrence X; Lee, Sau L

    2016-07-25

    Issues in product quality have produced recalls and caused drug shortages in the United States (U.S.) in the past few years. These quality issues were often due to outdated manufacturing technologies and equipment as well as the lack of an effective quality management system. To ensure a consistent supply of safe, effective and high-quality drug products to patients, the U.S. Food and Drug Administration (FDA) supports modernizing pharmaceutical manufacturing for improvements in product quality. Specifically, five new initiatives are proposed here to achieve this goal. They include: (i) advancing regulatory science for pharmaceutical manufacturing; (ii) establishing a public-private institute for pharmaceutical manufacturing innovation; (iii) creating incentives for investment in the technological upgrade of manufacturing processes and facilities; (iv) leveraging external expertise for regulatory quality assessment of emerging technologies; and (v) promoting the international harmonization of approaches for expediting the global adoption of emerging technologies. PMID:27260134

  17. Gravity and large-scale nonlocal bias

    NASA Astrophysics Data System (ADS)

    Chan, Kwan Chuen; Scoccimarro, Román; Sheth, Ravi K.

    2012-04-01

    For Gaussian primordial fluctuations the relationship between galaxy and matter overdensities, bias, is most often assumed to be local at the time of observation in the large-scale limit. This hypothesis is, however, unstable under time evolution; we provide proofs under several (increasingly realistic) sets of assumptions. In the simplest toy model galaxies are created locally and linearly biased at a single formation time, and subsequently move with the dark matter (no velocity bias) conserving their comoving number density (no merging). We show that, after this formation time, the bias becomes unavoidably nonlocal and nonlinear at large scales. We identify the nonlocal gravitationally induced fields in which the galaxy overdensity can be expanded, showing that they can be constructed out of the invariants of the deformation tensor (Galileons), the main signature of which is a quadrupole field in second-order perturbation theory. In addition, we show that this result persists if we include an arbitrary evolution of the comoving number density of tracers. We then include velocity bias, and show that new contributions appear; these are related to the breaking of Galilean invariance of the bias relation, a dipole field being the signature at second order. We test these predictions by studying the dependence of halo overdensities in cells of fixed dark matter density: measurements in simulations show that departures from the mean bias relation are strongly correlated with the nonlocal gravitationally induced fields identified by our formalism, suggesting that the halo distribution at the present time is indeed more closely related to the mass distribution at an earlier rather than present time. However, the nonlocality seen in the simulations is not fully captured by assuming local bias in Lagrangian space. The effects on nonlocal bias seen in the simulations are most important for the most biased halos, as expected from our predictions. Accounting for these

  18. Curvature constraints from large scale structure

    NASA Astrophysics Data System (ADS)

    Di Dio, Enea; Montanari, Francesco; Raccanelli, Alvise; Durrer, Ruth; Kamionkowski, Marc; Lesgourgues, Julien

    2016-06-01

    We modified the CLASS code in order to include relativistic galaxy number counts in spatially curved geometries; we present the formalism and study the effect of relativistic corrections on spatial curvature. The new version of the code is now publicly available. Using a Fisher matrix analysis, we investigate how measurements of the spatial curvature parameter ΩK with future galaxy surveys are affected by relativistic effects, which influence observations of the large scale galaxy distribution. These effects include contributions from cosmic magnification, Doppler terms and terms involving the gravitational potential. As an application, we consider angle and redshift dependent power spectra, which are especially well suited for model independent cosmological constraints. We compute our results for a representative deep, wide and spectroscopic survey, and our results show the impact of relativistic corrections on spatial curvature parameter estimation. We show that constraints on the curvature parameter may be strongly biased if, in particular, cosmic magnification is not included in the analysis. Other relativistic effects turn out to be subdominant in the studied configuration. We analyze how the shift in the estimated best-fit value for the curvature and other cosmological parameters depends on the magnification bias parameter, and find that significant biases are to be expected if this term is not properly considered in the analysis.

  19. Large scale digital atlases in neuroscience

    NASA Astrophysics Data System (ADS)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining this data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and, in addition to atlases of the human brain, includes high quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project, a genome-wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  20. Large-scale carbon fiber tests

    NASA Technical Reports Server (NTRS)

    Pride, R. A.

    1980-01-01

    A realistic release of carbon fibers was established by burning a minimum of 45 kg of carbon fiber composite aircraft structural components in each of five large scale, outdoor aviation jet fuel fire tests. This release was quantified by several independent assessments with various instruments developed specifically for these tests. The most likely values for the mass of single carbon fibers released ranged from 0.2 percent of the initial mass of carbon fiber for the source tests (zero wind velocity) to a maximum of 0.6 percent of the initial carbon fiber mass for dissemination tests (5 to 6 m/s wind velocity). Mean fiber lengths for fibers greater than 1 mm in length ranged from 2.5 to 3.5 mm. Mean diameters ranged from 3.6 to 5.3 micrometers which was indicative of significant oxidation. Footprints of downwind dissemination of the fire released fibers were measured to 19.1 km from the fire.

  1. Food appropriation through large scale land acquisitions

    NASA Astrophysics Data System (ADS)

    Rulli, Maria Cristina; D'Odorico, Paolo

    2014-05-01

    The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show that up to 300-550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security for the local populations.
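
    The people-fed estimate behind these numbers is, at its core, a calorie balance. A back-of-envelope sketch follows; all inputs below are hypothetical round numbers for illustration, not the paper's land-deal dataset or per-crop methodology.

```python
# Back-of-envelope calorie balance for acquired cropland (hypothetical round
# numbers for illustration; not the paper's dataset or per-crop methodology).
area_ha = 40e6                 # assumed acquired cropland, hectares
current_yield = 2.5            # assumed cereal-equivalent yield, t/ha/yr
gap_closed_yield = 4.0         # assumed yield if the yield gap were closed
kcal_per_tonne = 3.3e6         # rough cereal energy content, kcal/t
kcal_per_person_yr = 2500 * 365

def people_fed(yield_t_ha):
    # total calories produced divided by one person's annual requirement
    return area_ha * yield_t_ha * kcal_per_tonne / kcal_per_person_yr

print(f"current yields:   ~{people_fed(current_yield) / 1e6:.0f} million people")
print(f"yield gap closed: ~{people_fed(gap_closed_yield) / 1e6:.0f} million people")
```

    Varying the assumed area and yields over plausible ranges is what produces interval estimates of the kind reported in the abstract.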

  2. Large Scale Computer Simulation of Erythrocyte Membranes

    NASA Astrophysics Data System (ADS)

    Harvey, Cameron; Revalee, Joel; Laradji, Mohamed

    2007-11-01

    The cell membrane is crucial to the life of the cell. Apart from partitioning the inner and outer environment of the cell, it also acts as a support for complex and specialized molecular machinery, important both for the mechanical integrity of the cell and for its multitude of physiological functions. Due to its relative simplicity, the red blood cell has been a favorite experimental prototype for investigations of the structural and functional properties of the cell membrane. The erythrocyte membrane is a composite quasi two-dimensional structure composed essentially of a self-assembled fluid lipid bilayer and a polymerized protein meshwork, referred to as the cytoskeleton or membrane skeleton. In the case of the erythrocyte, the polymer meshwork is mainly composed of spectrin, anchored to the bilayer through specialized proteins. Using a coarse-grained model, recently developed by us, of self-assembled lipid membranes with implicit solvent and soft-core potentials, we simulated large scale red-blood-cell bilayers with dimensions of ~10^-1 μm^2, with explicit cytoskeleton. Our aim is to investigate the renormalization of the elastic properties of the bilayer due to the underlying spectrin meshwork.

  3. Advanced carbon manufacturing for energy and biological applications

    NASA Astrophysics Data System (ADS)

    Turon Teixidor, Genis

    The science of miniaturization has experienced revolutionary advances during the last decades, witnessing the development of the Integrated Circuit and the emergence of MEMS and Nanotechnology. Particularly, MEMS technology has pioneered the use of non-traditional materials in microfabrication by including polymers, ceramics and composites to the well known list of metals and semiconductors. One of the latest additions to this set of materials is carbon, which represents a very important inclusion given its significance in electrochemical energy conversion systems and in applications where it is used as sensor probe material. For these applications, carbon is optimal in several counts: It has a wide electrochemical stability window, good electrical and thermal conductivity, high corrosion resistance and mechanical stability, and is available in high purity at a low cost. Furthermore carbon is biocompatible. This thesis presents several microfabricated devices that take advantage of these properties. The thesis has two clearly differentiated parts. In the first one, applications of micromachined carbon in the field of energy conversion and energy storage are presented. These applications include lithium ion micro batteries and the development of new carbon electrodes with fractal geometries. In the second part, the focus shifts to biological applications. First, the study of the interaction of living cells with micromachined carbon is presented, followed by the description of a sensor based on interdigitated nano-electrode arrays, and finally the development of the new instrumentation needed to address arrays of carbon electrodes, a multiplexed potentiostat. The underlying theme that connects all these seemingly different topics is the use of carbon microfabrication techniques in electrochemical systems.

  4. Evaluating Unmanned Aerial Platforms for Cultural Heritage Large Scale Mapping

    NASA Astrophysics Data System (ADS)

    Georgopoulos, A.; Oikonomou, C.; Adamopoulos, E.; Stathopoulou, E. K.

    2016-06-01

    Large scale mapping of limited areas, especially cultural heritage sites, is a demanding task. Optical and non-optical sensors, such as LiDAR units, are now developed to sizes and weights that can be lifted by unmanned aerial platforms. At the same time there is increasing emphasis on solutions that enable users to get access to 3D information faster and cheaper. Considering the multitude of platforms and cameras, the advancement of algorithms, and the increase of available computing power, this challenge should be, and indeed is, further investigated. In this paper a short review of UAS technologies today is attempted. A discussion follows as to their applicability and advantages, depending on their specifications, which vary immensely. The on-board cameras available are also compared and evaluated for large scale mapping. Furthermore a thorough analysis, review and experimentation with different software implementations of Structure from Motion and Multiple View Stereo algorithms, able to process such dense and mostly unordered sequences of digital images, is also conducted and presented. As a test data set, we use a rich optical and thermal data set from both fixed wing and multi-rotor platforms over an archaeological excavation with adverse height variations, using different cameras. Dense 3D point clouds, digital terrain models and orthophotos have been produced and evaluated for their radiometric as well as metric qualities.

  5. An informal paper on large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Ho, Y. C.

    1975-01-01

    Large scale systems are defined as systems requiring more than one decision maker to control the system. Decentralized control and decomposition are discussed for large scale dynamic systems. Information and many-person decision problems are analyzed.

  6. Isotope separation and advanced manufacturing technology. Volume 2, No. 2, Semiannual report, April--September 1993

    SciTech Connect

    Kan, Tehmanu; Carpenter, J.

    1993-12-31

    This is the second issue of a semiannual report for the Isotope Separation and Advanced Manufacturing (ISAM) Technology Program at Lawrence Livermore National Laboratory. Primary objectives of the ISAM Program include: the Uranium Atomic Vapor Laser Isotope Separation (U-AVLIS) process, and advanced manufacturing technologies which include industrial laser materials processing and new manufacturing technologies for uranium, plutonium, and other strategically important materials in support of DOE and other national applications. Topics included in this issue are: production plant product system conceptual design, development and operation of a solid-state switch for thyratron replacement, high-performance optical components for high average power laser systems, use of diode laser absorption spectroscopy for control of uranium vaporization rates, a two-dimensional time dependent hydrodynamical ion extraction model, and design of a formaldehyde photodissociation process for carbon and oxygen isotope separation.

  7. Advanced Manufacturing as an Online Case Study for Global Geography Education

    ERIC Educational Resources Information Center

    Glass, Michael R.; Kalafsky, Ronald V.; Drake, Dawn M.

    2013-01-01

    Advanced manufacturing continues to be an important sector for emerging and industrialized economies and therefore remains an important topic for economic geography education. This article describes a case study created for the Association of American Geographers' Center for Global Geography Education and its implementation. The international…

  8. Integrated computer aided planning and manufacture of advanced technology jet engines

    NASA Astrophysics Data System (ADS)

    Subhas, B. K.; George, Chacko; Arul Raj, A.

    1987-10-01

    This paper highlights an attempt at evolving a computer aided manufacturing system on a personal computer. A case study of an advanced technology jet engine component is included to illustrate various outputs from the system. The proposed system could be an alternate solution to sophisticated and expensive CAD/CAM workstations.

  9. National Skill Standards for Advanced High Performance Manufacturing. Version 2.1.

    ERIC Educational Resources Information Center

    National Coalition for Advanced Manufacturing, Washington, DC.

    This document presents and discusses the national skill standards for advanced high-performance manufacturing that were developed during a project that was commissioned by the U.S. Department of Education. The introduction explains the need for national skill standards. Discussed in the next three sections are the following: benefits of national…

  10. Overview of the manufacturing sequence of the Advanced Solid Rocket Motor

    NASA Technical Reports Server (NTRS)

    Chapman, John S.; Nix, Michael B.

    1992-01-01

    The manufacturing sequence of NASA's new Advanced Solid Rocket Motor, developed as a replacement for the Space Shuttle's existing Redesigned Solid Rocket Motor, is overviewed. Special attention is given to the case preparation, the propellant mix/cast, the nondestructive evaluation, the motor finishing, and the refurbishment. The fabrication sequences of the case, the nozzle, and the igniter are described.

  11. Sensitivity technologies for large scale simulation.

    SciTech Connect

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity type analysis into existing codes and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations.
The hybrid automatic differentiation method was applied to a first
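
    The direct and adjoint approaches this record contrasts can be illustrated on a toy steady-state problem. A minimal sketch, assuming a linear system A(p)u = b and a scalar objective J = g^T u; this is not the report's solvers or applications.

```python
import numpy as np

# Minimal direct vs adjoint sensitivity for a steady-state system A(p) u = b
# with objective J(p) = g^T u(p).  Toy problem, not the report's solvers.
n = 5
rng = np.random.default_rng(0)
A0 = np.eye(n) * 4 + rng.normal(size=(n, n)) * 0.1
dA = rng.normal(size=(n, n)) * 0.05          # dA/dp for a single parameter p
b = rng.normal(size=n)
g = rng.normal(size=n)

def J(p):
    return g @ np.linalg.solve(A0 + p * dA, b)

p = 0.3
u = np.linalg.solve(A0 + p * dA, b)

# Direct method: differentiate A u = b to get A du/dp = -(dA/dp) u,
# solve for du/dp, then contract with g.
du_dp = np.linalg.solve(A0 + p * dA, -dA @ u)
dJ_direct = g @ du_dp

# Adjoint method: one solve with A^T per objective, independent of the
# number of parameters.  dJ/dp = -lambda^T (dA/dp) u with A^T lambda = g.
lam = np.linalg.solve((A0 + p * dA).T, g)
dJ_adjoint = -lam @ (dA @ u)

# Finite-difference check of both analytic sensitivities.
dJ_fd = (J(p + 1e-6) - J(p - 1e-6)) / 2e-6
print(dJ_direct, dJ_adjoint, dJ_fd)
```

    The direct method scales with the number of parameters, the adjoint method with the number of objectives, which is why adjoints suit inversion problems with few outputs and many inputs.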

  12. International space station. Large scale integration approach

    NASA Astrophysics Data System (ADS)

    Cohen, Brad

    The International Space Station is the most complex large scale integration program in development today. The approach developed for specification, subsystem development, and verification lays a firm basis on which future programs of this nature can be built. The International Space Station is composed of many critical items, hardware and software, built by numerous International Partners, NASA Institutions, and U.S. Contractors, and is launched over a period of five years. Each launch creates a unique configuration that must be safe, survivable, operable, and support ongoing assembly (assemblable) to arrive at the assembly complete configuration in 2003. The approach to integrating each of the modules into a viable spacecraft and continuing the assembly is a challenge in itself. Added to this challenge are the severe schedule constraints and the lack of an "Iron Bird", which prevents assembly and checkout of each on-orbit configuration prior to launch. This paper will focus on the following areas: 1) Specification development process, explaining how the requirements and specifications were derived using a modular concept driven by launch vehicle capability. Each module is composed of components of subsystems versus completed subsystems. 2) Approach to stage (each stage consists of the launched module added to the current on-orbit spacecraft) specifications. Specifically, how each launched module and stage ensures support of the current and future elements of the assembly. 3) Verification approach, which, due to the schedule constraints, is primarily analysis supported by testing. Specifically, how are the interfaces ensured to mate and function on-orbit when they cannot be mated before launch. 4) Lessons learned: where can we improve this complex system design and integration task?

  13. Synchronization of coupled large-scale Boolean networks

    SciTech Connect

    Li, Fangfei

    2014-03-15

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm for large-scale Boolean networks is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.
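
    A minimal drive-response illustration of complete synchronization follows. The update rules and coupling below are hypothetical examples; the paper's aggregation algorithm is not reproduced here.

```python
from itertools import product

# Two small Boolean networks with unidirectional (drive-response) coupling.
# Complete synchronization: y(t) = x(t) for all t after a transient,
# for every pair of initial states.  Rules here are hypothetical examples.
def f(x):                        # drive network: x(t+1) = f(x(t))
    x1, x2 = x
    return (x1 ^ x2, x1 & x2)

def g(y, x):                     # response network, driven by the drive state
    return f(x)

def synchronizes(x0, y0, steps=16):
    x, y = x0, y0
    for _ in range(steps):
        x, y = f(x), g(y, x)     # simultaneous update: RHS uses old x and y
    return x == y

states = list(product((0, 1), repeat=2))
print(all(synchronizes(x0, y0) for x0 in states for y0 in states))  # True
```

    Here the response copies the drive's logic, so synchronization holds from every initial condition; with mismatched rules one would instead check a subset of the state variables for partial synchronization.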

  14. Large-Scale Sequencing: The Future of Genomic Sciences Colloquium

    SciTech Connect

    Margaret Riley; Merry Buckley

    2009-01-01

    Genetic sequencing and the various molecular techniques it has enabled have revolutionized the field of microbiology. Examining and comparing the genetic sequences borne by microbes - including bacteria, archaea, viruses, and microbial eukaryotes - provides researchers insights into the processes microbes carry out, their pathogenic traits, and new ways to use microorganisms in medicine and manufacturing. Until recently, sequencing entire microbial genomes has been laborious and expensive, and the decision to sequence the genome of an organism was made on a case-by-case basis by individual researchers and funding agencies. Now, thanks to new technologies, the cost and effort of sequencing is within reach for even the smallest facilities, and the ability to sequence the genomes of a significant fraction of microbial life may be possible. The availability of numerous microbial genomes will enable unprecedented insights into microbial evolution, function, and physiology. However, the current ad hoc approach to gathering sequence data has resulted in an unbalanced and highly biased sampling of microbial diversity. A well-coordinated, large-scale effort to target the breadth and depth of microbial diversity would result in the greatest impact. The American Academy of Microbiology convened a colloquium to discuss the scientific benefits of engaging in a large-scale, taxonomically-based sequencing project. A group of individuals with expertise in microbiology, genomics, informatics, ecology, and evolution deliberated on the issues inherent in such an effort and generated a set of specific recommendations for how best to proceed. The vast majority of microbes are presently uncultured and, thus, pose significant challenges to such a taxonomically-based approach to sampling genome diversity. However, we have yet to even scratch the surface of the genomic diversity among cultured microbes. 
A coordinated sequencing effort of cultured organisms is an appropriate place to begin

  15. Large scale rigidity-based flexibility analysis of biomolecules

    PubMed Central

    Streinu, Ileana

    2016-01-01

    KINematics And RIgidity (KINARI) is an on-going project for in silico flexibility analysis of proteins. The new version of the software, Kinari-2, extends the functionality of our free web server KinariWeb, incorporates advanced web technologies, emphasizes the reproducibility of its experiments, and makes substantially improved tools available to the user. It is designed specifically for large scale experiments, in particular, for (a) very large molecules, including bioassemblies with a high degree of symmetry such as viruses and crystals, (b) large collections of related biomolecules, such as those obtained through simulated dilutions, mutations, or conformational changes from various types of dynamics simulations, and (c) the large, idiosyncratic, publicly available repository of biomolecules, the Protein Data Bank, with which it is intended to work as seamlessly as possible. We describe the system design, along with the main data processing, computational, mathematical, and validation challenges underlying this phase of the KINARI project. PMID:26958583

  16. Computational solutions to large-scale data management and analysis

    PubMed Central

    Schadt, Eric E.; Linderman, Michael D.; Sorenson, Jon; Lee, Lawrence; Nolan, Garry P.

    2011-01-01

    Today we can generate hundreds of gigabases of DNA and RNA sequencing data in a week for less than US$5,000. The astonishing rate of data generation by these low-cost, high-throughput technologies in genomics is being matched by that of other technologies, such as real-time imaging and mass spectrometry-based flow cytometry. Success in the life sciences will depend on our ability to properly interpret the large-scale, high-dimensional data sets that are generated by these technologies, which in turn requires us to adopt advances in informatics. Here we discuss how we can master the different types of computational environments that exist — such as cloud and heterogeneous computing — to successfully tackle our big data problems. PMID:20717155

  17. Applications of large-scale density functional theory in biology

    NASA Astrophysics Data System (ADS)

    Cole, Daniel J.; Hine, Nicholas D. M.

    2016-10-01

    Density functional theory (DFT) has become a routine tool for the computation of electronic structure in the physics, materials and chemistry fields. Yet the application of traditional DFT to problems in the biological sciences is hindered, to a large extent, by the unfavourable scaling of the computational effort with system size. Here, we review some of the major software and functionality advances that enable insightful electronic structure calculations to be performed on systems comprising many thousands of atoms. We describe some of the early applications of large-scale DFT to the computation of the electronic properties and structure of biomolecules, as well as to paradigmatic problems in enzymology, metalloproteins, photosynthesis and computer-aided drug design. With this review, we hope to demonstrate that first principles modelling of biological structure-function relationships is approaching a reality.

  18. Applications of large-scale density functional theory in biology.

    PubMed

    Cole, Daniel J; Hine, Nicholas D M

    2016-10-01

    Density functional theory (DFT) has become a routine tool for the computation of electronic structure in the physics, materials and chemistry fields. Yet the application of traditional DFT to problems in the biological sciences is hindered, to a large extent, by the unfavourable scaling of the computational effort with system size. Here, we review some of the major software and functionality advances that enable insightful electronic structure calculations to be performed on systems comprising many thousands of atoms. We describe some of the early applications of large-scale DFT to the computation of the electronic properties and structure of biomolecules, as well as to paradigmatic problems in enzymology, metalloproteins, photosynthesis and computer-aided drug design. With this review, we hope to demonstrate that first principles modelling of biological structure-function relationships is approaching a reality. PMID:27494095

  19. Large-Scale Quantitative Analysis of Painting Arts

    PubMed Central

    Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong

    2014-01-01

    Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paintings to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images – the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety in the medieval period. Interestingly, moreover, the increment of the roughness exponent as painting techniques such as chiaroscuro and sfumato have advanced is consistent with historical circumstances. PMID:25501877
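
    The three measures can be prototyped on any RGB image. A sketch with assumed definitions follows; the paper's exact estimators, for instance for the roughness exponent, may differ.

```python
import numpy as np

# Prototype of the three measures on a synthetic stand-in "painting".
# Definitions here are assumptions for illustration, not the paper's exact ones.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64, 3)).astype(float)

# 1) usage of individual colors: histogram over coarsely quantized RGB bins
bins = (img // 32).astype(int).reshape(-1, 3)
colors, counts = np.unique(bins, axis=0, return_counts=True)

# 2) variety of colors: Shannon entropy of that histogram, in bits
p = counts / counts.sum()
variety = -(p * np.log2(p)).sum()

# 3) roughness of the brightness: fluctuation of brightness differences at
#    increasing lags; the slope of log W(lag) vs log lag estimates an exponent
brightness = img.mean(axis=2)
def fluctuation(lag):
    return np.std(brightness[:, lag:] - brightness[:, :-lag])

print(f"{len(colors)} color bins used, variety = {variety:.2f} bits")
print("W(1), W(8):", fluctuation(1), fluctuation(8))
```

    Replacing the synthetic array with a digitized painting and fitting the fluctuation-vs-lag slope gives a roughness exponent of the kind the abstract tracks across periods.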

  20. Planning under uncertainty solving large-scale stochastic linear programs

    SciTech Connect

    Infanger, G. (Dept. of Operations Research; Technische Univ. Vienna, Inst. fuer Energiewirtschaft)

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer, and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
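
    The Monte Carlo sampling idea can be illustrated with a sample-average approximation of a tiny two-stage problem. A newsvendor toy stands in here, with hypothetical numbers; the paper combines Benders-style decomposition with importance sampling, which this sketch omits.

```python
import numpy as np

# Sample-average approximation (SAA) of a two-stage decision under
# uncertainty: choose an order quantity before demand is revealed.
# Hypothetical numbers; a stand-in for the large stochastic LPs above.
rng = np.random.default_rng(42)
demand = rng.poisson(100, size=5000)        # sampled demand scenarios
cost, price = 1.0, 3.0                      # unit order cost, selling price

def expected_profit(q):
    # second-stage recourse: sell min(q, demand) in each scenario
    sales = np.minimum(q, demand)
    return (price * sales - cost * q).mean()

qs = np.arange(50, 201)
q_best = qs[np.argmax([expected_profit(q) for q in qs])]
print("SAA order quantity:", q_best)
# Closed-form check: the optimum solves P(D <= q) = (price - cost) / price.
```

    As the sample grows, the SAA optimum converges to the true stochastic optimum; the paper's decomposition and importance sampling make the same idea tractable when each scenario is itself a large linear program.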

  1. Large-Scale Quantitative Analysis of Painting Arts

    NASA Astrophysics Data System (ADS)

    Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong

    2014-12-01

    Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paintings to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images - the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety in the medieval period. Interestingly, moreover, the increment of the roughness exponent as painting techniques such as chiaroscuro and sfumato have advanced is consistent with historical circumstances.

  2. Large-Scale Graphene Film Deposition for Monolithic Device Fabrication

    NASA Astrophysics Data System (ADS)

    Al-shurman, Khaled

    Since 1958, the concept of the integrated circuit (IC) has driven great technological developments and helped in shrinking electronic devices. Nowadays, an IC consists of more than a million compacted transistors. The majority of current ICs use silicon as the semiconductor material. According to Moore's law, the number of transistors built into a microchip can double every two years. However, silicon device manufacturing is reaching its physical limits: as circuitry shrinks toward seven nanometers, quantum effects such as tunneling can no longer be controlled. Hence, there is an urgent need for a new platform material to replace Si. Graphene is considered a promising material with enormous potential applications in many electronic and optoelectronic devices due to its superior properties. There are several techniques to produce graphene films. Among these techniques, chemical vapor deposition (CVD) offers a very convenient method to fabricate large-scale graphene films. Though the CVD method is suitable for large area growth of graphene, transferring the graphene film to silicon-based substrates is required. Furthermore, the graphene films thus achieved are, in fact, not single crystalline. Also, graphene fabrication utilizing Cu and Ni at high growth temperatures contaminates the substrate that holds the Si CMOS circuitry, and the CVD chamber as well. So, lowering the deposition temperature is another technological milestone for the successful adoption of graphene in integrated circuit fabrication. In this research, direct large-scale graphene film fabrication on silicon-based platforms (i.e. SiO2 and Si3N4) at low temperature was achieved. With a focus on low-temperature graphene growth, hot-filament chemical vapor deposition (HF-CVD) was utilized to synthesize graphene film using a 200 nm thick nickel film.
Raman spectroscopy was utilized to examine graphene formation on the bottom side of the Ni film

  3. Advanced I/O for large-scale scientific applications.

    SciTech Connect

    Klasky, Scott; Schwan, Karsten; Oldfield, Ron A.; Lofstead, Gerald F., II

    2010-01-01

    As scientific simulations scale to use petascale machines and beyond, the data volumes generated pose a dual problem. First, with increasing machine sizes, the careful tuning of IO routines becomes more and more important to keep the time spent in IO acceptable. It is not uncommon, for instance, to have 20% of an application's runtime spent performing IO in a 'tuned' system. Careful management of the IO routines can move that to 5% or even less in some cases. Second, the data volumes are so large, on the order of 10s to 100s of TB, that trying to discover the scientifically valid contributions requires assistance at runtime to both organize and annotate the data. Waiting for offline processing is not feasible due both to the impact on the IO system and the time required. To reduce this load and improve the ability of scientists to use the large amounts of data being produced, new techniques for data management are required. First, there is a need for techniques for efficient movement of data from the compute space to storage. These techniques should understand the underlying system infrastructure and adapt to changing system conditions. Technologies include aggregation networks, data staging nodes for a closer parity to the IO subsystem, and autonomic IO routines that can detect system bottlenecks and choose different approaches, such as splitting the output into multiple targets, staggering output processes. Such methods must be end-to-end, meaning that even with properly managed asynchronous techniques, it is still essential to properly manage the later synchronous interaction with the storage system to maintain acceptable performance. Second, for the data being generated, annotations and other metadata must be incorporated to help the scientist understand output data for the simulation run as a whole, to select data and data features without concern for what files or other storage technologies were employed. 
All of these features should be attained while maintaining a simple deployment for the science code and eliminating the need for allocation of additional computational resources.

  4. Recent advances in large-scale protein interactome mapping.

    PubMed

    Mehta, Virja; Trinkle-Mulcahy, Laura

    2016-01-01

    Protein-protein interactions (PPIs) underlie most, if not all, cellular functions. The comprehensive mapping of these complex networks of stable and transient associations thus remains a key goal, both for systems biology-based initiatives (where it can be combined with other 'omics' data to gain a better understanding of functional pathways and networks) and for focused biological studies. Despite the significant challenges of such an undertaking, major strides have been made over the past few years. They include improvements in the computational prediction of PPIs and the literature curation of low-throughput studies of specific protein complexes, but also an increase in the deposition of high-quality data from non-biased high-throughput experimental PPI mapping strategies into publicly available databases. PMID:27158474

  5. Recent advances in large-scale protein interactome mapping

    PubMed Central

    Mehta, Virja; Trinkle-Mulcahy, Laura

    2016-01-01

    Protein-protein interactions (PPIs) underlie most, if not all, cellular functions. The comprehensive mapping of these complex networks of stable and transient associations thus remains a key goal, both for systems biology-based initiatives (where it can be combined with other ‘omics’ data to gain a better understanding of functional pathways and networks) and for focused biological studies. Despite the significant challenges of such an undertaking, major strides have been made over the past few years. They include improvements in the computational prediction of PPIs and the literature curation of low-throughput studies of specific protein complexes, but also an increase in the deposition of high-quality data from non-biased high-throughput experimental PPI mapping strategies into publicly available databases. PMID:27158474

  6. Design advanced for large-scale, economic, floating LNG plant

    SciTech Connect

    Naklie, M.M.

    1997-06-30

A floating LNG plant design has been developed which is technically feasible, economical, safe, and reliable. This technology will allow monetization of small marginal fields and improve the economics of large fields. Mobil's world-scale plant design has a capacity of 6 million tons/year of LNG and up to 55,000 b/d condensate produced from 1 bcfd of feed gas. The plant would be located on a large, secure, concrete barge with a central moonpool. LNG storage is provided for 250,000 cu m and condensate storage for 650,000 bbl. Both products are off-loaded from the barge. Model tests have verified the stability of the barge structure: barge motions are low enough to permit the plant to continue operation in a 100-year storm in the Pacific Rim. Moreover, the barge is spread-moored, eliminating the need for a turret and swivel. Because the design is generic, the plant can process a wide variety of feed gases and operate in different environments, should the plant be relocated. This capability potentially gives the plant investment a much longer project life because its use is not limited to the life of only one producing area.

  7. Large-Scale Spacecraft Fire Safety Tests

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; Toth, Balazs; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Jomaas, Grunde

    2014-01-01

An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical-scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests

  8. Noninvasive sensors for in-situ process monitoring and control in advanced microelectronics manufacturing

    NASA Astrophysics Data System (ADS)

    Moslehi, Mehrdad M.

    1991-04-01

The combination of noninvasive in-situ monitoring sensors, single-wafer processing modules, vacuum-integrated cluster tools, and computer-integrated manufacturing (CIM) can provide a suitable fabrication environment for flexible and high-yield advanced semiconductor device manufacturing. The use of in-situ sensors for monitoring of equipment, process, and wafer parameters results in increased equipment/process up-time, reduced process and device parameter spread, improved cluster tool reliability and functionality, and reduced overall device manufacturing cycle time. This paper will present an overview of the main features and impact of noninvasive in-situ monitoring sensors for semiconductor device manufacturing applications. Specific examples will be presented for the use of critical sensors in conjunction with cluster tools for advanced CMOS device processing. A noninvasive temperature sensor will be presented which can monitor true wafer temperature via infrared (5.35 μm) pyrometry and laser-assisted real-time spectral wafer emissivity measurements. This sensor design eliminates any temperature measurement errors caused by the heating lamp radiation and wafer emissivity variations. 1. SENSORS: MOTIVATIONS AND IMPACT Semiconductor chip manufacturing factories usually employ well-established statistical process control (SPC) techniques to minimize process parameter deviations and to increase device fabrication yield. Conventional fabrication environments rely on controlling a limited set of critical equipment and process parameters (e.g., process pressure, gas flow rates, substrate temperature, RF power, etc.); however, most of the significant wafer, process, and equipment parameters of interest are not monitored in real
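The emissivity-corrected pyrometry idea in this abstract can be sketched with the standard Wien-approximation correction: a single-wavelength brightness temperature, combined with a separately measured spectral emissivity, yields the true wafer temperature. The numbers below are illustrative assumptions; the sensor's actual algorithm is not given in the abstract.

```python
import math

# Wien-approximation emissivity correction for single-wavelength pyrometry:
#   1/T_true = 1/T_brightness + (lambda * k_B / (h * c)) * ln(emissivity)
H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light in vacuum, m/s
KB = 1.380649e-23     # Boltzmann constant, J/K

def true_temperature(t_brightness, emissivity, wavelength):
    """Correct a blackbody-calibrated pyrometer reading for emissivity < 1."""
    inv_t = 1.0 / t_brightness + (wavelength * KB / (H * C)) * math.log(emissivity)
    return 1.0 / inv_t

# A wafer reading 900 K apparent at 5.35 um with measured emissivity 0.7
# is actually hotter, since a gray body radiates less than a blackbody:
print(round(true_temperature(900.0, 0.7, 5.35e-6), 1))  # ~1022 K
```

Because ln(emissivity) is negative for emissivity below one, the correction always raises the apparent temperature, which is why lamp radiation and emissivity drift must be separated from the thermal signal.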

  9. How Large Scale Flows in the Solar Convection Zone may Influence Solar Activity

    NASA Technical Reports Server (NTRS)

    Hathaway, D. H.

    2004-01-01

Large scale flows within the solar convection zone are the primary drivers of the Sun's magnetic activity cycle. Differential rotation can amplify the magnetic field and convert poloidal fields into toroidal fields. Poleward meridional flow near the surface can carry magnetic flux that reverses the magnetic poles and can convert toroidal fields into poloidal fields. The deeper, equatorward meridional flow can carry magnetic flux toward the equator where it can reconnect with oppositely directed fields in the other hemisphere. These axisymmetric flows are themselves driven by large scale convective motions. The effects of the Sun's rotation on convection produce velocity correlations that can maintain the differential rotation and meridional circulation. These convective motions can influence solar activity themselves by shaping the large-scale magnetic field pattern. While considerable theoretical advances have been made toward understanding these large scale flows, outstanding problems in matching theory to observations still remain.

  10. Seminar for High School Students “Practice on Manufacturing Technology by Advanced Machine Tools”

    NASA Astrophysics Data System (ADS)

    Marui, Etsuo; Yamawaki, Masao; Taga, Yuken; Omoto, Ken'ichi; Miyaji, Reiji; Ogura, Takahiro; Tsubata, Yoko; Sakai, Toshimasa

The seminar ‘Practice on Manufacturing Technology by Advanced Machine Tools’ for high school students was held at the supporting center for technology education of Gifu University, under the sponsorship of the Japan Society of Mechanical Engineers. The seminar was held in the hope that many students would become interested in manufacturing through the experience. Operating a CNC milling machine and a CNC wire-cut electric discharge machine, the participants made original nameplates, writing the programs to control the CNC machine tools themselves. In this report, some valuable results obtained through this experience are explained.

  11. Integrated Design for Manufacturing of Braided Preforms for Advanced Composites Part I: 2D Braiding

    NASA Astrophysics Data System (ADS)

    Gao, Yan Tao; Ko, Frank K.; Hu, Hong

    2013-12-01

This paper presents a 2D braiding design system for advanced textile structural composites based on dynamic models. A software package to assist in the design of braided preform manufacturing has been developed. The package allows design parameters (machine speeds, fiber volume fraction, tightness factor, etc.) to be easily obtained and the relationships between these parameters to be demonstrated graphically. The fabric geometry model (FGM) method was adopted to evaluate the mechanical properties of the composites. Experimental evidence demonstrates the success of the use of dynamic models in the design software for the manufacture of braided fabric preforms.

  12. Large-scale assembly of colloidal particles

    NASA Astrophysics Data System (ADS)

    Yang, Hongta

This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear-align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes an invention in large-area and low-cost color reflective displays. This invention is inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to the brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. 
Capillary condensation of a condensable vapor in the interconnected macropores leads to the
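The Bragg-diffraction color tuning described in this abstract can be sketched with the standard normal-incidence stop-band estimate for an FCC macroporous (inverse opal) film, lambda = 2 * d111 * n_eff. The cavity sizes, polymer refractive index, and air fraction below are illustrative assumptions, not values from the study.

```python
import math

def stopband_nm(cavity_diameter_nm, n_polymer=1.49, n_fill=1.0, fill_fraction=0.74):
    """Normal-incidence Bragg stop-band (nm) of an FCC array of cavities.

    fill_fraction is the volume fraction occupied by the cavities (air or
    an infiltrated solvent); the rest is polymer.
    """
    d111 = math.sqrt(2.0 / 3.0) * cavity_diameter_nm        # FCC (111) spacing
    n_eff = math.sqrt(fill_fraction * n_fill**2 + (1 - fill_fraction) * n_polymer**2)
    return 2.0 * d111 * n_eff

# Tuning cavity size sweeps the reflected color across the visible spectrum:
for d in (220, 260, 300):   # cavity diameters in nm
    print(d, "nm cavity ->", round(stopband_nm(d)), "nm stop band")

# Index matching: filling the cavities with a liquid of n_fill = n_polymer
# removes the refractive-index contrast, so the film turns transparent.
```

The index-matched case still returns a wavelength, but with zero index contrast the diffraction intensity vanishes, which is the transparency effect the abstract describes.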

  13. Population generation for large-scale simulation

    NASA Astrophysics Data System (ADS)

    Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul

    2005-05-01

    Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat.1-3 Multi-agent simulations in particular are now commonplace in many fields.4, 5 By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects where closed form solutions are difficult to solve or impossible to derive.6 To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application.7, 8 Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large scale social simulations.9-11 One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more
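The two modeling layers this abstract describes, per-agent attributes drawn from (possibly correlated) distributions plus a separate relationship graph, can be sketched as follows. This is a minimal illustration: the attribute distributions, role fractions, and edge probability are invented for the example and are not values from the Hats Simulator.

```python
import random

def generate_population(n, p_edge=0.01, known_frac=0.05, covert_frac=0.02):
    """Generate n agents with attributes and an Erdos-Renyi-style tie graph."""
    random.seed(42)
    agents = []
    for i in range(n):
        height = random.gauss(170, 10)                # cm, independent draw
        weight = height * 0.9 + random.gauss(0, 8)    # correlated with height
        role = random.choices(
            ["benign", "known_terrorist", "covert_terrorist"],
            weights=[1 - known_frac - covert_frac, known_frac, covert_frac],
        )[0]
        agents.append({"id": i, "height": height, "weight": weight, "role": role})
    # Relationship layer: each unordered pair is connected with probability p_edge
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if random.random() < p_edge]
    return agents, edges

agents, edges = generate_population(200)
print(len(agents), "agents,", len(edges), "relationships")
```

A covert agent here is indistinguishable from a benign one by its attributes alone, mirroring the identification challenge the abstract poses; only behavior over the graph would separate them.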

  14. Timing signatures of large scale solar eruptions

    NASA Astrophysics Data System (ADS)

    Balasubramaniam, K. S.; Hock-Mysliwiec, Rachel; Henry, Timothy; Kirk, Michael S.

    2016-05-01

We examine the timing signatures of large solar eruptions resulting in flares, CMEs, and Solar Energetic Particle events. We probe solar active regions from the chromosphere through the corona, using data from space- and ground-based observations, including ISOON, SDO, GONG, and GOES. Our studies include a number of flares and CMEs of mostly the M- and X-strengths as categorized by GOES. We find that the chromospheric signatures of these large eruptions occur 5-30 minutes in advance of coronal high-temperature signatures. These timing measurements are then used as inputs to models to reconstruct the eruptive nature of these systems, and to explore their utility in forecasts.

  15. State of the Art in Large-Scale Soil Moisture Monitoring

    NASA Technical Reports Server (NTRS)

    Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.; Zreda, Marek G.

    2013-01-01

    Soil moisture is an essential climate variable influencing land atmosphere interactions, an essential hydrologic variable impacting rainfall runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.

  16. Multitree Algorithms for Large-Scale Astrostatistics

    NASA Astrophysics Data System (ADS)

    March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.

    2012-03-01

this number every week, resulting in billions of objects. At such scales, even linear-time analysis operations present challenges, particularly since statistical analyses are inherently interactive processes, requiring that computations complete within some reasonable human attention span. The quadratic (or worse) runtimes of straightforward implementations become quickly unbearable. Examples of applications. These analysis subroutines occur ubiquitously in astrostatistical work. We list just a few examples. The need to cross-match objects across different catalogs has led to various algorithms, which at some point perform an AllNN computation. 2-point and higher-order spatial correlations form the basis of spatial statistics, and are utilized in astronomy to compare the spatial structures of two datasets, such as an observed sample and a theoretical sample, for example, forming the basis for two-sample hypothesis testing. Friends-of-friends clustering is often used to identify halos in data from astrophysical simulations. Minimum spanning tree properties have also been proposed as statistics of large-scale structure. Comparison of the distributions of different kinds of objects requires accurate density estimation, for which KDE is the overall statistical method of choice. The prediction of redshifts from optical data requires accurate regression, for which kernel regression is a powerful method. The identification of objects of various types in astronomy, such as stars versus galaxies, requires accurate classification, for which KDA is a powerful method. Overview. In this chapter, we will briefly sketch the main ideas behind recent fast algorithms which achieve, for example, linear runtimes for pairwise-distance problems, or similarly dramatic reductions in computational growth. 
In some cases, the runtime orders for these algorithms are mathematically provable statements, while in others we have only conjectures backed by experimental observations for the time being
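As a concrete point of reference for the quadratic runtimes this abstract discusses, a naive kernel density estimate evaluates every query-reference pair. The sketch below shows that O(N_q x N_r) baseline, which the multitree algorithms aim to approximate in roughly linear time; the Gaussian kernel and bandwidth are illustrative choices, not the chapter's specific algorithms.

```python
import numpy as np

def naive_kde(queries, references, h):
    """Naive Gaussian KDE: one kernel evaluation per query-reference pair."""
    # Pairwise squared distances, shape (n_queries, n_references)
    d2 = ((queries[:, None, :] - references[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2.0 * h ** 2))
    # Normalization for a d-dimensional Gaussian kernel
    norm = len(references) * (h * np.sqrt(2.0 * np.pi)) ** queries.shape[1]
    return k.sum(axis=1) / norm

rng = np.random.default_rng(0)
refs = rng.normal(size=(1000, 2))   # reference sample (e.g. catalog objects)
qs = rng.normal(size=(10, 2))       # query points
print(naive_kde(qs, refs, h=0.3))
```

Doubling the number of references doubles the work per query, so for billions of objects this direct sum is exactly the kind of computation that motivates tree-based approximation.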

  17. TransHab: NASA's Large-Scale Inflatable Spacecraft

    NASA Technical Reports Server (NTRS)

    delaFuente, Horacio; Raboin, Jasen L.; Spexarth, Gary R.; Valle, Gerard D.

    2000-01-01

TransHab is a 27-foot diameter by 40-foot, lightweight inflatable habitation module for space applications. TransHab consists of a lightweight graphite-composite core, 11 feet in diameter by 23 feet tall, surrounded by a 27-foot diameter inflatable shell. Originally envisioned as the habitation module of an interplanetary transit vehicle, TransHab is currently being considered as a module for use on the International Space Station (ISS). During the past two years, several tests have been performed at the NASA/Johnson Space Center to demonstrate and prove the technologies required to build a large-scale inflatable habitation module. This paper discusses the results of these tests, which include the following: 1) a structural integrity development test article hydrostatically tested to four times ambient pressure, 2) a full-scale development test article manufactured, assembled, folded, and deployed at vacuum, and 3) extensive hypervelocity impact testing of the micrometeoroid and orbital debris protection system.

  18. Bio-inspired wooden actuators for large scale applications.

    PubMed

    Rüggeberg, Markus; Burgert, Ingo

    2015-01-01

    Implementing programmable actuation into materials and structures is a major topic in the field of smart materials. In particular the bilayer principle has been employed to develop actuators that respond to various kinds of stimuli. A multitude of small scale applications down to micrometer size have been developed, but up-scaling remains challenging due to either limitations in mechanical stiffness of the material or in the manufacturing processes. Here, we demonstrate the actuation of wooden bilayers in response to changes in relative humidity, making use of the high material stiffness and a good machinability to reach large scale actuation and application. Amplitude and response time of the actuation were measured and can be predicted and controlled by adapting the geometry and the constitution of the bilayers. Field tests in full weathering conditions revealed long-term stability of the actuation. The potential of the concept is shown by a first demonstrator. With the sensor and actuator intrinsically incorporated in the wooden bilayers, the daily change in relative humidity is exploited for an autonomous and solar powered movement of a tracker for solar modules. PMID:25835386
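The abstract's claim that actuation amplitude can be predicted from bilayer geometry and constitution can be sketched with the classical Timoshenko bimetal formula, here applied to a humidity-driven bilayer. All material values below (differential strain, thicknesses, moduli) are illustrative assumptions, not measurements from the paper.

```python
def timoshenko_curvature(d_eps, h1, h2, e1, e2):
    """Curvature (1/m) of a bilayer under differential strain d_eps.

    h1, h2: layer thicknesses (m); e1, e2: Young's moduli (Pa).
    Classical Timoshenko (1925) bimetal-strip result.
    """
    m = h1 / h2            # thickness ratio
    n = e1 / e2            # stiffness ratio
    h = h1 + h2            # total thickness
    return (6.0 * d_eps * (1.0 + m) ** 2) / (
        h * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))
    )

# Example: 0.2% hygroexpansion mismatch across a humidity swing,
# two 2 mm wood layers of equal stiffness (~10 GPa):
kappa = timoshenko_curvature(d_eps=0.002, h1=0.002, h2=0.002, e1=10e9, e2=10e9)
print(f"curvature: {kappa:.2f} 1/m")  # radius of curvature ~ 1/kappa
```

For the symmetric case the formula reduces to 3 * d_eps / (2 * h), and curvature scales inversely with total thickness, which is why up-scaling bilayer actuators is hard without the high stiffness the authors exploit.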

  19. Bio-Inspired Wooden Actuators for Large Scale Applications

    PubMed Central

    Rüggeberg, Markus; Burgert, Ingo

    2015-01-01

    Implementing programmable actuation into materials and structures is a major topic in the field of smart materials. In particular the bilayer principle has been employed to develop actuators that respond to various kinds of stimuli. A multitude of small scale applications down to micrometer size have been developed, but up-scaling remains challenging due to either limitations in mechanical stiffness of the material or in the manufacturing processes. Here, we demonstrate the actuation of wooden bilayers in response to changes in relative humidity, making use of the high material stiffness and a good machinability to reach large scale actuation and application. Amplitude and response time of the actuation were measured and can be predicted and controlled by adapting the geometry and the constitution of the bilayers. Field tests in full weathering conditions revealed long-term stability of the actuation. The potential of the concept is shown by a first demonstrator. With the sensor and actuator intrinsically incorporated in the wooden bilayers, the daily change in relative humidity is exploited for an autonomous and solar powered movement of a tracker for solar modules. PMID:25835386

  20. Probes of large-scale structure in the universe

    NASA Technical Reports Server (NTRS)

    Suto, Yasushi; Gorski, Krzysztof; Juszkiewicz, Roman; Silk, Joseph

    1988-01-01

    A general formalism is developed which shows that the gravitational instability theory for the origin of the large-scale structure of the universe is now capable of critically confronting observational results on cosmic background radiation angular anisotropies, large-scale bulk motions, and large-scale clumpiness in the galaxy counts. The results indicate that presently advocated cosmological models will have considerable difficulty in simultaneously explaining the observational results.

  1. Bioinspired large-scale aligned porous materials assembled with dual temperature gradients

    PubMed Central

    Bai, Hao; Chen, Yuan; Delattre, Benjamin; Tomsia, Antoni P.; Ritchie, Robert O.

    2015-01-01

    Natural materials, such as bone, teeth, shells, and wood, exhibit outstanding properties despite being porous and made of weak constituents. Frequently, they represent a source of inspiration to design strong, tough, and lightweight materials. Although many techniques have been introduced to create such structures, a long-range order of the porosity as well as a precise control of the final architecture remain difficult to achieve. These limitations severely hinder the scale-up fabrication of layered structures aimed for larger applications. We report on a bidirectional freezing technique to successfully assemble ceramic particles into scaffolds with large-scale aligned, lamellar, porous, nacre-like structure and long-range order at the centimeter scale. This is achieved by modifying the cold finger with a polydimethylsiloxane (PDMS) wedge to control the nucleation and growth of ice crystals under dual temperature gradients. Our approach could provide an effective way of manufacturing novel bioinspired structural materials, in particular advanced materials such as composites, where a higher level of control over the structure is required. PMID:26824062

  2. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations.

    PubMed

    Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali

    2015-01-01

    Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts. PMID:25993414

  3. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations

    PubMed Central

    Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali

    2015-01-01

    Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts. PMID:25993414

  4. Cryogenic Control Architecture for Large-Scale Quantum Computing

    NASA Astrophysics Data System (ADS)

    Hornibrook, J. M.; Colless, J. I.; Conway Lamb, I. D.; Pauka, S. J.; Lu, H.; Gossard, A. C.; Watson, J. D.; Gardner, G. C.; Fallahi, S.; Manfra, M. J.; Reilly, D. J.

    2015-02-01

    Solid-state qubits have recently advanced to the level that enables them, in principle, to be scaled up into fault-tolerant quantum computers. As these physical qubits continue to advance, meeting the challenge of realizing a quantum machine will also require the development of new supporting devices and control architectures with complexity far beyond the systems used in today's few-qubit experiments. Here, we report a microarchitecture for controlling and reading out qubits during the execution of a quantum algorithm such as an error-correcting code. We demonstrate the basic principles of this architecture using a cryogenic switch matrix implemented via high-electron-mobility transistors and a new kind of semiconductor device based on gate-switchable capacitance. The switch matrix is used to route microwave waveforms to qubits under the control of a field-programmable gate array, also operating at cryogenic temperatures. Taken together, these results suggest a viable approach for controlling large-scale quantum systems using semiconductor technology.

  5. Global Wildfire Forecasts Using Large Scale Climate Indices

    NASA Astrophysics Data System (ADS)

    Shen, Huizhong; Tao, Shu

    2016-04-01

Using weather readings, fire early warning systems can provide forecasts 4-6 hours in advance to minimize fire losses. The benefit would be dramatically enhanced if a relatively accurate long-term projection could also be provided. Here we present a novel method for predicting global fire season severity (FSS) at least three months in advance using multiple large-scale climate indices (CIs). The predictive ability proves effective for various geographic locations and resolutions. Globally, as well as in most continents, the El Niño Southern Oscillation (ENSO) is the dominant driving force controlling interannual FSS variability, whereas other CIs also play indispensable roles. We found that a moderate El Niño event is responsible for 465 (272-658, interquartile range) Tg of carbon release and an annual increase of 29,500 (24,500-34,800) deaths from inhalation exposure to air pollutants. Southeast Asia accounts for half of the deaths. Both the intercorrelation and the interaction of WPs and CIs are revealed, suggesting possible climate-induced modification of fire responses to weather conditions. Our models can benefit fire management in response to climate change.
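The kind of statistical link this abstract describes, fire season severity regressed on lagged climate indices, can be sketched with ordinary least squares on synthetic data. The index names, lead structure, and coefficients below are invented for illustration and do not reflect the paper's actual model specification.

```python
import numpy as np

# Synthetic illustration: regress fire season severity (FSS) on
# large-scale climate indices (CIs) observed months earlier.
rng = np.random.default_rng(1)
n_years = 40
enso = rng.normal(size=n_years)       # e.g. an ENSO index at 3-month lead
other_ci = rng.normal(size=n_years)   # a second, hypothetical index
# Fabricated "true" relationship plus noise, for the demonstration only:
fss = 2.0 * enso + 0.5 * other_ci + rng.normal(scale=0.3, size=n_years)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n_years), enso, other_ci])
coef, *_ = np.linalg.lstsq(X, fss, rcond=None)
print("intercept, ENSO, other CI:", np.round(coef, 2))
```

Because the indices lead the fire season by months, a fit like this yields a seasonal forecast rather than the hours-ahead warning that weather readings support.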

  6. The advanced manufacturing science and technology program. FY 95 Annual Report

    SciTech Connect

    Hill, J.

    1996-03-01

This is the Fiscal Year 1995 Annual Report for the Advanced Manufacturing Science and Technology (AMST) sector of Los Alamos Tactical Goal 6, Industrial Partnering. During this past fiscal year, the AMST project leader formed a committee whose members represented the divisions and program offices with a manufacturing interest to examine the Laboratory's expertise and needs in manufacturing. From a list of about two hundred interest areas, the committee selected nineteen of the most pressing needs for weapon manufacturing. Based upon Los Alamos mission requirements and the needs of the weapon manufacturing (Advanced Design and Production Technologies (ADaPT)) program plan and the other tactical goals, the committee selected four of the nineteen areas for strategic planning and possible industrial partnering. The areas selected were Casting Technology, Constitutive Modeling, Non-Destructive Testing and Evaluation, and Polymer Aging and Lifetime Prediction. For each area, the AMST committee formed a team to write a roadmap and serve as a partnering technical consultant. To date, the roadmaps have been completed for each of the four areas. The Casting Technology and Polymer Aging teams are negotiating with specific potential partners now, at the close of the fiscal year. For each focus area we have created a list of existing collaborations and other ongoing partnering activities. In early Fiscal Year 1996, we will continue to develop partnerships in these four areas. Los Alamos National Laboratory instituted the tactical goals for industrial partnering to focus our institutional resources on partnerships that enhance core competencies and capabilities required to meet our national security mission of reducing the nuclear danger. The second industry sector targeted by Tactical Goal 6 was the chemical industry. Tactical Goal 6 is championed by the Industrial Partnership Office.

  7. Study of the structure and physical properties of quasicrystals using large scale facilities

    NASA Astrophysics Data System (ADS)

    de Boissieu, Marc

    2012-04-01

    Quasicrystals have been puzzling scientists since their discovery. In this article we review some of the recent advances in this field and show how the use of large scale facilities has brought in decisive information for the understanding of their structure and physical properties.

  8. The Challenge of Large-Scale Literacy Improvement

    ERIC Educational Resources Information Center

    Levin, Ben

    2010-01-01

    This paper discusses the challenge of making large-scale improvements in literacy in schools across an entire education system. Despite growing interest and rhetoric, there are very few examples of sustained, large-scale change efforts around school-age literacy. The paper reviews 2 instances of such efforts, in England and Ontario. After…

  9. INTERNATIONAL WORKSHOP ON LARGE-SCALE REFORESTATION: PROCEEDINGS

    EPA Science Inventory

    The purpose of the workshop was to identify major operational and ecological considerations needed to successfully conduct large-scale reforestation projects throughout the forested regions of the world. "Large-scale" for this workshop means projects where, by human effort, approx...

  10. Using Large-Scale Assessment Scores to Determine Student Grades

    ERIC Educational Resources Information Center

    Miller, Tess

    2013-01-01

    Many Canadian provinces provide guidelines for teachers to determine students' final grades by combining a percentage of students' scores from provincial large-scale assessments with their term scores. This practice is thought to hold students accountable by motivating them to put effort into completing the large-scale assessment, thereby…

  11. Analysis of labor productivity using large-scale data of firm's financial statements

    NASA Astrophysics Data System (ADS)

    Ikeda, Y.; Souma, W.; Aoyama, H.; Fujiwara, Y.; Iyetomi, H.

    2010-08-01

    We investigated the labor productivity distribution by analyzing large-scale financial statement data for listed and unlisted Japanese firms to clarify the characteristics of the Japanese labor market. Both the high- and low-productivity sides of the labor productivity distribution follow a power law. Large inequality on the low-productivity side was observed only for the manufacturing sectors in Japan fiscal year (JFY) 1999, and for both the manufacturing and non-manufacturing sectors in JFY 2002. The declines in the Japanese GDP in JFY 1999 and JFY 2002 coincided with the large inequality on the low-productivity side of the distribution. A lower peak was found for all non-manufacturing sectors. This might be the origin of the low productivity of the non-manufacturing sectors reported in recent economic studies.
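The power-law tails reported in this record can be characterized with the standard maximum-likelihood (Hill) estimator for the tail exponent. A minimal Python sketch on synthetic data; the generator, seed, and `xmin` below are illustrative assumptions, not the firm-level financial statements the study analyzed:

```python
import math
import random

def powerlaw_alpha(samples, xmin):
    """Maximum-likelihood (Hill) estimate of the exponent alpha for a
    power-law tail P(x) ~ x**(-alpha), restricted to samples >= xmin."""
    tail = [x for x in samples if x >= xmin]
    n = len(tail)
    return 1.0 + n / sum(math.log(x / xmin) for x in tail)

# Synthetic power-law data with alpha = 2.5 via inverse-transform sampling:
# if u ~ Uniform(0, 1), then xmin * u**(-1/(alpha-1)) follows the target law.
random.seed(0)
alpha_true = 2.5
data = [1.0 * random.random() ** (-1.0 / (alpha_true - 1.0)) for _ in range(100000)]

alpha_hat = powerlaw_alpha(data, xmin=1.0)
```

With this many synthetic draws the estimate closely recovers the true exponent; on real productivity data, choosing `xmin` (where the power-law tail begins) is the delicate step.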

  12. New Paradigms in International University/Industry/Government Cooperation. Canada-China Collaboration in Advanced Manufacturing Technologies.

    ERIC Educational Resources Information Center

    Bulgak, Akif Asil; Liquan, He

    1996-01-01

    A Chinese university and a Canadian university collaborated on an advanced manufacturing technologies project designed to address human resource development needs in China. The project featured university/industry/government partnership and attention to environmental issues. (SK)

  13. Advanced manufacturing technology effectiveness: A review of literature and some issues

    NASA Astrophysics Data System (ADS)

    Goyal, Sanjeev; Grover, Sandeep

    2012-09-01

    Advanced manufacturing technology (AMT) provides advantages to manufacturing managers in terms of flexibility, quality, reduced delivery times, and global competitiveness. Although a large number of publications had presented the importance of this technology, only a few had delved into related literature review. Considering the importance of this technology and the recent contributions by various authors, the present paper conducts a more comprehensive review. Literature was reviewed in a way that will help researchers, academicians, and practitioners to take a closer look at the implementation, evaluation, and justification of the AMT. The authors reviewed various papers, proposed a different classification scheme, and identified certain gaps that will provide hints for further research in AMT management.

  14. An experiment in remote manufacturing using the advanced communications technology satellite

    NASA Technical Reports Server (NTRS)

    Tsatsoulis, Costas; Frost, Victor

    1991-01-01

    The goal of the completed project was to develop an experiment in remote manufacturing that would use the capabilities of the ACTS satellite. A set of possible experiments that could be performed using the Advanced Communications Technology Satellite (ACTS), and which would perform remote manufacturing using a laser cutter and an integrated circuit testing machine are described in detail. The proposed design is shown to be a feasible solution to the offered problem and it takes into consideration the constraints that were placed on the experiment. In addition, we have developed two more experiments that are included in this report: backup of rural telecommunication networks, and remote use of Synthetic Aperture Radar (SAR) data analysis for on-site collection of glacier scattering data in the Antarctic.

  15. ANALYSIS OF TURBULENT MIXING JETS IN LARGE SCALE TANK

    SciTech Connect

    Lee, S.; Dimenna, R.; Leishear, R.; Stefanko, D.

    2007-03-28

    Flow evolution models were developed to evaluate the performance of the new advanced design mixer pump for sludge mixing and removal operations with high-velocity liquid jets in one of the large-scale Savannah River Site waste tanks, Tank 18. This paper describes the computational model, the flow measurements used to provide validation data in the region far from the jet nozzle, the extension of the computational results to real tank conditions through the use of existing sludge suspension data, and finally, the sludge removal results from actual Tank 18 operations. A computational fluid dynamics approach was used to simulate the sludge removal operations. The models employed a three-dimensional representation of the tank with a two-equation turbulence model. Both the computational approach and the models were validated with onsite test data reported here and literature data. The model was then extended to actual conditions in Tank 18 through a velocity criterion to predict the ability of the new pump design to suspend settled sludge. A qualitative comparison with sludge removal operations in Tank 18 showed a reasonably good comparison with final results subject to significant uncertainties in actual sludge properties.

  16. Implicit solvers for large-scale nonlinear problems

    SciTech Connect

    Keyes, D E; Reynolds, D; Woodward, C S

    2006-07-13

    Computational scientists are grappling with increasingly complex, multi-rate applications that couple such physical phenomena as fluid dynamics, electromagnetics, radiation transport, chemical and nuclear reactions, and wave and material propagation in inhomogeneous media. Parallel computers with large storage capacities are paving the way for high-resolution simulations of coupled problems; however, hardware improvements alone will not prove enough to enable simulations based on brute-force algorithmic approaches. To accurately capture nonlinear couplings between dynamically relevant phenomena, often while stepping over rapid adjustments to quasi-equilibria, simulation scientists are increasingly turning to implicit formulations that require a discrete nonlinear system to be solved for each time step or steady state solution. Recent advances in iterative methods have made fully implicit formulations a viable option for solution of these large-scale problems. In this paper, we overview one of the most effective iterative methods, Newton-Krylov, for nonlinear systems and point to software packages with its implementation. We illustrate the method with an example from magnetically confined plasma fusion and briefly survey other areas in which implicit methods have bestowed important advantages, such as allowing high-order temporal integration and providing a pathway to sensitivity analyses and optimization. Lastly, we overview algorithm extensions under development motivated by current SciDAC applications.
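The implicit formulations surveyed above reduce each time step to a nonlinear solve. A toy sketch of that structure: a scalar Newton iteration with an analytic derivative stands in for the Jacobian-free Newton-Krylov machinery the paper discusses, and the ODE and step size are illustrative assumptions:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Basic Newton iteration. Large-scale implicit codes replace the
    direct derivative solve here with a Krylov linear solver
    (Jacobian-free Newton-Krylov)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

def backward_euler_step(u_n, dt):
    """One implicit (backward Euler) step for the stiff ODE du/dt = -u**3:
    solve u + dt*u**3 = u_n for the new value u."""
    g = lambda u: u + dt * u**3 - u_n
    gp = lambda u: 1.0 + 3.0 * dt * u**2
    return newton(g, gp, u_n)

# March the stiff ODE forward with a step size far beyond the explicit
# stability limit; the inner nonlinear solve is what keeps it stable.
u, dt = 1.0, 0.5
for _ in range(10):
    u = backward_euler_step(u, dt)
```

The ability to step over the fast transient at a cost of a few Newton iterations per step is exactly the trade the abstract describes, scaled down from discrete nonlinear systems with millions of unknowns to one.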

  17. Lift-off of large-scale ultrathin nanomembranes

    NASA Astrophysics Data System (ADS)

    Miller, Joshua J.; Carter, Robert N.; McNabb, Kelly B.; DesOrmeaux, Jon-Paul S.; Striemer, Christopher C.; Winans, Joshua D.; Gaborski, Thomas R.

    2015-01-01

    Ultrathin silicon-based nanomembranes hold significant promise for advancements in applications ranging from separations to tissue engineering. Widespread application of these membranes has been hindered by their small active area, which typically ranges from square micrometers to square millimeters. These membranes are typically supported on silicon chips as small windows as a result of a time-consuming through-wafer etch process. This approach results in a relatively low active area and can be challenging to integrate into devices because of the rigid silicon support. In this paper, a lift-off approach is demonstrated wherein the membrane is supported by a polymeric scaffold and separated from the wafer to enable fabrication of membrane sheets (>75 cm²) with >80% active area. The wafer-scale lift-off process is demonstrated with 50 nm thick microporous and nanoporous silicon nitride (SiN) membranes. Release of large-scale SiN membranes is accomplished with both wet and dry lift-off techniques. The dry approach uses XeF2 gas to etch a sacrificial silicon film, while the wet etch uses buffered oxide etchant to remove a silicon dioxide sacrificial layer. Finally, it is demonstrated that lift-off membranes have excellent optical properties and can be used to support cell culture on a conventional scale.

  18. Development of large-scale functional networks over the lifespan.

    PubMed

    Schlee, Winfried; Leirer, Vera; Kolassa, Stephan; Thurm, Franka; Elbert, Thomas; Kolassa, Iris-Tatjana

    2012-10-01

    The development of large-scale functional organization of the human brain across the lifespan is not well understood. Here we used magnetoencephalographic recordings of 53 adults (ages 18-89) to characterize functional brain networks in the resting state. Slow frequencies engage larger networks than higher frequencies and show different development over the lifespan. Networks in the delta (2-4 Hz) frequency range decrease, while networks in the beta/gamma frequency range (> 16 Hz) increase in size with advancing age. Results show that the right frontal lobe and the temporal areas in both hemispheres are important relay stations in the expanding high-frequency networks. Neuropsychological tests confirmed the tendency of cognitive decline with older age. The decrease in visual memory and visuoconstructive functions was strongly associated with the age-dependent enhancement of functional connectivity in both temporal lobes. Using functional network analysis, this study elucidates important neuronal principles underlying age-related cognitive decline and the mental deterioration that accompanies senescence. PMID:22236372

  19. Large-Scale NASA Science Applications on the Columbia Supercluster

    NASA Technical Reports Server (NTRS)

    Brooks, Walter

    2005-01-01

    Columbia, NASA's newest 61 teraflops supercomputer that became operational late last year, is a highly integrated Altix cluster of 10,240 processors, and was named to honor the crew of the Space Shuttle lost in early 2003. Constructed in just four months, Columbia increased NASA's computing capability ten-fold, and revitalized the Agency's high-end computing efforts. Significant cutting-edge science and engineering simulations in the areas of space and Earth sciences, as well as aeronautics and space operations, are already occurring on this largest operational Linux supercomputer, demonstrating its capacity and capability to accelerate NASA's space exploration vision. The presentation will describe how an integrated environment consisting not only of next-generation systems, but also modeling and simulation, high-speed networking, parallel performance optimization, and advanced data analysis and visualization, is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions. The talk will conclude by discussing how NAS partnered with various NASA centers, other government agencies, computer industry, and academia, to create a national resource in large-scale modeling and simulation.

  20. Process control of large-scale finite element simulation software

    SciTech Connect

    Spence, P.A.; Weingarten, L.I.; Schroder, K.; Tung, D.M.; Sheaffer, D.A.

    1996-02-01

    We have developed a methodology for coupling large-scale numerical codes with process control algorithms. Closed-loop simulations were demonstrated using the Sandia-developed finite element thermal code TACO and the commercially available finite element thermal-mechanical code ABAQUS. This new capability enables us to use computational simulations for designing and prototyping advanced process-control systems. By testing control algorithms on simulators before building and testing hardware, enormous time and cost savings can be realized. The need for a closed-loop simulation capability was demonstrated in a detailed design study of a rapid-thermal-processing reactor under development by CVC Products Inc. Using a thermal model of the RTP system as a surrogate for the actual hardware, we were able to generate response data needed for controller design. We then evaluated the performance of both the controller design and the hardware design by using the controller to drive the finite element model. The controlled simulations provided data on wafer temperature uniformity as a function of ramp rate, temperature sensor locations, and controller gain. This information, which is critical to reactor design, cannot be obtained from typical open-loop simulations.
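The closed-loop idea, using a controller to drive a simulation surrogate of the hardware, can be sketched in a few lines. Here a first-order lumped thermal model stands in for the finite element codes (TACO/ABAQUS), and the PI gains, time constant, and setpoint are hypothetical values chosen for illustration:

```python
def simulate_closed_loop(setpoint, steps=2000, dt=0.01,
                         tau=1.0, kp=5.0, ki=2.0):
    """Drive a first-order lumped thermal surrogate model
        dT/dt = (-T + u) / tau
    toward the setpoint with a PI controller, via explicit Euler steps."""
    T = 0.0          # surrogate state (e.g. wafer temperature)
    integral = 0.0   # accumulated error for the integral term
    for _ in range(steps):
        error = setpoint - T
        integral += error * dt
        u = kp * error + ki * integral   # controller output (heater drive)
        T += dt * (-T + u) / tau         # plant update
    return T

final_T = simulate_closed_loop(setpoint=900.0)
```

The integral term removes the steady-state offset a purely proportional controller would leave; swapping the one-line surrogate update for a call into a finite element thermal model is the step the paper's methodology describes.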

  1. Food security through large scale investments in agriculture

    NASA Astrophysics Data System (ADS)

    Rulli, M.; D'Odorico, P.

    2013-12-01

    Most of the human appropriation of freshwater resources is for food production. There is some concern that in the near future the finite freshwater resources available on Earth might not be sufficient to meet the increasing human demand for agricultural products. In the late 1700s Malthus argued that in the long run the humanity would not have enough resources to feed itself. Malthus' analysis, however, did not account for the emergence of technological innovations that could increase the rate of food production. The modern and contemporary history has seen at least three major technological advances that have increased humans' access to food, namely, the industrial revolution, the green revolution, and the intensification of global trade. Here we argue that a fourth revolution has just started to happen. It involves foreign direct investments in agriculture, which intensify the crop yields of potentially highly productive agricultural lands by introducing the use of more modern technologies. The increasing demand for agricultural products and the uncertainty of international food markets has recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions for commercial farming will bring the technology required to close the existing yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of verified land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with large scale land acquisitions. We

  2. Distribution probability of large-scale landslides in central Nepal

    NASA Astrophysics Data System (ADS)

    Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi

    2014-12-01

    Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related to such large-scale landslides are 1) difficulties in their identification and delineation, 2) sources of small-scale failures, and 3) reactivation. Only a few scientific publications have been published concerning large-scale landslides in Nepal. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters and logistic regression, an equation of large-scale landslide distribution is also derived. The equation is validated by applying it to another area, for which the area under the receiver operating characteristic curve of the landslide distribution probability is 0.699, and a distribution probability value could explain > 65% of existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
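The validation step described above, logistic regression scores evaluated by the area under the ROC curve, can be sketched in pure Python. The slope-angle predictor, coefficients, and sample size below are illustrative assumptions, not the paper's dataset:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=1.0, epochs=3000):
    """Fit P(y=1 | x) = sigmoid(w*x + b) by batch gradient descent."""
    w = b = 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y
            gw += err * x
            gb += err
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Synthetic predictor standing in for one geomorphological parameter
# (slope angle, degrees): steeper slopes fail more often.
random.seed(1)
slopes = [random.uniform(0.0, 60.0) for _ in range(400)]
labels = [1 if random.random() < sigmoid(0.15 * (s - 30.0)) else 0 for s in slopes]

xs = [s / 60.0 for s in slopes]          # rescale feature to [0, 1]
w, b = fit_logistic(xs, labels)
auc = roc_auc([sigmoid(w * x + b) for x in xs], labels)
```

An AUC near the paper's 0.699 would mean the fitted probabilities rank failed cells above stable ones about 70% of the time; the multi-parameter regression the study derives follows the same pattern with more predictors.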

  3. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    SciTech Connect

    Gerber, Richard A.; Wasserman, Harvey J.

    2012-03-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources, NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE’s Office of Advanced Scientific Computing Research (ASCR) and DOE’s Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC’s continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called “case studies,” of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, “multi-core” environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings and all of the action items are aligned with NERSC strategic plans.

  4. Large Scale Chemical Cross-linking Mass Spectrometry Perspectives

    PubMed Central

    Zybailov, Boris L.; Glazko, Galina V.; Jaiswal, Mihir; Raney, Kevin D.

    2014-01-01

    The spectacular heterogeneity of a complex protein mixture from biological samples becomes even more difficult to tackle when one’s attention is shifted towards different protein complex topologies, transient interactions, or localization of PPIs. Meticulous protein-by-protein affinity pull-downs and yeast-two-hybrid screens are the two approaches currently used to decipher proteome-wide interaction networks. Another method is to employ chemical cross-linking, which gives not only identities of interactors, but could also provide information on the sites of interactions and interaction interfaces. Despite significant advances in mass spectrometry instrumentation over the last decade, mapping Protein-Protein Interactions (PPIs) using chemical cross-linking remains time consuming and requires substantial expertise, even in the simplest of systems. While robust methodologies and software exist for the analysis of binary PPIs and also for single protein structure refinement using cross-linking-derived constraints, undertaking a proteome-wide cross-linking study is highly complex. Difficulties include i) identifying cross-linkers of the right length and selectivity that could capture interactions of interest; ii) enrichment of the cross-linked species; iii) identification and validation of the cross-linked peptides and cross-linked sites. In this review we examine existing literature aimed at large-scale protein cross-linking and discuss possible paths for improvement. We also discuss short-length cross-linkers of broad specificity such as formaldehyde and diazirine-based photo-cross-linkers. These cross-linkers could potentially capture many types of interactions, without strict requirement for a particular amino acid to be present at a given protein-protein interface. How can these short-length, broad-specificity cross-linkers be applied to proteome-wide studies? We will suggest specific advances in methodology, instrumentation and software that are needed to

  5. Chronic, Wireless Recordings of Large Scale Brain Activity in Freely Moving Rhesus Monkeys

    PubMed Central

    Schwarz, David A.; Lebedev, Mikhail A.; Hanson, Timothy L.; Dimitrov, Dragan F.; Lehew, Gary; Meloy, Jim; Rajangam, Sankaranarayani; Subramanian, Vivek; Ifft, Peter J.; Li, Zheng; Ramakrishnan, Arjun; Tate, Andrew; Zhuang, Katie; Nicolelis, Miguel A.L.

    2014-01-01

    Advances in techniques for recording large-scale brain activity contribute to both the elucidation of neurophysiological principles and the development of brain-machine interfaces (BMIs). Here we describe a neurophysiological paradigm for performing tethered and wireless large-scale recordings based on movable volumetric three-dimensional (3D) multielectrode implants. This approach allowed us to isolate up to 1,800 units per animal and simultaneously record the extracellular activity of close to 500 cortical neurons, distributed across multiple cortical areas, in freely behaving rhesus monkeys. The method is expandable, in principle, to thousands of simultaneously recorded channels. It also allows increased recording longevity (5 consecutive years), and recording of a broad range of behaviors, e.g. social interactions, and BMI paradigms in freely moving primates. We propose that wireless large-scale recordings could have a profound impact on basic primate neurophysiology research, while providing a framework for the development and testing of clinically relevant neuroprostheses. PMID:24776634

  6. Large-scale simulations of layered double hydroxide nanocomposite materials

    NASA Astrophysics Data System (ADS)

    Thyveetil, Mary-Ann

    Layered double hydroxides (LDHs) have the ability to intercalate a multitude of anionic species. Atomistic simulation techniques such as molecular dynamics have provided considerable insight into the behaviour of these materials. We review these techniques and recent algorithmic advances which considerably improve the performance of MD applications. In particular, we discuss how the advent of high performance computing and computational grids has allowed us to explore large scale models with considerable ease. Our simulations have been heavily reliant on computational resources on the UK's NGS (National Grid Service), the US TeraGrid and the Distributed European Infrastructure for Supercomputing Applications (DEISA). In order to utilise computational grids we rely on grid middleware to launch, computationally steer and visualise our simulations. We have integrated the RealityGrid steering library into the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) [1], which has enabled us to perform remote computational steering and visualisation of molecular dynamics simulations on grid infrastructures. We also use the Application Hosting Environment (AHE) [2] in order to launch simulations on remote supercomputing resources and we show that data transfer rates between local clusters and supercomputing resources can be considerably enhanced by using optically switched networks. We perform large scale molecular dynamics simulations of Mg-Al LDHs intercalated with either chloride ions or a mixture of DNA and chloride ions. The systems exhibit undulatory modes, which are suppressed in smaller scale simulations, caused by the collective thermal motion of atoms in the LDH layers. Thermal undulations provide elastic properties of the system including the bending modulus, Young's moduli and Poisson's ratios. To explore the interaction between LDHs and DNA, we use molecular dynamics techniques to perform simulations of double stranded, linear and plasmid DNA up

  7. Strategies for Interactive Visualization of Large Scale Climate Simulations

    NASA Astrophysics Data System (ADS)

    Xie, J.; Chen, C.; Ma, K.; Parvis

    2011-12-01

    With the advances in computational methods and supercomputing technology, climate scientists are able to perform large-scale simulations at unprecedented resolutions. These simulations produce data that are time-varying, multivariate, and volumetric, and the data may contain thousands of time steps with each time step having billions of voxels and each voxel recording dozens of variables. Visualizing such time-varying 3D data to examine correlations between different variables thus becomes a daunting task. We have been developing strategies for interactive visualization and correlation analysis of multivariate data. The primary task is to find connection and correlation among data. Given the many complex interactions among the Earth's oceans, atmosphere, land, ice and biogeochemistry, and the sheer size of observational and climate model data sets, interactive exploration helps identify which processes matter most for a particular climate phenomenon. We may consider time-varying data as a set of samples (e.g., voxels or blocks), each of which is associated with a vector of representative or collective values over time. We refer to such a vector as a temporal curve. Correlation analysis thus operates on temporal curves of data samples. A temporal curve can be treated as a two-dimensional function where the two dimensions are time and data value. It can also be treated as a point in the high-dimensional space. In this case, to facilitate effective analysis, it is often necessary to transform temporal curve data from the original space to a space of lower dimensionality. Clustering and segmentation of temporal curve data in the original or transformed space provides us a way to categorize and visualize data of different patterns, which reveals connection or correlation of data among different variables or at different spatial locations. 
    We have employed the power of GPUs to enable interactive correlation visualization for studying the variability and correlations of a
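Correlation analysis over temporal curves, as described above, reduces at its core to a correlation measure between per-sample time series. A minimal sketch with Pearson correlation on synthetic curves; the variable names are illustrative, and the GPU batching the authors describe is out of scope:

```python
import math

def pearson(a, b):
    """Pearson correlation between two temporal curves of equal length."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Synthetic temporal curves: two in-phase variables and one out of phase.
t = [0.1 * i for i in range(100)]
temperature = [math.sin(x) for x in t]
pressure = [0.8 * math.sin(x) + 0.1 for x in t]   # linearly related to temperature
salinity = [math.cos(x) for x in t]               # quarter-period phase shift

r_tp = pearson(temperature, pressure)
r_ts = pearson(temperature, salinity)
```

Computing such coefficients between every pair of voxels or blocks is what makes the problem data-parallel and a natural fit for the GPU acceleration mentioned in the abstract.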

  8. Mechanisation of large-scale agricultural fields in developing countries - a review.

    PubMed

    Onwude, Daniel I; Abdulstter, Rafia; Gomes, Chandima; Hashim, Norhashila

    2016-09-01

    Mechanisation of large-scale agricultural fields often requires the application of modern technologies such as mechanical power, automation, control and robotics. These technologies are generally associated with relatively well developed economies. The application of these technologies in some developing countries in Africa and Asia is limited by factors such as technology compatibility with the environment, availability of resources to facilitate the technology adoption, cost of technology purchase, government policies, adequacy of technology and appropriateness in addressing the needs of the population. As a result, many of the available resources have been used inadequately by farmers, who continue to rely mostly on conventional means of agricultural production, using traditional tools and equipment in most cases. This has led to low productivity and high cost of production, among others. Therefore this paper attempts to evaluate the application of present-day technology and its limitations to the advancement of large-scale mechanisation in developing countries of Africa and Asia. Particular emphasis is given to a general understanding of the various levels of mechanisation, present-day technology, its management and application to large-scale agricultural fields. This review also emphasizes a future outlook that will enable gradual, evolutionary and sustainable technological change. The study concludes that large-scale agricultural farm mechanisation for sustainable food production in Africa and Asia must be anchored on a coherent strategy based on the actual needs and priorities of the large-scale farmers. © 2016 Society of Chemical Industry. PMID:26940194

  9. Large-Scale Disturbance Events in Terrestrial Ecosystems Detected using Global Satellite Data Sets

    NASA Astrophysics Data System (ADS)

    Potter, C.; Tan, P.; Kumar, V.; Klooster, S.

    2004-12-01

    Studies are being conducted to evaluate patterns in a 19-year record of global satellite observations of vegetation phenology from the Advanced Very High Resolution Radiometer (AVHRR), as a means to characterize large-scale ecosystem disturbance events and regimes. The fraction of photosynthetically active radiation absorbed by vegetation canopies (FPAR) worldwide has been computed at a monthly time interval from 1982 to 2000 and gridded at a spatial resolution of 8-km globally. Potential disturbance events were identified in the FPAR time series by locating anomalously low values (FPAR-LO) that lasted longer than 12 consecutive months at any 8-km pixel. We find verifiable evidence of numerous disturbance types across North America, including major regional patterns of cold and heat waves, forest fires, tropical storms, and large-scale forest logging. Based on this analysis, an historical picture is emerging of periodic droughts and heat waves, possibly coupled with herbivorous insect outbreaks, as among the most important causes of ecosystem disturbance in North America. In South America, large areas of northeastern Brazil appear to have been impacted in the early 1990s by severe drought. Amazon tropical forest disturbance can be detected at large scales particularly in the mid 1990s. In Asia, large-scale disturbance events appear in the mid 1980s and the late 1990s across boreal and temperate forest zones, as well as in cropland areas of western India. In northern Europe and central Africa, large-scale forest disturbance appears in the mid 1990s.
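The FPAR-LO criterion, flagging anomalously low values that persist longer than 12 consecutive months, amounts to thresholded run-length detection on each pixel's time series. A sketch on a synthetic monthly series; the threshold rule here (mean minus one standard deviation) is an illustrative stand-in for the study's anomaly definition:

```python
def detect_disturbances(series, n_std=1.0, min_run=12):
    """Return (start, end) index pairs for runs of anomalously low values
    (below mean - n_std * std) persisting longer than min_run consecutive
    time steps, in the spirit of the FPAR-LO criterion."""
    n = len(series)
    mean = sum(series) / n
    std = (sum((x - mean) ** 2 for x in series) / n) ** 0.5
    threshold = mean - n_std * std
    events, run_start = [], None
    for i, value in enumerate(series + [float("inf")]):  # sentinel flushes last run
        if value < threshold:
            if run_start is None:
                run_start = i
        elif run_start is not None:
            if i - run_start > min_run:
                events.append((run_start, i - 1))
            run_start = None
    return events

# 48 months of stable FPAR with an 18-month depression (e.g. drought + fire).
fpar = [0.8] * 12 + [0.3] * 18 + [0.8] * 18
events = detect_disturbances(fpar)
```

On the synthetic series, the single 18-month depression is flagged while the stable months are not; a 12-month depression would be rejected by the `min_run` rule, which is what filters ordinary seasonal lows out of the disturbance record.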

  10. Advanced manufacturing development of a composite empennage component for L-1011 aircraft

    NASA Technical Reports Server (NTRS)

    Alva, T.; Henkel, J.; Johnson, R.; Carll, B.; Jackson, A.; Mosesian, B.; Brozovic, R.; Obrien, R.; Eudaily, R.

    1982-01-01

This is the final report of technical work conducted during the fourth phase of a multiphase program having the objective of the design, development and flight evaluation of an advanced composite empennage component manufactured in a production environment at a cost competitive with that of its metal counterpart, and at a weight savings of at least 20 percent. The empennage component selected for this program is the vertical fin box of the L-1011 aircraft. The box structure extends from the fuselage production joint to the tip rib and includes front and rear spars. During Phase 4 of the program, production quality tooling was designed and manufactured to produce three sets of covers, ribs, spars, miscellaneous parts, and subassemblies to assemble three complete ACVF units. Recurring and nonrecurring cost data were compiled and documented in the updated producibility/design-to-cost plan. Nondestructive inspections, quality control tests, and quality acceptance tests were performed in accordance with the quality assurance plan and the structural integrity control plan. Records were maintained to provide traceability of material and parts throughout the manufacturing development phase. It was also determined that additional tooling would not be required to support the current and projected L-1011 production rate.

  11. Interpretation of large-scale deviations from the Hubble flow

    NASA Astrophysics Data System (ADS)

    Grinstein, B.; Politzer, H. David; Rey, S.-J.; Wise, Mark B.

    1987-03-01

The theoretical expectation for large-scale streaming velocities relative to the Hubble flow is expressed in terms of statistical correlation functions. Only for objects that trace the mass would these velocities have a simple cosmological interpretation. If some biasing affects the objects' formation, then nonlinear gravitational evolution is essential to predicting the expected large-scale velocities, which also depend on the nature of the biasing.

  12. Large scale suppression of scalar power on a spatial condensation

    NASA Astrophysics Data System (ADS)

    Kouwn, Seyen; Kwon, O.-Kab; Oh, Phillial

    2015-03-01

We consider a deformed single-field inflation model in terms of three SO(3) symmetric moduli fields. We find that spatially linear solutions for the moduli fields induce a phase transition during the early stage of inflation and a suppression of the scalar power spectrum at large scales. This suppression can be an origin of the anomalies observed in large-scale perturbation modes in cosmological observations.

  13. Large-scale motions in a plane wall jet

    NASA Astrophysics Data System (ADS)

    Gnanamanickam, Ebenezer; Jonathan, Latim; Shibani, Bhatt

    2015-11-01

The dynamic significance of large-scale motions in turbulent boundary layers has been the focus of several recent studies, primarily in canonical flows: zero-pressure-gradient boundary layers and flows within pipes and channels. This work presents an investigation into the large-scale motions in a boundary layer that is used as the prototypical flow field for flows with large-scale mixing and reactions, the plane wall jet. An experimental investigation is carried out in a plane wall jet facility designed to operate at friction Reynolds numbers Reτ > 1000, which allows for the development of a significant logarithmic region. The streamwise turbulent intensity across the boundary layer is decomposed into small-scale (less than one integral length scale δ) and large-scale components. The small-scale energy has a peak in the near-wall region associated with the near-wall turbulent cycle, as in canonical boundary layers. However, the large-scale eddies dominate, carrying significantly higher energy than the small scales across almost the entire boundary layer, even at the low to moderate Reynolds numbers under consideration. The large scales also appear to amplitude- and frequency-modulate the smaller scales across the entire boundary layer.

  14. Reduced toxicity polyester resins and microvascular pre-preg tapes for advanced composites manufacturing

    NASA Astrophysics Data System (ADS)

    Poillucci, Richard

Advanced composites manufacturing broadly encapsulates topics ranging from matrix chemistries to automated machines that lay-up fiber-reinforced materials. Environmental regulations are stimulating research to reduce matrix resin formulation toxicity. At present, composites fabricated with polyester resins expose workers to the risk of contact with and inhalation of styrene monomer, which is a potential carcinogen, neurotoxin, and respiratory irritant. The first primary goal of this thesis is to reduce the toxicity associated with polyester resins by: (1) identification of potential monomers to replace styrene, (2) determination of monomer solubility within the polyester, and (3) investigation of approaches to rapidly screen a large resin composition parameter space. Monomers are identified based on their ability to react with polyester and their toxicity as determined by the Globally Harmonized System (GHS) and a green screen method. Solubilities were determined by the Hoftyzer-Van Krevelen method, the Hansen solubility parameter database, and experimental mixing of monomers. A combinatorial microfluidic mixing device is designed and tested to obtain distinct resin compositions from two input chemistries. The push for safer materials is complemented by a thrust for multifunctional composites. The second primary goal of this thesis is to design and implement the manufacture of sacrificial fiber materials suitable for use in automated fiber placement of microvascular multifunctional composites. Two key advancements are required to achieve this goal: (1) development of a roll-to-roll method to place sacrificial fibers onto carbon fiber pre-preg tape; and (2) demonstration of feasible manufacture of microvascular carbon fiber plates with automated fiber placement. An automated method for placing sacrificial fibers onto carbon fiber tapes is designed and a prototype implemented. Carbon fiber tows with manual placement of sacrificial fibers are implemented within an

  15. Large-Scale Identification of Virulence Genes from Streptococcus pneumoniae

    PubMed Central

    Polissi, Alessandra; Pontiggia, Andrea; Feger, Georg; Altieri, Mario; Mottl, Harald; Ferrari, Livia; Simon, Daniel

    1998-01-01

Streptococcus pneumoniae is the major cause of bacterial pneumonia, and it is also responsible for otitis media and meningitis in children. Apart from the capsule, the virulence factors of this pathogen are not completely understood. Recent technical advances in the field of bacterial pathogenesis (in vivo expression technology and signature-tagged mutagenesis [STM]) have allowed a large-scale identification of virulence genes. We have adapted to S. pneumoniae the STM technique, originally used for the discovery of Salmonella genes involved in pathogenicity. A library of pneumococcal chromosomal fragments (400 to 600 bp) was constructed in a suicide plasmid vector carrying unique DNA sequence tags and a chloramphenicol resistance marker. The recent clinical isolate G54 was transformed with this library. Chloramphenicol-resistant mutants were obtained by homologous recombination, resulting in genes inactivated by insertion of the suicide vector carrying a unique tag. In a mouse pneumonia model, 1,250 candidate clones were screened; 200 of these were not recovered from the lungs and were therefore considered virulence-attenuated mutants. The regions flanking the chloramphenicol gene of the attenuated mutants were amplified by inverse PCR and sequenced. The sequence analysis showed that the 200 mutants had insertions in 126 different genes that could be grouped into six classes: (i) known pneumococcal virulence genes; (ii) genes involved in metabolic pathways; (iii) genes encoding proteases; (iv) genes coding for ATP binding cassette transporters; (v) genes encoding proteins involved in DNA recombination/repair; and (vi) DNA sequences that showed similarity to hypothetical genes with unknown function. To evaluate the virulence attenuation for each mutant, all 126 clones were individually analyzed in a mouse septicemia model. Not all mutants selected in the pneumonia model were confirmed in septicemia, thus indicating the existence of virulence factors specific to pneumonia.

  16. ELECTRON PARAMAGNETIC RESONANCE DOSIMETRY FOR A LARGE-SCALE RADIATION INCIDENT

    PubMed Central

    Swartz, Harold M.; Flood, Ann Barry; Williams, Benjamin B.; Dong, Ruhong; Swarts, Steven G.; He, Xiaoming; Grinberg, Oleg; Sidabras, Jason; Demidenko, Eugene; Gui, Jiang; Gladstone, David J.; Jarvis, Lesley A.; Kmiec, Maciej M.; Kobayashi, Kyo; Lesniewski, Piotr N.; Marsh, Stephen D.P.; Matthews, Thomas P.; Nicolalde, Roberto J.; Pennington, Patrick M.; Raynolds, Timothy; Salikhov, Ildar; Wilcox, Dean E.; Zaki, Bassem I.

    2013-01-01

    With possibilities for radiation terrorism and intensified concerns about nuclear accidents since the recent Fukushima Daiichi event, the potential exposure of large numbers of individuals to radiation that could lead to acute clinical effects has become a major concern. For the medical community to cope with such an event and avoid overwhelming the medical care system, it is essential to identify not only individuals who have received clinically significant exposures and need medical intervention but also those who do not need treatment. The ability of electron paramagnetic resonance to measure radiation-induced paramagnetic species, which persist in certain tissues (e.g., teeth, fingernails, toenails, bone, and hair), has led this technique to become a prominent method for screening significantly exposed individuals. Although the technical requirements needed to develop this method for effective application in a radiation event are daunting, remarkable progress has been made. In collaboration with General Electric, and through funding committed by the Biomedical Advanced Research and Development Authority, electron paramagnetic resonance tooth dosimetry of the upper incisors is being developed to become a Food and Drug Administration-approved and manufacturable device designed to carry out triage for a threshold dose of 2 Gy. Significant progress has also been made in the development of electron paramagnetic resonance nail dosimetry based on measurements of nails in situ under point-of-care conditions, and in the near future this may become a second field-ready technique. Based on recent progress in measurements of nail clippings, we anticipate that this technique may be implementable at remotely located laboratories to provide additional information when the measurements of dose on site need to be supplemented. We conclude that electron paramagnetic resonance dosimetry is likely to be a useful part of triage for a large-scale radiation incident. PMID:22850230

  17. Scalable techniques for the analysis of large-scale materials data

    NASA Astrophysics Data System (ADS)

    Samudrala, Sai Kiranmayee

Many physical systems of fundamental and industrial importance are significantly affected by the development of new materials. By establishing process-structure-property relationships one can design new, tailor-made materials that possess desired properties. Conventional experimental and analytical techniques like first-principles calculations, though accurate, are extremely tedious and resource-intensive, resulting in a significant gap between the time a new material is discovered and the time it is put to engineering practice. Furthermore, the huge amounts of data produced by these techniques pose a tough challenge in terms of analysis. This thesis addresses the challenges in analyzing huge datasets by leveraging advanced mathematical and computational techniques in order to establish process-structure-property relationships of materials. The first of the three parts of this thesis describes the application of dimensionality reduction (DR) techniques to analyze a dataset of apatites described in a structural descriptor space. This data reveals interesting correlations of structural descriptors like ionic radius and covalence with characteristic properties like apatite stability; information crucial to promote the use of apatites as an antidote in lead poisoning. The second part of the thesis describes a parallel spectral DR framework that can process thousands of points lying in a million-dimensional space, which is beyond the reach of currently available tools. To further demonstrate the applicability of our framework, we perform dimensionality reduction of 75,000 images representing morphology evolution during manufacturing of organic solar cells in order to identify the optimal processing parameters. The third significant approach discussed in this thesis applies well-studied graph-theoretic methods to analyze large datasets produced by Atom Probe Tomography (APT) to quantify the morphology of precipitates in a solvent material. The above three mathematical models
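Spectral DR methods of the kind this thesis describes share a common linear-algebra core: center the data and project it onto the leading directions of a covariance or similarity structure. A minimal sketch of the simplest linear case (plain PCA via SVD), shown only to illustrate the pattern; the thesis's parallel spectral framework and its specific methods are not reproduced here, and the data below is synthetic:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project rows of X (samples x descriptors) onto the top principal
    components -- the simplest linear instance of spectral DR."""
    Xc = X - X.mean(axis=0)                      # center each descriptor
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T              # coordinates in reduced space

# Illustrative data: 20 samples described by 5 structural descriptors
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))
Y = pca_reduce(X, 2)
```

The reduced coordinates are ordered by explained variance, which is what makes low-dimensional embeddings useful for spotting descriptor-property correlations.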

  18. Reconciling subduction dynamics during Tethys closure with large-scale Asian tectonics: Insights from numerical modeling

    NASA Astrophysics Data System (ADS)

    Capitanio, F. A.; Replumaz, A.; Riel, N.

    2015-03-01

We use three-dimensional numerical models to investigate the relation between subduction dynamics and large-scale tectonics of continent interiors. The models show how the balance between forces at the plate margins, such as subduction, ridge push, and far-field forces, controls the coupled evolution of plate margins and interiors. Removal of part of the slab by lithospheric break-off during subduction destabilizes the convergent margin, forcing migration of the subduction zone, whereas in the upper plate large-scale lateral extrusion, rotations, and back-arc stretching ensue. When external forces are modeled, such as ridge push and far-field forces, indentation increases, with large collisional margin advance and thickening in the upper plate. The balance between margin and external forces leads to similar convergent margin evolutions, whereas major differences occur in the upper plate interiors. Here, three strain regimes are found: large-scale extrusion; extrusion and thickening along the collisional margin; and thickening only, arising when negligible far-field forces, ridge push, and larger far-field forces, respectively, add to the subduction dynamics. The extrusion tectonics develops a strong asymmetry toward the oceanic margin driven by large-scale subduction, with no need of preexisting heterogeneities in the upper plate. Because the slab break-off perturbation is transient, the ensuing plate tectonics is time-dependent. The modeled deformation and its evolution are remarkably similar to the Cenozoic Asian tectonics, explaining large-scale lithospheric faulting and thickening, and coupling of indentation, extrusion and extension along the Asian convergent margin as a result of large-scale subduction process.

  19. The large-scale landslide risk classification in catchment scale

    NASA Astrophysics Data System (ADS)

    Liu, Che-Hsin; Wu, Tingyeh; Chen, Lien-Kuang; Lin, Sheng-Chi

    2013-04-01

The landslide disasters during Typhoon Morakot in 2009 caused heavy casualties. The disaster is defined as a large-scale landslide event on the basis of the casualty numbers. The event also showed that surveys of large-scale landslide potential are so far insufficient. Large-scale landslide potential analysis indicates where attention should be focused, even though such areas are difficult to distinguish. Accordingly, the authors investigate the assessment methods used in other countries, such as Hong Kong, Italy, Japan, and Switzerland, to clarify the methodology. The objects of analysis are areas susceptible to rock slides and dip-slope failures, together with major landslide areas identified from historical records. Three scale levels, from the national scale down to individual slopes, are found to be necessary: basin, catchment, and slope scales. In total, ten spots were classified as having high large-scale landslide potential at the basin scale. In this paper the authors therefore focus on the catchment scale and employ a risk matrix to classify the potential. Two main indexes are used to classify large-scale landslide risk: the protected objects and the large-scale landslide susceptibility ratio. The protected objects are constructions and transportation facilities. The large-scale landslide susceptibility ratio is based on data for major landslide areas and for dip-slope and rock-slide areas. In total, 1,040 catchments are considered and classified into three levels: high, medium, and low. The proportions of the high, medium, and low levels are 11%, 51%, and 38%, respectively. The result identifies the catchments with a high proportion of protected objects or high large-scale landslide susceptibility, and serves as base material for slopeland authorities when considering slopeland management and further investigation.
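The two-index classification in this record amounts to a simple risk matrix. A minimal sketch in Python, assuming both indexes are normalized to [0, 1] and using illustrative 0.5 cutoffs; the study's actual thresholds are not given in the abstract:

```python
def landslide_risk_level(protected_objects, susceptibility_ratio,
                         object_cut=0.5, ratio_cut=0.5):
    """Classify a catchment on a 2-index risk matrix: the count of
    indexes at or above their cutoff maps to a high/medium/low level."""
    high_count = (int(protected_objects >= object_cut)
                  + int(susceptibility_ratio >= ratio_cut))
    return {2: "high", 1: "medium", 0: "low"}[high_count]
```

For example, `landslide_risk_level(0.8, 0.7)` returns `"high"`, while `landslide_risk_level(0.8, 0.2)` returns `"medium"`.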

  20. Unsaturated Hydraulic Conductivity for Evaporation in Large scale Heterogeneous Soils

    NASA Astrophysics Data System (ADS)

    Sun, D.; Zhu, J.

    2014-12-01

In this study we aim to provide practical guidelines on how the commonly used simple averaging schemes (arithmetic, geometric, or harmonic mean) perform in simulating large-scale evaporation over a heterogeneous landscape. Previous studies on hydraulic property upscaling, focusing on steady-state flux exchanges, illustrated that an effective hydraulic property is usually more difficult to define for evaporation. This study focuses on upscaling hydraulic properties of large-scale transient evaporation dynamics using the stream tube approach. Specifically, the two main objectives are: (1) to determine whether the three simple averaging schemes (i.e., arithmetic, geometric, and harmonic means) of hydraulic parameters are appropriate for representing large-scale evaporation processes, and (2) to examine how the applicability of these simple averaging schemes depends on the time scale of the evaporation processes in heterogeneous soils. Multiple realizations of local evaporation processes are carried out using the HYDRUS-1D computational code (Simunek et al., 1998). The three averaging schemes of soil hydraulic parameters were used to simulate the cumulative flux exchange, which is then compared with the large-scale average cumulative flux. The sensitivity of the relative errors to the time frame of the evaporation processes is also discussed.
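The three averaging schemes compared in this record are elementary to compute. A minimal sketch in Python, with hypothetical conductivity values for illustration; note the general ordering harmonic ≤ geometric ≤ arithmetic, which is why the choice of scheme can change an effective-property estimate by orders of magnitude for highly variable soils:

```python
from math import prod

def arithmetic_mean(values):
    """Simple average; tends to weight the most conductive units."""
    return sum(values) / len(values)

def geometric_mean(values):
    """nth root of the product; common for log-normally distributed K."""
    return prod(values) ** (1.0 / len(values))

def harmonic_mean(values):
    """Reciprocal average; tends to weight the least conductive units."""
    return len(values) / sum(1.0 / v for v in values)

# Hypothetical saturated conductivities (m/s) for three soil units
K = [1e-6, 1e-5, 1e-4]
```

For `K` above, the three means span two orders of magnitude, illustrating how strongly the upscaled parameter depends on the chosen scheme.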

  1. EINSTEIN'S SIGNATURE IN COSMOLOGICAL LARGE-SCALE STRUCTURE

    SciTech Connect

    Bruni, Marco; Hidalgo, Juan Carlos; Wands, David

    2014-10-10

We show how the nonlinearity of general relativity generates a characteristic non-Gaussian signal in cosmological large-scale structure that we calculate at all perturbative orders in a large-scale limit. Newtonian gravity and general relativity provide complementary theoretical frameworks for modeling large-scale structure in ΛCDM cosmology; a relativistic approach is essential to determine initial conditions, which can then be used in Newtonian simulations studying the nonlinear evolution of the matter density. Most inflationary models in the very early universe predict an almost Gaussian distribution for the primordial metric perturbation, ζ. However, we argue that it is the Ricci curvature of comoving-orthogonal spatial hypersurfaces, R, that drives structure formation at large scales. We show how the nonlinear relation between the spatial curvature, R, and the metric perturbation, ζ, translates into a specific non-Gaussian contribution to the initial comoving matter density that we calculate for the simple case of an initially Gaussian ζ. Our analysis shows the nonlinear signature of Einstein's gravity in large-scale structure.

  2. Large-scale growth of the Plasmodium falciparum malaria parasite in a wave bioreactor.

    PubMed

    Dalton, John P; Demanga, Corine G; Reiling, Sarah J; Wunderlich, Juliane; Eng, Jenny W L; Rohrbach, Petra

    2012-01-01

We describe methods for the large-scale in vitro culturing of synchronous and asynchronous blood-stage Plasmodium falciparum parasites in sterile disposable plastic bioreactors controlled by wave-induced motion (wave bioreactor). These cultures perform better than static flask cultures in terms of preserving parasite cell cycle synchronicity and reducing the number of multiple-infected erythrocytes. The straightforward methods described here will facilitate the large-scale production of malaria parasites for antigen and organelle isolation and characterisation, for the high-throughput screening of compound libraries with whole cells or extracts, and for the development of live- or whole-cell malaria vaccines under good manufacturing practice compliant standards. PMID:22326740

  3. Rapid Intelligent Inspection Process Definition for dimensional measurement in advanced manufacturing

    SciTech Connect

    Brown, C.W.

    1993-03-01

The Rapid Intelligent Inspection Process Definition (RIIPD) project is an industry-led effort to advance computer integrated manufacturing (CIM) systems for the creation and modification of inspection process definitions. The RIIPD project will define, design, develop, and demonstrate an automated tool (i.e., software) to generate inspection process plans and coordinate measuring machine (CMM) inspection programs, as well as produce support information for the dimensional measurement of piece parts. The goal of this project is to make the inspection and part verification function, specifically CMM measurements, a more effective production support tool by reducing inspection process definition flowtime, creating consistent and standard inspections, increasing confidence in measurement results, and capturing inspection expertise. This objective is accomplished through importing STEP geometry definitions, applying solid modeling, incorporating explicit tolerance representations, establishing dimensional inspection techniques, embedding artificial intelligence techniques, and adhering to the Dimensional Measuring Interface Standard (DMIS) national standard.

  4. Toward Improved Support for Loosely Coupled Large Scale Simulation Workflows

    SciTech Connect

    Boehm, Swen; Elwasif, Wael R; Naughton, III, Thomas J; Vallee, Geoffroy R

    2014-01-01

High-performance computing (HPC) workloads are increasingly leveraging loosely coupled large scale simulations. Unfortunately, most large-scale HPC platforms, including Cray/ALPS environments, are designed for the execution of long-running jobs based on coarse-grained launch capabilities (e.g., one MPI rank per core on all allocated compute nodes). This assumption limits capability-class workload campaigns that require large numbers of discrete or loosely coupled simulations, and where time-to-solution is an untenable pacing issue. This paper describes the challenges related to the support of fine-grained launch capabilities that are necessary for the execution of loosely coupled large scale simulations on Cray/ALPS platforms. More precisely, we present the details of an enhanced runtime system to support this use case, and report on initial results from early testing on systems at Oak Ridge National Laboratory.

  5. Acoustic Studies of the Large Scale Ocean Circulation

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris

    1999-01-01

    Detailed knowledge of ocean circulation and its transport properties is prerequisite to an understanding of the earth's climate and of important biological and chemical cycles. Results from two recent experiments, THETIS-2 in the Western Mediterranean and ATOC in the North Pacific, illustrate the use of ocean acoustic tomography for studies of the large scale circulation. The attraction of acoustic tomography is its ability to sample and average the large-scale oceanic thermal structure, synoptically, along several sections, and at regular intervals. In both studies, the acoustic data are compared to, and then combined with, general circulation models, meteorological analyses, satellite altimetry, and direct measurements from ships. Both studies provide complete regional descriptions of the time-evolving, three-dimensional, large scale circulation, albeit with large uncertainties. The studies raise serious issues about existing ocean observing capability and provide guidelines for future efforts.

  6. A relativistic signature in large-scale structure

    NASA Astrophysics Data System (ADS)

    Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David

    2016-09-01

In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.

  7. Coupling between convection and large-scale circulation

    NASA Astrophysics Data System (ADS)

    Becker, T.; Stevens, B. B.; Hohenegger, C.

    2014-12-01

The ultimate drivers of convection - radiation, tropospheric humidity and surface fluxes - are altered both by the large-scale circulation and by convection itself. A quantity to which all drivers of convection contribute is the moist static energy (or, equivalently, the gross moist stability). Therefore, a variance analysis of the moist static energy budget in radiative-convective equilibrium helps in understanding the interaction between precipitating convection and the large-scale environment. In addition, this method provides insights concerning the impact of convective aggregation on this coupling. As a starting point, the interaction is analyzed with a general circulation model, but a model intercomparison study using a hierarchy of models is planned. Effective coupling parameters will be derived from cloud resolving models, and these will in turn be related to assumptions used to parameterize convection in large-scale models.

  8. Large-scale current systems in the dayside Venus ionosphere

    NASA Technical Reports Server (NTRS)

    Luhmann, J. G.; Elphic, R. C.; Brace, L. H.

    1981-01-01

    The occasional observation of large-scale horizontal magnetic fields within the dayside ionosphere of Venus by the flux gate magnetometer on the Pioneer Venus orbiter suggests the presence of large-scale current systems. Using the measured altitude profiles of the magnetic field and the electron density and temperature, together with the previously reported neutral atmosphere density and composition, it is found that the local ionosphere can be described at these times by a simple steady state model which treats the unobserved quantities, such as the electric field, as parameters. When the model is appropriate, the altitude profiles of the ion and electron velocities and the currents along the satellite trajectory can be inferred. These results elucidate the configurations and sources of the ionospheric current systems which produce the observed large-scale magnetic fields, and in particular illustrate the effect of ion-neutral coupling in the determination of the current system at low altitudes.

  9. Do Large-Scale Topological Features Correlate with Flare Properties?

    NASA Astrophysics Data System (ADS)

    DeRosa, Marc L.; Barnes, Graham

    2016-05-01

    In this study, we aim to identify whether the presence or absence of particular topological features in the large-scale coronal magnetic field are correlated with whether a flare is confined or eruptive. To this end, we first determine the locations of null points, spine lines, and separatrix surfaces within the potential fields associated with the locations of several strong flares from the current and previous sunspot cycles. We then validate the topological skeletons against large-scale features in observations, such as the locations of streamers and pseudostreamers in coronagraph images. Finally, we characterize the topological environment in the vicinity of the flaring active regions and identify the trends involving their large-scale topologies and the properties of the associated flares.

  10. Isotope Separation and Advanced Manufacturing Technology. ISAM semiannual report, Volume 3, Number 1, October 1993--March 1994

    SciTech Connect

    Carpenter, J.; Kan, T.

    1994-10-01

    This is the fourth issue of a semiannual report for the Isotope Separation and Advanced Materials Manufacturing (ISAM) Technology Program at Lawrence Livermore National Laboratory. Primary objectives include: (I) the Uranium Atomic Vapor Laser Isotope Separation (UAVLIS) process, which is being developed and prepared for deployment as an advanced uranium enrichment capability; (II) Advanced manufacturing technologies, which include industrial laser and E-beam material processing and new manufacturing technologies for uranium, plutonium, and other strategically important materials in support of DOE and other national applications. This report features progress in the ISAM Program from October 1993 through March 1994. Selected papers were indexed separately for inclusion in the Energy Science and Technology Database.

  11. Applications of Parallel Process HiMAP for Large Scale Multidisciplinary Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Potsdam, Mark; Rodriguez, David; Kwak, Dochay (Technical Monitor)

    2000-01-01

HiMAP is a three-level parallel middleware that can be interfaced to a large-scale global design environment for code-independent, multidisciplinary analysis using high-fidelity equations. Aerospace technology needs are rapidly changing. Computational tools compatible with the requirements of national programs such as space transportation are needed. Conventional computational tools are inadequate for modern aerospace design needs. Advanced, modular computational tools are needed, such as those that incorporate the technology of massively parallel processors (MPP).

  12. Advanced Manufacturing at the Marshall Space Flight Center and Application to Ares I and Ares V Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Carruth, Ralph

    2008-01-01

There are various aspects of advanced manufacturing technology development at the field centers of the National Aeronautics and Space Administration (NASA). The Marshall Space Flight Center (MSFC) has been given the assignment to lead the National Center for Advanced Manufacturing (NCAM) at MSFC and to pursue advanced development and coordination with other federal agencies for NASA. There are significant activities at the Marshall Center as well as at the Michoud Assembly Facility (MAF) in New Orleans, which we operate in conjunction with the University of New Orleans. New manufacturing processes in metals processing, component development, welding operations, composite manufacturing, and thermal protection system material and process development will be utilized in the manufacturing of the United States' two new launch vehicles, the Ares I and the Ares V. An overview of NCAM will be presented, as well as some of the development activities and manufacturing that are ongoing in Ares Upper Stage development. Some of the tools and equipment produced by Italian-owned companies and their application in this work will be mentioned.

  13. Magnetic Helicity and Large Scale Magnetic Fields: A Primer

    NASA Astrophysics Data System (ADS)

    Blackman, Eric G.

    2015-05-01

    Magnetic fields of laboratory, planetary, stellar, and galactic plasmas commonly exhibit significant order on large temporal or spatial scales compared to the otherwise random motions within the hosting system. Such ordered fields can be measured in the case of planets, stars, and galaxies, or inferred indirectly by the action of their dynamical influence, such as jets. Whether large scale fields are amplified in situ or a remnant from previous stages of an object's history is often debated for objects without a definitive magnetic activity cycle. Magnetic helicity, a measure of twist and linkage of magnetic field lines, is a unifying tool for understanding large scale field evolution for both mechanisms of origin. Its importance stems from its two basic properties: (1) magnetic helicity is typically better conserved than magnetic energy; and (2) the magnetic energy associated with a fixed amount of magnetic helicity is minimized when the system relaxes this helical structure to the largest scale available. Here I discuss how magnetic helicity has come to help us understand the saturation of and sustenance of large scale dynamos, the need for either local or global helicity fluxes to avoid dynamo quenching, and the associated observational consequences. I also discuss how magnetic helicity acts as a hindrance to turbulent diffusion of large scale fields, and thus a helper for fossil remnant large scale field origin models in some contexts. I briefly discuss the connection between large scale fields and accretion disk theory as well. The goal here is to provide a conceptual primer to help the reader efficiently penetrate the literature.
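
    The two properties listed in the abstract can be stated compactly; the following is a sketch in standard MHD notation (the vector potential A, field B, and volume V are assumed here, not taken from the review):

```latex
% Magnetic helicity: a volume integral measuring the twist and
% linkage of field lines (gauge-invariant for magnetically closed V)
H_M = \int_V \mathbf{A}\cdot\mathbf{B}\,\mathrm{d}V,
\qquad \mathbf{B} = \nabla\times\mathbf{A}.

% Spectral realizability bound: the energy at wavenumber k obeys
% E(k) >= k |H_M(k)| / 2, so for a fixed helicity budget the
% minimum-energy state places the helical structure at the smallest
% available k, i.e. the largest available scale.
E(k) \;\ge\; \tfrac{1}{2}\,k\,\lvert H_M(k)\rvert
```

    For open volumes the gauge-invariant quantity is the relative helicity; the bound above is what makes property (2) precise.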

  14. The Evolution of Baryons in Cosmic Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Snedden, Ali; Arielle Phillips, Lara; Mathews, Grant James; Coughlin, Jared; Suh, In-Saeng; Bhattacharya, Aparna

    2015-01-01

    The environments of galaxies play a critical role in their formation and evolution. We study these environments using cosmological simulations with star formation and supernova feedback included. From these simulations, we parse the large scale structure into clusters, filaments and voids using a segmentation algorithm adapted from medical imaging. We trace the star formation history, gas phase and metal evolution of the baryons in the intergalactic medium as a function of structure. We find that our algorithm reproduces the baryon fraction in the intracluster medium and that the majority of star formation occurs in cold, dense filaments. We present the consequences this large scale environment has for galactic halos and galaxy evolution.
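
    The paper's segmentation algorithm (adapted from medical imaging) is not reproduced in the abstract; as a much-simplified stand-in, classifying cells of a density field by thresholds already conveys the cluster/filament/void bookkeeping. The function name and threshold values below are illustrative assumptions, not the authors' calibrated values:

```python
def classify_structure(density, cluster_thresh=10.0, filament_thresh=1.0):
    """Label each cell of a density field (in units of the cosmic mean)
    as 'cluster', 'filament', or 'void' by simple thresholding.
    Illustrative only: the paper uses a medical-imaging segmentation
    algorithm, not local thresholds."""
    labels = []
    for rho in density:
        if rho >= cluster_thresh:
            labels.append("cluster")
        elif rho >= filament_thresh:
            labels.append("filament")
        else:
            labels.append("void")
    return labels
```

    Real segmentation exploits connectivity and shape information rather than local density alone, which is why an imaging algorithm is adapted instead of thresholding.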

  15. Corridors Increase Plant Species Richness at Large Scales

    SciTech Connect

    Damschen, Ellen I.; Haddad, Nick M.; Orrock, John L.; Tewksbury, Joshua J.; Levey, Douglas J.

    2006-09-01

    Habitat fragmentation is one of the largest threats to biodiversity. Landscape corridors, which are hypothesized to reduce the negative consequences of fragmentation, have become common features of ecological management plans worldwide. Despite their popularity, there is little evidence documenting the effectiveness of corridors in preserving biodiversity at large scales. Using a large-scale replicated experiment, we showed that habitat patches connected by corridors retain more native plant species than do isolated patches, that this difference increases over time, and that corridors do not promote invasion by exotic species. Our results support the use of corridors in biodiversity conservation.

  16. Large-scale ER-damper for seismic protection

    NASA Astrophysics Data System (ADS)

    McMahon, Scott; Makris, Nicos

    1997-05-01

    A large scale electrorheological (ER) damper has been designed, constructed, and tested. The damper consists of a main cylinder and a piston rod that pushes an ER-fluid through a number of stationary annular ducts. This damper is a scaled-up version of a prototype ER-damper which has been developed and extensively studied in the past. In this paper, results from comprehensive testing of the large-scale damper are presented, and the proposed theory developed for predicting the damper response is validated.

  17. Clearing and Labeling Techniques for Large-Scale Biological Tissues

    PubMed Central

    Seo, Jinyoung; Choe, Minjin; Kim, Sung-Yon

    2016-01-01

    Clearing and labeling techniques for large-scale biological tissues enable simultaneous extraction of molecular and structural information with minimal disassembly of the sample, facilitating the integration of molecular, cellular and systems biology across different scales. Recent years have witnessed an explosive increase in the number of such methods and their applications, reflecting heightened interest in organ-wide clearing and labeling across many fields of biology and medicine. In this review, we provide an overview and comparison of existing clearing and labeling techniques and discuss challenges and opportunities in the investigations of large-scale biological systems. PMID:27239813

  18. Large-scale liquid scintillation detectors for solar neutrinos

    NASA Astrophysics Data System (ADS)

    Benziger, Jay B.; Calaprice, Frank P.

    2016-04-01

    Large-scale liquid scintillation detectors are capable of providing spectral yields of the low energy solar neutrinos. These detectors require > 100 tons of liquid scintillator with high optical and radiopurity. In this paper requirements for low-energy neutrino detection by liquid scintillation are specified and the procedures to achieve low backgrounds in large-scale liquid scintillation detectors for solar neutrinos are reviewed. The designs, operations and achievements of Borexino, KamLAND and SNO+ in measuring the low-energy solar neutrino fluxes are reviewed.

  19. Contribution of peculiar shear motions to large-scale structure

    NASA Technical Reports Server (NTRS)

    Mueller, Hans-Reinhard; Treumann, Rudolf A.

    1994-01-01

    Self-gravitating shear flow instability simulations in a cold dark matter-dominated expanding Einstein-de Sitter universe have been performed. When the shear flow speed exceeds a certain threshold, self-gravitating Kelvin-Helmholtz instability occurs, forming density voids and excesses along the shear flow layer which serve as seeds for large-scale structure formation. A possible mechanism for generating shear peculiar motions is velocity fluctuations induced by the density perturbations of the postinflation era. In this scenario, short scales grow earlier than large scales. A model of this kind may contribute to the cellular structure of the luminous mass distribution in the universe.

  20. Clearing and Labeling Techniques for Large-Scale Biological Tissues.

    PubMed

    Seo, Jinyoung; Choe, Minjin; Kim, Sung-Yon

    2016-06-30

    Clearing and labeling techniques for large-scale biological tissues enable simultaneous extraction of molecular and structural information with minimal disassembly of the sample, facilitating the integration of molecular, cellular and systems biology across different scales. Recent years have witnessed an explosive increase in the number of such methods and their applications, reflecting heightened interest in organ-wide clearing and labeling across many fields of biology and medicine. In this review, we provide an overview and comparison of existing clearing and labeling techniques and discuss challenges and opportunities in the investigations of large-scale biological systems. PMID:27239813

  1. Generation of a large-scale barotropic circulation in rotating convection

    NASA Astrophysics Data System (ADS)

    Rubio, Antonio; Julien, Keith; Weiss, Jeffrey

    2012-11-01

    We recently reported on the existence of a slow-growing large-scale barotropic mode in DNS of rotating Rayleigh-Benard convection using the non-hydrostatic balanced geostrophic equations (NHBGE) (Julien et al. 2012). Such large-scale modes had previously been observed as an inverse cascade in stable-layer quasi-geostrophic dynamics, or via instability mechanisms of thermal Rossby waves occurring in the presence of sloping endwalls (i.e., quasi-geostrophic beta-convection). In this talk we report on the early time history of this large-scale mode and discuss the generating physical mechanism as a ``symmetry-breaking'' forcing function of the barotropic vorticity equation. Impacts of the large-scale barotropic mode on the smaller-scale baroclinic components of the flow are detailed, with a specific emphasis on the changing nature of the heat transport as the barotropic mode evolves. This work was supported by the National Science Foundation under FRG grants DMS-0855010 and DMS-0854841. Computational resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division.
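
    The "symmetry-breaking" forcing mentioned above can be written schematically; this is a sketch, not the talk's equations (ψ is the barotropic streamfunction, ζ = ∇²ψ its vorticity, J(·,·) the Jacobian, F the forcing assembled from baroclinic eddy stresses, and D a schematic dissipation term):

```latex
% Forced barotropic vorticity equation (schematic form)
\partial_t \zeta + J(\psi, \zeta) = F + D,
\qquad \zeta = \nabla^{2}\psi
```

    A nonzero F with a preferred sign of vorticity input is what breaks the symmetry and lets the barotropic mode grow.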

  2. HiQuant: Rapid Postquantification Analysis of Large-Scale MS-Generated Proteomics Data.

    PubMed

    Bryan, Kenneth; Jarboui, Mohamed-Ali; Raso, Cinzia; Bernal-Llinares, Manuel; McCann, Brendan; Rauch, Jens; Boldt, Karsten; Lynn, David J

    2016-06-01

    Recent advances in mass-spectrometry-based proteomics are now facilitating ambitious large-scale investigations of the spatial and temporal dynamics of the proteome; however, the increasing size and complexity of these data sets are overwhelming current downstream computational methods, specifically those that support the postquantification analysis pipeline. Here we present HiQuant, a novel application that enables the design and execution of a postquantification workflow, including common data-processing steps, such as assay normalization and grouping, and experimental replicate quality control and statistical analysis. HiQuant also enables the interpretation of results generated from large-scale data sets by supporting interactive heatmap analysis and also the direct export to Cytoscape and Gephi, two leading network analysis platforms. HiQuant may be run via a user-friendly graphical interface and also supports complete one-touch automation via a command-line mode. We evaluate HiQuant's performance by analyzing a large-scale, complex interactome mapping data set and demonstrate a 200-fold improvement in the execution time over current methods. We also demonstrate HiQuant's general utility by analyzing proteome-wide quantification data generated from both a large-scale public tyrosine kinase siRNA knock-down study and an in-house investigation into the temporal dynamics of the KSR1 and KSR2 interactomes. Download HiQuant, sample data sets, and supporting documentation at http://hiquant.primesdb.eu. PMID:27086506
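
    HiQuant's own implementation is not shown in the abstract; purely as an illustration of two postquantification steps it automates (assay normalization and replicate quality control), here is a minimal Python sketch, with all names and formulas chosen for this example rather than taken from the tool:

```python
import statistics

def median_normalize(assays):
    """Scale each assay (a list of intensities) so its median matches the
    grand median across assays -- one common normalization step in
    postquantification pipelines (illustrative, not HiQuant's code)."""
    medians = [statistics.median(a) for a in assays]
    grand = statistics.median(medians)
    return [[x * grand / m for x in a] for a, m in zip(assays, medians)]

def replicate_cv(replicates):
    """Coefficient of variation across replicate measurements, a simple
    quality-control score: large values flag inconsistent replicates."""
    mean = statistics.fmean(replicates)
    return statistics.stdev(replicates) / mean
```

    `median_normalize` makes assay medians comparable before grouping; `replicate_cv` gives a crude per-protein quality score to screen replicates ahead of statistical testing.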

  3. Latest advances in the manufacturing of 3D rechargeable lithium microbatteries

    NASA Astrophysics Data System (ADS)

    Ferrari, Stefania; Loveridge, Melanie; Beattie, Shane D.; Jahn, Marcus; Dashwood, Richard J.; Bhagat, Rohit

    2015-07-01

    Recent advances in micro- and nano-electromechanical systems (MEMS/NEMS) technology have led to a niche industry of diverse small-scale devices that include microsensors, micromachines and drug-delivery systems. For these devices, there is an urgent need to develop Micro Lithium Ion Batteries (MLIBs) with dimensions on the scale 1-10 mm3 enabling on-board power delivery. Unfortunately, power limitations are inherent in planar 2D cells and only the advent of 3D designs and microarchitectures will lead to a real breakthrough in the microbattery technology. During the last few years, many efforts to optimise MLIBs were discussed in literature, both in the planar and 3D configurations. This review highlights the importance of 3D microarchitectured electrodes to fabricate batteries that can be device-integrated with exceptionally high specific power density coupled with exquisite miniaturisation. A wide literature overview is provided and recent advances in manufacturing routes to 3D-MLIBs comprising materials synthesis, device formulation, device testing are herein discussed. The advent of simple, economic and easily scalable fabrication processes such as 3D printing will have a decisive role in the growing field of micropower sources and microdevices.

  4. Symmetry-guided large-scale shell-model theory

    NASA Astrophysics Data System (ADS)

    Launey, Kristina D.; Dytrych, Tomas; Draayer, Jerry P.

    2016-07-01

    In this review, we present a symmetry-guided strategy that utilizes exact as well as partial symmetries for enabling a deeper understanding of and advancing ab initio studies for determining the microscopic structure of atomic nuclei. These symmetries expose physically relevant degrees of freedom that, for large-scale calculations with QCD-inspired interactions, allow the model space size to be reduced through a very structured selection of the basis states to physically relevant subspaces. This can guide explorations of simple patterns in nuclei and how they emerge from first principles, as well as extensions of the theory beyond current limitations toward heavier nuclei and larger model spaces. This is illustrated for the ab initio symmetry-adapted no-core shell model (SA-NCSM) and two significant underlying symmetries, the symplectic Sp(3, R) group and its deformation-related SU(3) subgroup. We review the broad scope of nuclei, where these symmetries have been found to play a key role: from the light p-shell systems, such as 6Li, 8B, 8Be, 12C, and 16O, and sd-shell nuclei exemplified by 20Ne, based on first-principle explorations; through the Hoyle state in 12C and enhanced collectivity in intermediate-mass nuclei, within a no-core shell-model perspective; up to strongly deformed species of the rare-earth and actinide regions, as investigated in earlier studies. A complementary picture, driven by symmetries dual to Sp(3, R), is also discussed. We briefly review symmetry-guided techniques that prove useful in various nuclear-theory models, such as the Elliott model, ab initio SA-NCSM, symplectic model, pseudo-SU(3) and pseudo-symplectic models, ab initio hyperspherical harmonics method, ab initio lattice effective field theory, exact pairing-plus-shell model approaches, and cluster models, including the resonating-group method. Important implications of these approaches that have deepened our understanding of emergent phenomena in nuclei, such as enhanced

  5. Large Scale Computing and Storage Requirements for High Energy Physics

    SciTech Connect

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes

  6. Improvement of process control using wafer geometry for enhanced manufacturability of advanced semiconductor devices

    NASA Astrophysics Data System (ADS)

    Lee, Honggoo; Lee, Jongsu; Kim, Sang Min; Lee, Changhwan; Han, Sangjun; Kim, Myoungsoo; Kwon, Wontaik; Park, Sung-Ki; Vukkadala, Pradeep; Awasthi, Amartya; Kim, J. H.; Veeraraghavan, Sathish; Choi, DongSub; Huang, Kevin; Dighe, Prasanna; Lee, Cheouljung; Byeon, Jungho; Dey, Soham; Sinha, Jaydeep

    2015-03-01

    Aggressive advancements in semiconductor technology have resulted in integrated chip (IC) manufacturing capability at sub-20nm half-pitch nodes. With this, lithography overlay error budgets are becoming increasingly stringent. The delay in EUV lithography readiness for high volume manufacturing (HVM) and the need for multiple-patterning lithography with 193i technology have further amplified the overlay issue. Thus there exists a need for technologies that can improve overlay errors in HVM. The traditional method for reducing overlay errors predominantly focused on improving lithography scanner printability performance. However, overlay errors induced by processes outside of the lithography sector, known as process-induced overlay errors, can contribute significantly to the total overlay at the current requirements. Monitoring and characterizing process-induced overlay has become critical for advanced node patterning. Recently a relatively new technique for overlay control that uses high-resolution wafer geometry measurements has gained significance. In this work we present the implementation of this technique in an IC fabrication environment to monitor wafer geometry changes induced across several points in the process flow, on multiple product layers with critical overlay performance requirements. Several production wafer lots were measured and analyzed on a patterned wafer geometry tool. Changes induced in wafer geometry as a result of wafer processing were related to downstream overlay error contribution using the analytical in-plane distortion (IPD) calculation model. Through this segmentation, process steps that are major contributors to downstream overlay were identified. Subsequent process optimization was then isolated to those process steps where maximum benefit might be realized. Root causes for the within-wafer, wafer-to-wafer, tool-to-tool, and station-to-station variations observed were further investigated using local shape curvature changes, which are directly related to
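
    The abstract does not give the analytical IPD model in detail; a commonly used thin-plate approximation relates in-plane distortion to the slope of the wafer's free-shape change, IPD ≈ -(t/2)·d(Δw)/dx with t the wafer thickness. The sketch below is an illustrative finite-difference version of that relation, not the paper's calibrated model, and all names are hypothetical:

```python
def in_plane_distortion(shape_before, shape_after, dx, thickness):
    """Estimate in-plane distortion (IPD) along one wafer diameter from
    the change in free-form wafer shape, via the thin-plate relation
    IPD ~ -(t/2) * d(dw)/dx. An approximation for illustration; the
    paper's IPD model may differ in calibration and higher-order terms."""
    dw = [a - b for a, b in zip(shape_after, shape_before)]
    ipd = []
    for i in range(1, len(dw) - 1):
        # central finite difference of the shape change
        slope = (dw[i + 1] - dw[i - 1]) / (2.0 * dx)
        ipd.append(-0.5 * thickness * slope)
    return ipd
```

    A uniform tilt change yields a constant IPD, i.e. a rigid shift correctable by the scanner; it is the spatially varying slope changes that produce uncorrectable overlay residuals.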

  7. Moon-based Earth Observation for Large Scale Geoscience Phenomena

    NASA Astrophysics Data System (ADS)

    Guo, Huadong; Liu, Guang; Ding, Yixing

    2016-07-01

    The capability of Earth observation for large-global-scale natural phenomena needs to be improved, and new observing platforms are expected. In recent years we have studied the concept of the Moon as an Earth-observation platform. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, and it offers the following advantages: a large observation range, variable view angles, long-term continuous observation, and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability and uniqueness. Moon-based Earth observation is suitable for monitoring large-scale geoscience phenomena, including large-scale atmospheric change, large-scale ocean change, large-scale land-surface dynamic change, solid-Earth dynamic change, etc. For the purpose of establishing a Moon-based Earth observation platform, we plan to study the following five aspects: mechanisms and models of Moon-based observation of macroscopic Earth-science phenomena; sensor parameter optimization and methods of Moon-based Earth observation; site selection and the environment of Moon-based Earth observation; the Moon-based Earth observation platform; and a Moon-based Earth observation fundamental scientific framework.

  8. Assuring Quality in Large-Scale Online Course Development

    ERIC Educational Resources Information Center

    Parscal, Tina; Riemer, Deborah

    2010-01-01

    Student demand for online education requires colleges and universities to rapidly expand the number of courses and programs offered online while maintaining high quality. This paper outlines two universities' respective processes to assure quality in large-scale online programs that integrate instructional design, eBook custom publishing, Quality…

  9. Large-scale search for dark-matter axions

    SciTech Connect

    Hagmann, C.A., LLNL; Kinion, D.; Stoeffl, W.; Van Bibber, K.; Daw, E.J.; McBride, J.; Peng, H.; Rosenberg, L.J.; Xin, H.; Laveigne, J.; Sikivie, P.; Sullivan, N.S.; Tanner, D.B.; Moltz, D.M.; Powell, J.; Clarke, J.; Nezrick, F.A.; Turner, M.S.; Golubev, N.A.; Kravchuk, L.V.

    1998-01-01

    Early results from a large-scale search for dark matter axions are presented. In this experiment, axions constituting our dark-matter halo may be resonantly converted to monochromatic microwave photons in a high-Q microwave cavity permeated by a strong magnetic field. Sensitivity at the level of one important axion model (KSVZ) has been demonstrated.

  10. DESIGN OF LARGE-SCALE AIR MONITORING NETWORKS

    EPA Science Inventory

    The potential effects of air pollution on human health have received much attention in recent years. In the U.S. and other countries, there are extensive large-scale monitoring networks designed to collect data to inform the public of exposure risks to air pollution. A major crit...

  11. Over-driven control for large-scale MR dampers

    NASA Astrophysics Data System (ADS)

    Friedman, A. J.; Dyke, S. J.; Phillips, B. M.

    2013-04-01

    As semi-active electro-mechanical control devices increase in scale for use in real-world civil engineering applications, their dynamics become increasingly complicated. Control designs that are able to take these characteristics into account will be more effective in achieving good performance. Large-scale magnetorheological (MR) dampers exhibit a significant time lag in their force-response to voltage inputs, reducing the efficacy of typical controllers designed for smaller scale devices where the lag is negligible. A new control algorithm is presented for large-scale MR devices that uses over-driving and back-driving of the commands to overcome the challenges associated with the dynamics of these large-scale MR dampers. An illustrative numerical example is considered to demonstrate the controller performance. Via simulations of the structure using several seismic ground motions, the merits of the proposed control strategy to achieve reductions in various response parameters are examined and compared against several accepted control algorithms. Experimental evidence is provided to validate the improved capabilities of the proposed controller in achieving the desired control force levels. Through real-time hybrid simulation (RTHS), the proposed controllers are also examined and experimentally evaluated in terms of their efficacy and robust performance. The results demonstrate that the proposed control strategy has superior performance over typical control algorithms when paired with a large-scale MR damper, and is robust for structural control applications.
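
    The published controller's details are not in the abstract; the core idea (amplify commanded changes to out-run the device's force-response lag, then saturate at hardware limits) can be sketched as below, where the first-order-lag assumption, gain value, and saturation limits are illustrative, not the authors' design:

```python
def overdriven_command(v_desired, v_current, v_max, gain=3.0):
    """Over-drive/back-drive a voltage command to compensate a slow,
    roughly first-order lag in a large-scale MR damper's force response:
    amplify the commanded change, then saturate at the device limits.
    Illustrative only; the published algorithm differs in detail."""
    v_cmd = v_current + gain * (v_desired - v_current)
    # MR damper coil voltage is assumed non-negative and capped at v_max
    return min(max(v_cmd, 0.0), v_max)
```

    With `gain > 1` the command overshoots the target (over-driving) when the desired force rises and undershoots it (back-driving) when it falls, pulling the lagging force toward the target faster than a direct command would.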

  12. Large-scale search for dark-matter axions

    SciTech Connect

    Kinion, D; van Bibber, K

    2000-08-30

    We review the status of two ongoing large-scale searches for axions which may constitute the dark matter of our Milky Way halo. The experiments are based on the microwave cavity technique proposed by Sikivie, and mark a 'second generation' relative to the original experiments performed by the Rochester-Brookhaven-Fermilab collaboration and the University of Florida group.

  13. Large-Scale Innovation and Change in UK Higher Education

    ERIC Educational Resources Information Center

    Brown, Stephen

    2013-01-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ…

  14. Global smoothing and continuation for large-scale molecular optimization

    SciTech Connect

    More, J.J.; Wu, Zhijun

    1995-10-01

    We discuss the formulation of optimization problems that arise in the study of distance geometry, ionic systems, and molecular clusters. We show that continuation techniques based on global smoothing are applicable to these molecular optimization problems, and we outline the issues that must be resolved in the solution of large-scale molecular optimization problems.
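
    On a toy objective (not the paper's molecular potentials), the smoothing-and-continuation idea works like this: replace f by its Gaussian-smoothed version, minimize at a large smoothing width, then shrink the width while warm-starting from the previous minimizer. For f(x) = x² + a·cos(bx + φ) the smoothed gradient has a closed form, which the sketch uses; all parameter values are illustrative:

```python
import math

def grad_smoothed(x, sigma, a=5.0, b=3.0, phase=1.0):
    """Gradient of the Gaussian-smoothed objective
    <f>_sigma(x) = x^2 + sigma^2 + a*exp(-(b*sigma)^2/2)*cos(b*x + phase),
    where f(x) = x^2 + a*cos(b*x + phase) is a toy multimodal function.
    Smoothing exponentially damps the oscillatory term."""
    damp = a * math.exp(-0.5 * (b * sigma) ** 2)
    return 2.0 * x - damp * b * math.sin(b * x + phase)

def continuation_minimize(x0=0.0, sigmas=(2.0, 1.0, 0.5, 0.25, 0.0),
                          step=0.02, iters=500):
    """Trace the minimizer of progressively less-smoothed objectives by
    gradient descent, warm-starting each stage from the previous one."""
    x = x0
    for sigma in sigmas:
        for _ in range(iters):
            x -= step * grad_smoothed(x, sigma)
    return x
```

    Shrinking sigma too quickly can drop the iterate into a poor local basin, so continuation schedules are problem-dependent; for molecular problems the same idea is applied to distance-geometry and cluster potentials.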

  15. The Large-Scale Structure of Scientific Method

    ERIC Educational Resources Information Center

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  16. Individual Skill Differences and Large-Scale Environmental Learning

    ERIC Educational Resources Information Center

    Fields, Alexa W.; Shelton, Amy L.

    2006-01-01

    Spatial skills are known to vary widely among normal individuals. This project was designed to address whether these individual differences are differentially related to large-scale environmental learning from route (ground-level) and survey (aerial) perspectives. Participants learned two virtual environments (route and survey) with limited…

  17. Mixing Metaphors: Building Infrastructure for Large Scale School Turnaround

    ERIC Educational Resources Information Center

    Peurach, Donald J.; Neumerski, Christine M.

    2015-01-01

    The purpose of this analysis is to increase understanding of the possibilities and challenges of building educational infrastructure--the basic, foundational structures, systems, and resources--to support large-scale school turnaround. Building educational infrastructure often exceeds the capacity of schools, districts, and state education…

  18. Large-scale drift and Rossby wave turbulence

    NASA Astrophysics Data System (ADS)

    Harper, K. L.; Nazarenko, S. V.

    2016-08-01

    We study drift/Rossby wave turbulence described by the large-scale limit of the Charney–Hasegawa–Mima equation. We define the zonal and meridional regions as Z := {k : |k_y| > √3 |k_x|} and M := {k : |k_y| < √3 |k_x|} respectively, where k = (k_x, k_y) is in a plane perpendicular to the magnetic field such that k_x is along the isopycnals and k_y is along the plasma density gradient. We prove that the only types of resonant triads allowed are M ↔ M + Z and Z ↔ Z + Z. Therefore, if the spectrum of weak large-scale drift/Rossby turbulence is initially in Z it will remain in Z indefinitely. We present a generalised Fjørtoft's argument to find transfer directions for the quadratic invariants in the two-dimensional k-space. Using direct numerical simulations, we test and confirm our theoretical predictions for weak large-scale drift/Rossby turbulence, and establish qualitative differences with cases when turbulence is strong. We demonstrate that the qualitative features of the large-scale limit survive when the typical turbulent scale is only moderately greater than the Larmor/Rossby radius.
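
    The Z/M bookkeeping described above is easy to mechanize; the sketch below classifies wavevectors by the |k_y| versus √3·|k_x| criterion and checks a triad's regional signature against the two allowed types. It ignores the frequency-matching (resonance) condition, so it is a labeling helper only, and all names are illustrative:

```python
import math

def region(kx, ky):
    """Classify a wavevector k = (kx, ky) into the zonal set Z
    (|ky| > sqrt(3)*|kx|) or meridional set M (|ky| < sqrt(3)*|kx|)
    used in the large-scale Charney-Hasegawa-Mima limit."""
    if abs(ky) > math.sqrt(3.0) * abs(kx):
        return "Z"
    if abs(ky) < math.sqrt(3.0) * abs(kx):
        return "M"
    return "boundary"  # the critical rays |ky| = sqrt(3)*|kx|

def allowed_triad(k1, k2, k3):
    """Check whether a triad's regional signature matches one of the two
    allowed types, M <-> M + Z or Z <-> Z + Z. A bookkeeping helper,
    not a resonance solver: wavenumber/frequency matching is ignored."""
    regions = sorted(region(*k) for k in (k1, k2, k3))
    return regions in (["M", "M", "Z"], ["Z", "Z", "Z"])
```

    Because no allowed triad converts Z-only input into M modes, a spectrum that starts entirely in Z stays in Z, which is the invariance the paper proves.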

  19. Large Scale Field Campaign Contributions to Soil Moisture Remote Sensing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Large-scale field experiments have been an essential component of soil moisture remote sensing for over two decades. They have provided test beds for both the technology and science necessary to develop and refine satellite mission concepts. The high degree of spatial variability of soil moisture an...

  20. Large-scale V/STOL testing. [in wind tunnels

    NASA Technical Reports Server (NTRS)

    Koenig, D. G.; Aiken, T. N.; Aoyagi, K.; Falarski, M. D.

    1977-01-01

    Several facets of large-scale testing of V/STOL aircraft configurations are discussed, with particular emphasis on test experience in the Ames 40- by 80-foot wind tunnel. Examples of powered-lift test programs are presented in order to illustrate tradeoffs confronting the planner of V/STOL test programs. It is indicated that large-scale V/STOL wind-tunnel testing can sometimes compete with small-scale testing in the effort required (overall test time) and program costs, because a single large-scale model can support a number of different tests where several small-scale models would be required. The benefits of high or full-scale Reynolds numbers, more detailed configuration simulation, and a greater number and variety of onboard measurements increase rapidly with scale. Planning must be more detailed at large scale in order to balance the increased costs, which grow as the number of measurements and model configuration variables increases, against the benefit of the larger amount of information coming out of one test.

  1. Current Scientific Issues in Large Scale Atmospheric Dynamics

    NASA Technical Reports Server (NTRS)

    Miller, T. L. (Compiler)

    1986-01-01

    Topics in large scale atmospheric dynamics are discussed. Aspects of atmospheric blocking, the influence of transient baroclinic eddies on planetary-scale waves, cyclogenesis, the effects of orography on planetary scale flow, small scale frontal structure, and simulations of gravity waves in frontal zones are discussed.

  2. Large-Scale Machine Learning for Classification and Search

    ERIC Educational Resources Information Center

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  3. Considerations for Managing Large-Scale Clinical Trials.

    ERIC Educational Resources Information Center

    Tuttle, Waneta C.; And Others

    1989-01-01

    Research management strategies used effectively in a large-scale clinical trial to determine the health effects of exposure to Agent Orange in Vietnam are discussed, including pre-project planning, organization according to strategy, attention to scheduling, a team approach, emphasis on guest relations, cross-training of personnel, and preparing…

  4. Ecosystem resilience despite large-scale altered hydroclimatic conditions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Climate change is predicted to increase both drought frequency and duration, and when coupled with substantial warming, will establish a new hydroclimatological paradigm for many regions. Large-scale, warm droughts have recently impacted North America, Africa, Europe, Amazonia, and Australia result...

  5. Probabilistic Cuing in Large-Scale Environmental Search

    ERIC Educational Resources Information Center

    Smith, Alastair D.; Hood, Bruce M.; Gilchrist, Iain D.

    2010-01-01

    Finding an object in our environment is an important human ability that also represents a critical component of human foraging behavior. One type of information that aids efficient large-scale search is the likelihood of the object being in one location over another. In this study we investigated the conditions under which individuals respond to…

  6. Extracting Useful Semantic Information from Large Scale Corpora of Text

    ERIC Educational Resources Information Center

    Mendoza, Ray Padilla, Jr.

    2012-01-01

    Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…

  7. Efficient On-Demand Operations in Large-Scale Infrastructures

    ERIC Educational Resources Information Center

    Ko, Steven Y.

    2009-01-01

    In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…

  8. Large-Scale Environmental Influences on Aquatic Animal Health

    EPA Science Inventory

    In the latter portion of the 20th century, North America experienced numerous large-scale mortality events affecting a broad diversity of aquatic animals. Short-term forensic investigations of these events have sometimes characterized a causative agent or condition, but have rare...

  9. Newton iterative methods for large scale nonlinear systems

    SciTech Connect

    Walker, H.F.; Turner, K.

    1993-01-01

    Objective is to develop robust, efficient Newton iterative methods for general large scale problems well suited for discretizations of partial differential equations, integral equations, and other continuous problems. A concomitant objective is to develop improved iterative linear algebra methods. We first outline research on Newton iterative methods and then review work on iterative linear algebra methods. (DLC)
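    The basic Newton iteration underlying these methods can be sketched briefly. The following minimal Python example (an illustration, not code from the report) assembles a finite-difference Jacobian and uses a direct dense solve; a true Newton-Krylov method would instead solve each linear step with an iterative Krylov solver such as GMRES, using only Jacobian-vector products:

```python
import numpy as np

def newton_solve(F, x0, tol=1e-10, max_iter=50, eps=1e-7):
    """Basic Newton iteration with a finite-difference Jacobian.

    Newton-Krylov variants replace the dense solve below with an
    iterative Krylov method applied matrix-free via J@v products.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        n = x.size
        J = np.empty((n, n))
        for j in range(n):  # finite-difference Jacobian, column by column
            e = np.zeros(n)
            e[j] = eps
            J[:, j] = (F(x + e) - r) / eps
        x = x - np.linalg.solve(J, r)
    return x

# Example: solve x_i = cos(x_i); the root of x - cos(x) is ~0.7390851
sol = newton_solve(lambda x: x - np.cos(x), np.zeros(4))
```

    For discretized PDEs the Jacobian is large and sparse, which is why large-scale codes avoid forming it explicitly.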

  10. Implicit solution of large-scale radiation diffusion problems

    SciTech Connect

    Brown, P N; Graziani, F; Otero, I; Woodward, C S

    2001-01-04

    In this paper, we present an efficient solution approach for fully implicit, large-scale, nonlinear radiation diffusion problems. The fully implicit approach is compared to a semi-implicit solution method. Accuracy and efficiency are shown to be better for the fully implicit method on both one- and three-dimensional problems with tabular opacities taken from the LEOS opacity library.

  11. Resilience of Florida Keys coral communities following large scale disturbances

    EPA Science Inventory

    The decline of coral reefs in the Caribbean over the last 40 years has been attributed to multiple chronic stressors and episodic large-scale disturbances. This study assessed the resilience of coral communities in two different regions of the Florida Keys reef system between 199...

  12. Large Scale Survey Data in Career Development Research

    ERIC Educational Resources Information Center

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  13. Polymers in 2D Turbulence: Suppression of Large Scale Fluctuations

    NASA Astrophysics Data System (ADS)

    Amarouchene, Y.; Kellay, H.

    2002-08-01

    Small quantities of a long chain molecule or polymer affect two-dimensional turbulence in unexpected ways. Their presence inhibits the transfer of energy to large scales, suppressing those scales in the energy density spectrum. This also changes the spectral properties of a passive scalar, which turns out to be highly sensitive to the presence of energy transfers.

  14. Creating a Large-Scale, Third Generation, Distance Education Course.

    ERIC Educational Resources Information Center

    Weller, Martin James

    2000-01-01

    Outlines the course development of an introductory large-scale distance education course offered via the World Wide Web at the Open University in the United Kingdom. Topics include developing appropriate student skills; maintaining quality control; facilitating easy updating of material; ensuring student interaction; and making materials…

  15. Cosmic strings and the large-scale structure

    NASA Technical Reports Server (NTRS)

    Stebbins, Albert

    1988-01-01

    A possible problem for cosmic string models of galaxy formation is presented. If very large voids are common and if loop fragmentation is not much more efficient than presently believed, then it may be impossible for string scenarios to produce the observed large-scale structure with Omega sub 0 = 1 and without strong environmental biasing.

  16. International Large-Scale Assessments: What Uses, What Consequences?

    ERIC Educational Resources Information Center

    Johansson, Stefan

    2016-01-01

    Background: International large-scale assessments (ILSAs) are a much-debated phenomenon in education. Increasingly, their outcomes attract considerable media attention and influence educational policies in many jurisdictions worldwide. The relevance, uses and consequences of these assessments are often the focus of research scrutiny. Whilst some…

  17. Measurement, Sampling, and Equating Errors in Large-Scale Assessments

    ERIC Educational Resources Information Center

    Wu, Margaret

    2010-01-01

    In large-scale assessments, such as state-wide testing programs, national sample-based assessments, and international comparative studies, there are many steps involved in the measurement and reporting of student achievement. There are always sources of inaccuracies in each of the steps. It is of interest to identify the source and magnitude of…

  18. A bibliographical survey of large-scale systems

    NASA Technical Reports Server (NTRS)

    Corliss, W. R.

    1970-01-01

    A limited, partly annotated bibliography was prepared on the subject of large-scale system control. Approximately 400 references are divided into thirteen application areas, such as large societal systems and large communication systems. A first-author index is provided.

  19. Large-Scale Networked Virtual Environments: Architecture and Applications

    ERIC Educational Resources Information Center

    Lamotte, Wim; Quax, Peter; Flerackers, Eddy

    2008-01-01

    Purpose: Scalability is an important research topic in the context of networked virtual environments (NVEs). This paper aims to describe the ALVIC (Architecture for Large-scale Virtual Interactive Communities) approach to NVE scalability. Design/methodology/approach: The setup and results from two case studies are shown: a 3-D learning environment…

  20. Manufacturing of 100mm diameter GaSb substrates for advanced space based applications

    NASA Astrophysics Data System (ADS)

    Allen, L. P.; Flint, J. P.; Meshew, G.; Trevethan, J.; Dallas, G.; Khoshakhlagh, A.; Hill, C. J.

    2012-01-01

    Engineered substrates such as large diameter (100mm) GaSb wafers need to be ready years in advance of any major shift in DoD and commercial technology, and typically before much of the rest of the materials and equipment for fabricating next generation devices. Antimony based III-V semiconductors are of significant interest for advanced applications in optoelectronics, high speed transistors, microwave devices, and photovoltaics. GaSb demand is increasing because its lattice parameter matches various ternary and quaternary III-V compounds whose bandgaps can be engineered to cover a wide spectral range. For these stealth and space based applications, larger format IRFPAs clearly benefit from next generation starting substrates. In this study, we have manufactured and tested 100mm GaSb substrates. This paper describes the characterization process that provides the best possible GaSb material for advanced IRFPA and SLS epi growth. Analyses of the substrates by AFM surface roughness, particles, haze, GaSb oxide character and desorption using XPS, flatness measurements, and SLS based epitaxy quality are shown. By implementing subtle changes in our substrate processing, we show that a Sb-oxide rich surface is routinely provided for rapid desorption. Post-MBE CBIRD structures on the 100mm ULD GaSb were examined and reveal a high intensity, 6.6nm periodicity, low (15.48 arcsec) FWHM peak distribution that suggests low surface strain and excellent lattice matching. The Ra for GaSb is a consistent ~0.2-4nm, with average batch wafer warp of ~4 μm, providing a clean, flat GaSb template critical for next generation epi growth.

  1. Response of Tradewind Cumuli to Large-Scale Processes.

    NASA Astrophysics Data System (ADS)

    Soong, S.-T.; Ogura, Y.

    1980-09-01

    The two-dimensional slab-symmetric numerical cloud model used by Soong and Ogura (1973) for studying the evolution of an isolated cumulus cloud is extended to investigate the statistical properties of cumulus clouds which would be generated under a given large-scale forcing composed of the horizontal advection of temperature and water vapor mixing ratio, vertical velocity, sea surface temperature and radiative cooling. Random disturbances of small amplitude are introduced into the model at low levels to provide random motion for cloud formation.The model is applied to a case of suppressed weather conditions during BOMEX for the period 22-23 June 1969 when a nearly steady state prevailed. The composited temperature and mixing ratio profiles of these two days are used as initial conditions and the time-independent large-scale forcing terms estimated from the observations are applied to the model. The result of numerical integration shows that a number of small clouds start developing after 1 h. Some of them decay quickly, but some of them develop and reach the tradewind inversion. After a few hours of simulation, the vertical profiles of the horizontally averaged temperature and moisture are found to deviate only slightly from the observed profiles, indicating that the large-scale effect and the feedback effects of clouds on temperature and mixing ratio reach an equilibrium state. The three major components of the cloud feedback effect, i.e., condensation, evaporation and vertical fluxes associated with the clouds, are determined from the model output. The vertical profiles of vertical heat and moisture fluxes in the subcloud layer in the model are found to be in general agreement with the observations.Sensitivity tests of the model are made for different magnitudes of the large-scale vertical velocity. The most striking result is that the temperature and humidity in the cloud layer below the inversion do not change significantly in spite of a relatively large

  2. Lightweighting Automotive Materials for Increased Fuel Efficiency and Delivering Advanced Modeling and Simulation Capabilities to U.S. Manufacturers

    SciTech Connect

    Hale, Steve

    2013-09-11

    The National Center for Manufacturing Sciences (NCMS) worked with the U.S. Department of Energy (DOE), National Energy Technology Laboratory (NETL), to bring together research and development (R&D) collaborations to develop and accelerate the knowledge base and infrastructure for lightweighting materials and manufacturing processes for use in structural applications in the automotive sector. The purpose and importance of this DOE program:
    • 2016 CAFE standards.
    • Automotive industry adoption of lightweighting material concepts in the manufacture of production vehicles.
    • Development and manufacture of advanced research tools for modeling and simulation (M&S) applications to reduce manufacturing and material costs.
    • U.S. competitiveness that will help drive the development and manufacture of the next generation of materials.
    NCMS established a focused portfolio of applied R&D projects utilizing lightweighting materials for manufacture into automotive structures and components. Areas targeted in this program:
    • Functionality of new lightweighting materials to meet present safety requirements.
    • Manufacturability using new lightweighting materials.
    • Cost reduction for the development and use of new lightweighting materials.
    The automotive industry’s future continuously evolves through innovation, and lightweight materials are key in achieving a new era of lighter, more efficient vehicles. Lightweight materials are among the technical advances needed to achieve fuel/energy efficiency and reduce carbon dioxide (CO2) emissions:
    • Establish design criteria methodology to identify the best materials for lightweighting.
    • Employ state-of-the-art design tools for optimum material development for specific applications.
    • Match new manufacturing technology to production volume.
    • Address new process variability with new production-ready processes.

  3. Formation of large-scale flexible transparent conductive films using evaporative migration characteristics of Au nanoparticles.

    PubMed

    Higashitani, Ko; McNamee, Cathy E; Nakayama, Masaki

    2011-03-15

    To sustain the growing demand of transparent conductive films for wide applications, such as flat panel displays, a much more cost-effective film is required over the widely used indium tin oxide film. Here we developed a promising method to manufacture a cost-effective flexible transparent conductive film of high performance by first making grid-iron patterns of thin lines on a large scale using evaporative migration characteristics of gold nanoparticles, and then by burying the grid-iron pattern into a poly(ethylene terephthalate) film. PMID:21265505

  4. Development status of solid polymer electrolyte water electrolysis for large scale hydrogen generation

    NASA Astrophysics Data System (ADS)

    Russell, J. H.

    1981-03-01

    Solid polymer water electrolysis technology for large scale hydrogen generation is reviewed. A hydrogen generator module, capable of producing 2000 SCFH, was operated successfully for over 700 hours in the 200 kW system. Test results and further information are presented. Technology development was continued in support of improving both capital cost and conversion efficiency. Progress made in the development of the 10 sq ft active area cell included completion of the initial design, the beginning of fabrication development, and installation of new facilities for cell manufacture.

  5. A survey on routing protocols for large-scale wireless sensor networks.

    PubMed

    Li, Changle; Zhang, Hanxiao; Hao, Binbin; Li, Jiandong

    2011-01-01

    With the advances in micro-electronics, wireless sensor devices have been made much smaller and more integrated, and large-scale wireless sensor networks (WSNs), based on the cooperation of large numbers of nodes, have become a hot topic. "Large-scale" mainly means a large area or high density of a network. Accordingly, the routing protocols must scale well as the network scope extends and node density increases. A sensor node is normally energy-limited and cannot be recharged, so its energy consumption has a significant effect on the scalability of the protocol. To the best of our knowledge, currently the mainstream methods to solve the energy problem in large-scale WSNs are hierarchical routing protocols. In a hierarchical routing protocol, all the nodes are divided into several groups with different assignment levels. The nodes within the high level are responsible for data aggregation and management work, and the low level nodes for sensing their surroundings and collecting information. Hierarchical routing protocols are proved to be more energy-efficient than flat ones, in which all the nodes play the same role, especially in terms of data aggregation and the flooding of control packets. With a focus on the hierarchical structure, in this paper we provide an insight into routing protocols designed specifically for large-scale WSNs. According to their different objectives, the protocols are generally classified based on criteria such as control overhead reduction, energy consumption mitigation and energy balance. In order to gain a comprehensive understanding of each protocol, we highlight their innovative ideas, describe the underlying principles in detail and analyze their advantages and disadvantages. Moreover a comparison of the routing protocols is conducted to demonstrate the differences between them in terms of message complexity, memory requirements, localization, data aggregation, clustering manner and
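    The two-level hierarchy described above can be sketched in a few lines. The following toy Python example assumes a LEACH-style random cluster-head election (one well-known hierarchical protocol; the specific election rule is an illustrative assumption, not taken from the survey):

```python
import math
import random

def elect_clusters(nodes, p=0.2, seed=1):
    """One round of a toy two-level hierarchy: each node becomes a
    cluster head with probability p (the high level, responsible for
    aggregation), and every remaining node joins the nearest head."""
    rng = random.Random(seed)
    heads = [i for i in range(len(nodes)) if rng.random() < p]
    if not heads:  # degenerate case: force at least one head
        heads = [0]
    membership = {}
    for i, pos in enumerate(nodes):
        if i in heads:
            membership[i] = i  # heads manage themselves
        else:
            membership[i] = min(heads, key=lambda h: math.dist(pos, nodes[h]))
    return heads, membership

# 30 sensor nodes scattered over a 100 m x 100 m field
rng = random.Random(7)
nodes = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(30)]
heads, membership = elect_clusters(nodes)
```

    Rotating the head role across rounds is what balances energy drain in real protocols; this sketch shows only a single round.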

  6. ROBOTICALLY ENHANCED ADVANCED MANUFACTURING CONCEPTS TO OPTIMIZE ENERGY, PRODUCTIVITY, AND ENVIRONMENTAL PERFORMANCE

    SciTech Connect

    Larry L. Keller; Joseph M. Pack; Robert V. Kolarik II

    2007-11-05

    In the first phase of the REML project, major assets were acquired for a manufacturing line for follow-on installation, capability studies and optimization. That activity has been documented in the DE-FC36-99ID13819 final report. In this, the second phase of the REML project, most of the major assets have been installed in a manufacturing line arrangement featuring a green cell, a thermal treatment cell and a finishing cell. Most of the secondary and support assets have been acquired and installed. Assets have been integrated with a commercial, machine-tending gantry robot in the thermal treatment cell and with a low-mass, high-speed gantry robot in the finish cell. Capabilities for masterless gauging of a product’s dimensional and form characteristics were advanced. Trial production runs across the entire REML line have been undertaken. Discrete event simulation modeling has aided in line balancing and reduction of flow time. Energy, productivity and cost, and environmental comparisons to baselines have been made. Energy: The REML line in its current state of development has been measured to be about 22% (338,000 kVA-hrs) less energy intensive than the baseline conventional low volume line assuming equivalent annual production volume of approximately 51,000 races. The reduction in energy consumption is largely attributable to the energy reduction in the REML thermal treatment cell where the heating devices are energized on demand and are appropriately sized to the heating load of a near single piece flow line. If additional steps such as power factor correction and use of high-efficiency motors were implemented to further reduce energy consumption, it is estimated, but not yet demonstrated, that the REML line would be about 30% less energy intensive than the baseline conventional low volume line assuming equivalent annual production volume. Productivity: The capital cost of an REML line would be roughly equivalent to the capital cost of a new conventional line. The

  7. Iterative methods for large scale nonlinear and linear systems. Final report, 1994--1996

    SciTech Connect

    Walker, H.F.

    1997-09-01

    The major goal of this research has been to develop improved numerical methods for the solution of large-scale systems of linear and nonlinear equations, such as occur almost ubiquitously in the computational modeling of physical phenomena. The numerical methods of central interest have been Krylov subspace methods for linear systems, which have enjoyed great success in many large-scale applications, and newton-Krylov methods for nonlinear problems, which use Krylov subspace methods to solve approximately the linear systems that characterize Newton steps. Krylov subspace methods have undergone a remarkable development over the last decade or so and are now very widely used for the iterative solution of large-scale linear systems, particularly those that arise in the discretization of partial differential equations (PDEs) that occur in computational modeling. Newton-Krylov methods have enjoyed parallel success and are currently used in many nonlinear applications of great scientific and industrial importance. In addition to their effectiveness on important problems, Newton-Krylov methods also offer a nonlinear framework within which to transfer to the nonlinear setting any advances in Krylov subspace methods or preconditioning techniques, or new algorithms that exploit advanced machine architectures. This research has resulted in a number of improved Krylov and Newton-Krylov algorithms together with applications of these to important linear and nonlinear problems.

  8. Development of advanced manufacturing technologies for low cost hydrogen storage vessels

    SciTech Connect

    Leavitt, Mark; Lam, Patrick

    2014-12-29

    The U.S. Department of Energy (DOE) defined a need for low-cost gaseous hydrogen storage vessels at 700 bar to support cost goals aimed at 500,000 units per year. Existing filament winding processes produce a pressure vessel that is structurally inefficient, requiring more carbon fiber for manufacturing reasons than would otherwise be necessary. Carbon fiber is the greatest cost driver in building a hydrogen pressure vessel. The objective of this project is to develop new methods for manufacturing Type IV pressure vessels for hydrogen storage with the purpose of lowering the overall product cost through an innovative hybrid process of optimizing composite usage by combining traditional filament winding (FW) and advanced fiber placement (AFP) techniques. A number of vessels were manufactured in this project. The latest vessel design passed all the critical tests on the hybrid design per the European Commission (EC) 79-2009 standard except the extreme temperature cycle test. The tests passed include the burst test, cycle test, accelerated stress rupture test and drop test. It was discovered that the location where AFP and FW overlap for load transfer could be weakened during hydraulic cycling at 85°C. To design a vessel that passed these tests, the in-house modeling software was updated to add the capability to start and stop fiber layers to simulate the AFP process. The original in-house software was developed for filament winding only. An alternative fiber was also investigated in this project, but the added mass impacted the vessel cost negatively due to the lower performance of the alternative fiber. Overall, the project succeeded in showing that the hybrid design is a viable solution to reduce fiber usage, thus driving down the cost of fuel storage vessels. Based on DOE’s baseline vessel size of 147.3L and 91kg, the 129L vessel (scaled to the DOE baseline) in this project shows a 32% composite savings and 20% cost savings when comparing the Vessel 15 hybrid design and the Quantum

  9. Chrysler Partners with North Lake High School in an Advanced Manufacturing Technology Program for Special Needs Students.

    ERIC Educational Resources Information Center

    Karbon, Patrick J.; Kuhn, Cynthia

    1996-01-01

    Chrysler Corporation and North Lake High School cooperated to develop and deploy Advanced Manufacturing Technology for high school students identified as at risk or hard to serve. Chrysler provided curriculum that was delivered by training center instructors; teachers ensured student competence in academic areas. (JOW)

  10. Large-scale quantification of CVD graphene surface coverage

    NASA Astrophysics Data System (ADS)

    Ambrosi, Adriano; Bonanni, Alessandra; Sofer, Zdeněk; Pumera, Martin

    2013-02-01

    The extraordinary properties demonstrated for graphene and graphene-related materials can be fully exploited when a large-scale fabrication procedure is made available. Chemical vapor deposition (CVD) of graphene on Cu and Ni substrates is one of the most promising procedures to synthesize large-area and good quality graphene films. Parallel to the fabrication process, a large-scale quality monitoring technique is equally crucial. We demonstrate here a rapid and simple methodology that is able to probe the effectiveness of the growth process over a large substrate area for both Ni and Cu substrates. This method is based on inherent electrochemical signals generated by the underlying metal catalysts when fractures or discontinuities of the graphene film are present. The method can be applied immediately after the CVD growth process without the need for any graphene transfer step and represents a powerful quality monitoring technique for the assessment of large-scale fabrication of graphene by the CVD process.

  11. Final Report - Advanced MEA's for Enhanced Operating Conditions, Amenable to High Volume Manufacture

    SciTech Connect

    Debe, Mark K.

    2007-09-30

    This report summarizes the work completed under a 3M/DOE contract directed at advancing the key fuel cell (FC) components most critical for overcoming the polymer electrolyte membrane fuel cell (PEMFC) performance, durability & cost barriers. This contract focused on the development of advanced ion exchange membranes & electrocatalysts for PEMFCs that will enable operation under ever more demanding automotive operating conditions & the use of high volume compatible processes for their manufacture. Higher performing & more durable electrocatalysts must be developed for PEMFCs to meet the power density & lifetime hours required for FC vehicles. At the same time the amount of expensive Pt catalyst must be reduced to lower the MEA costs. While these two properties are met, the catalyst must be made resistant to multiple degradation mechanisms to reach necessary operating lifetimes. In this report, we present the work focused on the development of a completely new approach to PEMFC electrocatalysts, called nanostructured thin film (NSTF) catalysts. The carbon black supports are eliminated with this new approach, which eliminates the carbon corrosion issue. The thin film nature of the catalyst significantly improves its robustness against dissolution & grain growth, preserving the surface area. Also, the activity of the NSTF for oxygen reduction is improved by over 500% compared to dispersed Pt catalysts. Finally, the process for fabricating the NSTF catalysts is consistent with high volume roll-good manufacturing & extremely flexible towards the introduction of new catalyst compositions & structures. This report documents the work done to develop new multi-element NSTF catalysts with properties that exceed pure Pt, that are optimized for use with the membranes discussed below, & advance the state-of-the-art towards meeting the DOE 2010 targets for PEMFC electrocatalysts. The work completed advances the understanding of the NSTF catalyst technology, identifies new NSTF

  12. Report to the President on Ensuring American Leadership in Advanced Manufacturing

    ERIC Educational Resources Information Center

    Anderson, Alan

    2011-01-01

    The United States has long thrived as a result of its ability to manufacture goods and sell them to global markets. Manufacturing activity has supported its economic growth, leading the Nation's exports and employing millions of Americans. The manufacturing sector has also driven knowledge production and innovation in the United States, by…

  13. Implementation strategy of wafer-plane and aerial-plane inspection for advanced mask manufacture

    NASA Astrophysics Data System (ADS)

    Kim, Won-Sun; Chung, Dong-Hoon; Jeon, Chan-Uk; Cho, HanKu; Huang, William; Miller, John; Inderhees, Gregg; Pinto, Becky; Hur, Jiuk; Park, Kihun; Han, Jay

    2009-04-01

    Inspection of aggressive Optical Proximity Correction (OPC) designs, improvement of usable sensitivity, and reduction of cost of ownership are the three major challenges for today's mask inspection methodologies. In this paper we will discuss using aerial-plane inspection and wafer-plane inspection as novel approaches to address these challenges for advanced reticles. Wafer-plane inspection (WPI) and aerial-plane inspection (API) are two lithographic inspection modes. This suite of new inspection modes is based on high resolution reflected and transmitted light images in the reticle plane. These images together with scanner parameters are used to generate the aerial plane image using either vector or scalar models. Then information about the resist is applied to complete construction of the wafer plane image. API reports defects based on intensity differences between test and reference images at the aerial plane, whereas WPI applies a resist model to the aerial image to enhance discrimination between printable and non-printable defects at the wafer plane. The combination of WPI and API along with the industry standard Reticle Plane Inspection (RPI) is designed to handle complex OPC features, improve usable sensitivity and reduce the cost of ownership. This paper will explore the application of aerial-plane and wafer-plane die-to-die inspections on advanced reticles. Inspection sensitivity, inspectability, and comparison with the Aerial Imaging Measurement System (AIMS™ [1]) or wafer-print-line will be analyzed. Most importantly, the implementation strategy of a combination of WPI and API along with RPI for leading-edge mask manufacturing will be discussed.

  14. LARGE-SCALE MOTIONS IN THE PERSEUS GALAXY CLUSTER

    SciTech Connect

    Simionescu, A.; Werner, N.; Urban, O.; Allen, S. W.; Fabian, A. C.; Sanders, J. S.; Mantz, A.; Nulsen, P. E. J.; Takei, Y.

    2012-10-01

    By combining large-scale mosaics of ROSAT PSPC, XMM-Newton, and Suzaku X-ray observations, we present evidence for large-scale motions in the intracluster medium of the nearby, X-ray bright Perseus Cluster. These motions are suggested by several alternating and interleaved X-ray bright, low-temperature, low-entropy arcs located along the east-west axis, at radii ranging from ~10 kpc to over a Mpc. Thermodynamic features qualitatively similar to these have previously been observed in the centers of cool-core clusters, and were successfully modeled as a consequence of the gas sloshing/swirling motions induced by minor mergers. Our observations indicate that such sloshing/swirling can extend out to larger radii than previously thought, on scales approaching the virial radius.

  15. Generating Large-Scale Longitudinal Data Resources for Aging Research

    PubMed Central

    Hofer, Scott M.

    2011-01-01

    Objectives. The need for large studies and the types of large-scale data resources (LSDRs) are discussed along with their general scientific utility, role in aging research, and affordability. The diversification of approaches to large-scale data resourcing is described in order to facilitate their use in aging research. Methods. The need for LSDRs is discussed in terms of (a) large sample size; (b) longitudinal design; (c) as platforms for additional investigator-initiated research projects; and (d) broad-based access to core genetic, biological, and phenotypic data. Discussion. It is concluded that a “lite-touch, lo-tech, lo-cost” approach to LSDRs is a viable strategy for the development of LSDRs and would enhance the likelihood of LSDRs being established which are dedicated to the wide range of important aging-related issues. PMID:21743049

  16. Lagrangian space consistency relation for large scale structure

    NASA Astrophysics Data System (ADS)

    Horn, Bart; Hui, Lam; Xiao, Xiao

    2015-09-01

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias & Riotto and Peloso & Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.

  17. Instrumentation Development for Large Scale Hypersonic Inflatable Aerodynamic Decelerator Characterization

    NASA Technical Reports Server (NTRS)

    Swanson, Gregory T.; Cassell, Alan M.

    2011-01-01

    Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology is currently being considered for multiple atmospheric entry applications as the limitations of traditional entry vehicles have been reached. The Inflatable Re-entry Vehicle Experiment (IRVE) has successfully demonstrated this technology as a viable candidate with a 3.0 m diameter vehicle sub-orbital flight. To further this technology, large scale HIADs (6.0–8.5 m) must be developed and tested. To characterize the performance of large scale HIAD technology, new instrumentation concepts must be developed to accommodate the flexible nature of the inflatable aeroshell. Many of the concepts that are under consideration for the HIAD FY12 subsonic wind tunnel test series are discussed below.

  18. The Large Scale Synthesis of Aligned Plate Nanostructures.

    PubMed

    Zhou, Yang; Nash, Philip; Liu, Tian; Zhao, Naiqin; Zhu, Shengli

    2016-01-01

    We propose a novel technique for the large-scale synthesis of aligned-plate nanostructures that are self-assembled and self-supporting. The synthesis technique involves developing nanoscale two-phase microstructures through discontinuous precipitation followed by selective etching to remove one of the phases. The method may be applied to any alloy system in which the discontinuous precipitation transformation goes to completion. The resulting structure may have many applications in catalysis, filtering and thermal management depending on the phase selection and added functionality through chemical reaction with the retained phase. The synthesis technique is demonstrated using the discontinuous precipitation of a γ' phase, (Ni, Co)3Al, followed by selective dissolution of the γ matrix phase. The production of the nanostructure requires heat treatments on the order of minutes and can be performed on a large scale making this synthesis technique of great economic potential. PMID:27439672

  19. Large-scale processes in the solar nebula

    NASA Technical Reports Server (NTRS)

    Boss, A. P.

    1994-01-01

    Theoretical models of the structure of a minimum mass solar nebula should be able to provide the physical context to help evaluate the efficacy of any mechanism proposed for the formation of chondrules or Ca, Al-rich inclusions (CAI's). These models generally attempt to use the equations of radiative hydrodynamics to calculate the large-scale structure of the solar nebula throughout the planet-forming region. In addition, it has been suggested that chondrules and CAI's (=Ch&CAI's) may have been formed as a direct result of large-scale nebula processing such as passage of material through high-temperature regions associated with the global structure of the nebula. In this report we assess the status of global models of solar nebula structure and of various related mechanisms that have been suggested for Ch and CAI formation.

  20. Planar Doppler Velocimetry for Large-Scale Wind Tunnel Applications

    NASA Technical Reports Server (NTRS)

    McKenzie, Robert L.

    1998-01-01

    Planar Doppler Velocimetry (PDV) concepts using a pulsed laser are described and the obtainable minimum resolved velocities in large-scale wind tunnels are evaluated. Velocity-field measurements are shown to be possible at ranges of tens of meters and with single pulse resolutions as low as 2 m/s. Velocity measurements in the flow of a low-speed, turbulent jet are reported that demonstrate the ability of PDV to acquire both average velocity fields and their fluctuation amplitudes, using procedures that are compatible with large-scale facility operations. The advantages of PDV over current Laser Doppler Anemometry and Particle Image Velocimetry techniques appear to be significant for applications to large facilities.
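
    The minimum resolved velocity quoted above maps directly onto a measurable frequency shift through the standard PDV Doppler relation, Δf = (ô − î)·V/λ. A rough, self-contained illustration follows; the 532 nm wavelength and back-scatter geometry are assumptions for the sketch, not values taken from this paper.

```python
import numpy as np

# Assumed laser wavelength: 532 nm (frequency-doubled Nd:YAG, a common
# choice for iodine-filter PDV systems; not stated in this abstract).
wavelength = 532e-9  # meters

def doppler_shift(velocity, o_hat, i_hat, wavelength=wavelength):
    """Frequency shift (Hz) of light scattered from particles moving with
    `velocity` (m/s), given unit observation (o_hat) and illumination
    (i_hat) direction vectors: df = (o_hat - i_hat) . V / wavelength."""
    sens = np.asarray(o_hat, dtype=float) - np.asarray(i_hat, dtype=float)
    return np.dot(sens, np.asarray(velocity, dtype=float)) / wavelength

# 2 m/s along the sensitivity vector in a direct back-scatter geometry.
shift = doppler_shift([0.0, 0.0, 2.0], o_hat=[0, 0, 1], i_hat=[0, 0, -1])
```

    For the 2 m/s single-pulse resolution cited in the abstract, this geometry gives a shift of roughly 7.5 MHz, the scale of frequency change that a molecular-filter PDV receiver converts into an intensity change.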

  1. Transparent and Flexible Large-scale Graphene-based Heater

    NASA Astrophysics Data System (ADS)

    Kang, Junmo; Lee, Changgu; Kim, Young-Jin; Choi, Jae-Boong; Hong, Byung Hee

    2011-03-01

    We report a transparent and flexible heater with high optical transmittance and low sheet resistance based on graphene films, showing outstanding thermal and electrical properties. The large-scale graphene films were grown on Cu foil by chemical vapor deposition and transferred to transparent substrates by multiple stacking. A wet chemical doping process enhanced the electrical properties, yielding a sheet resistance as low as 35 ohm/sq at 88.5% transmittance. The temperature response depends on the dimensions and the sheet resistance of the graphene-based heater. We show that a 4x4 cm2 heater can reach 80 °C within 40 seconds and that a large-scale (9x9 cm2) heater shows uniform heating performance, as measured with a thermocouple and an infrared camera. These heaters would be very useful for defogging systems and smart windows.

  2. Large Scale Diffuse X-ray Emission from Abell 3571

    NASA Technical Reports Server (NTRS)

    Molnar, Sandor M.; White, Nicholas E. (Technical Monitor)

    2001-01-01

    Observations of the Lyman-alpha forest suggest that there are many more baryons at high redshift than we can find in the nearby Universe. The largest known concentration of baryons in the nearby Universe is the Shapley supercluster. We scanned the Shapley supercluster to search for large scale diffuse emission with the Rossi X-ray Timing Explorer (RXTE), and found some evidence for such emission. Large scale diffuse emission may be associated with the supercluster itself, or with the clusters of galaxies within it. In this paper we present results of scans near Abell 3571. We found that the sum of a cooling flow and an isothermal beta model adequately describes the X-ray emission from the cluster. Our results suggest that diffuse emission from A3571 extends out to about two virial radii. We briefly discuss the importance of determining the cutoff radius of the beta model.
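
    For reference, the isothermal beta model used in such fits has the standard surface-brightness form S(r) = S0 [1 + (r/r_c)^2]^(1/2 − 3β). A minimal sketch follows, with hypothetical parameter values rather than the ones fitted in this paper.

```python
import numpy as np

def beta_model(r, s0, r_core, beta):
    """Isothermal beta-model X-ray surface brightness profile,
    S(r) = s0 * [1 + (r/r_core)^2] ** (0.5 - 3*beta)."""
    return s0 * (1.0 + (r / r_core) ** 2) ** (0.5 - 3.0 * beta)

# Hypothetical parameters for illustration only (not the A3571 fit):
# central brightness 1 (arbitrary units), core radius 200 kpc, beta = 2/3.
r = np.linspace(0.0, 2000.0, 5)  # projected radius in kpc
profile = beta_model(r, s0=1.0, r_core=200.0, beta=2.0 / 3.0)
```

    The cutoff radius discussed in the abstract is the radius beyond which this profile is truncated; how far out the model extends changes the total inferred diffuse emission.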

  3. Large Scale Deformation of the Western US Cordillera

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.

    2001-01-01

    Destructive earthquakes occur throughout the western US Cordillera (WUSC), not just within the San Andreas fault zone. But because we do not understand the present-day large-scale deformations of the crust throughout the WUSC, our ability to assess the potential for seismic hazards in this region remains severely limited. To address this problem, we are using a large collection of Global Positioning System (GPS) networks which spans the WUSC to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our work can roughly be divided into an analysis of the GPS observations to infer the deformation field across and within the entire plate boundary zone and an investigation of the implications of this deformation field regarding plate boundary dynamics.

  4. The Large Scale Synthesis of Aligned Plate Nanostructures

    PubMed Central

    Zhou, Yang; Nash, Philip; Liu, Tian; Zhao, Naiqin; Zhu, Shengli

    2016-01-01

    We propose a novel technique for the large-scale synthesis of aligned-plate nanostructures that are self-assembled and self-supporting. The synthesis technique involves developing nanoscale two-phase microstructures through discontinuous precipitation followed by selective etching to remove one of the phases. The method may be applied to any alloy system in which the discontinuous precipitation transformation goes to completion. The resulting structure may have many applications in catalysis, filtering and thermal management depending on the phase selection and added functionality through chemical reaction with the retained phase. The synthesis technique is demonstrated using the discontinuous precipitation of a γ′ phase, (Ni, Co)3Al, followed by selective dissolution of the γ matrix phase. The production of the nanostructure requires heat treatments on the order of minutes and can be performed on a large scale making this synthesis technique of great economic potential. PMID:27439672

  5. Large scale meteorological influence during the Geysers 1979 field experiment

    SciTech Connect

    Barr, S.

    1980-01-01

    A series of meteorological field measurements conducted during July 1979 near Cobb Mountain in Northern California reveals evidence of several scales of atmospheric circulation consistent with the climatic pattern of the area. The scales of influence are reflected in the structure of wind and temperature in vertically stratified layers at a given observation site. Large scale synoptic gradient flow dominates the wind field above about twice the height of the topographic ridge. Below that there is a mixture of effects with evidence of a diurnal sea breeze influence and a sublayer of katabatic winds. The July observations demonstrate that weak migratory circulations in the large scale synoptic meteorological pattern have a significant influence on the day-to-day gradient winds and must be accounted for in planning meteorological programs including tracer experiments.

  6. Large-scale quantum effects in biological systems

    NASA Astrophysics Data System (ADS)

    Mesquita, Marcus V.; Vasconcellos, Áurea R.; Luzzi, Roberto; Mascarenhas, Sergio

    Particular aspects of large-scale quantum effects in biological systems, such as biopolymers and also microtubules in the cytoskeleton of neurons, which can have relevance in brain functioning, are discussed. The microscopic (quantum mechanical) and macroscopic (quantum statistical mechanical) aspects, and the emergence of complex behavior, are described. This phenomenon consists of the large-scale coherent process of Fröhlich-Bose-Einstein condensation in open and sufficiently far-from-equilibrium biopolymers. Associated with this phenomenon is the presence of Schrödinger-Davydov solitons, which propagate, undistorted and undamped, when embedded in the Fröhlich-Bose-Einstein condensate, thus allowing for the transmission of signals over long distances, a question relevant to bioenergetics.

  7. GAIA: A WINDOW TO LARGE-SCALE MOTIONS

    SciTech Connect

    Nusser, Adi; Branchini, Enzo; Davis, Marc E-mail: branchin@fis.uniroma3.it

    2012-08-10

    Using redshifts as a proxy for galaxy distances, estimates of the two-dimensional (2D) transverse peculiar velocities of distant galaxies could be obtained from future measurements of proper motions. We provide the mathematical framework for analyzing 2D transverse motions and show that they offer several advantages over traditional probes of large-scale motions. They are completely independent of any intrinsic relations between galaxy properties; hence, they are essentially free of selection biases. They are free from homogeneous and inhomogeneous Malmquist biases that typically plague distance indicator catalogs. They provide additional information to traditional probes that yield line-of-sight peculiar velocities only. Further, because of their 2D nature, fundamental questions regarding vorticity of large-scale flows can be addressed. Gaia, for example, is expected to provide proper motions of at least bright galaxies with high central surface brightness, making proper motions a likely contender for traditional probes based on current and future distance indicator measurements.

  8. Large-scale objective phenotyping of 3D facial morphology

    PubMed Central

    Hammond, Peter; Suttie, Michael

    2012-01-01

    Abnormal phenotypes have played significant roles in the discovery of gene function, but organized collection of phenotype data has been overshadowed by developments in sequencing technology. In order to study phenotypes systematically, large-scale projects with standardized objective assessment across populations are considered necessary. The report of the 2006 Human Variome Project meeting recommended documentation of phenotypes through electronic means by collaborative groups of computational scientists and clinicians using standard, structured descriptions of disease-specific phenotypes. In this report, we describe progress over the past decade in 3D digital imaging and shape analysis of the face, and future prospects for large-scale facial phenotyping. Illustrative examples are given throughout using a collection of 1107 3D face images of healthy controls and individuals with a range of genetic conditions involving facial dysmorphism. PMID:22434506

  9. Electron drift in a large scale solid xenon

    SciTech Connect

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

    A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. Thus, an electron drift speed a factor of two faster is demonstrated in solid-phase xenon compared to that in the liquid phase.
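
    The quoted numbers can be checked with a few lines of arithmetic; the sketch below simply recomputes the drift times over the 8.0 cm of uniform field and the factor-of-two speed ratio from the reported speeds.

```python
# Drift speeds reported in the abstract (cm/us) and the uniform-field
# drift length (cm) over which they were measured.
v_liquid = 0.193   # liquid xenon at 163 K
v_solid = 0.397    # solid xenon at 157 K
drift_length = 8.0  # cm of uniform 900 V/cm field

# Implied drift times in microseconds.
t_liquid = drift_length / v_liquid
t_solid = drift_length / v_solid

# The "factor of two" quoted in the abstract.
ratio = v_solid / v_liquid
```

    The ratio works out to about 2.06, consistent with the stated factor-of-two speedup; the corresponding drift times are roughly 41 μs in the liquid and 20 μs in the solid.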

  10. Electron drift in a large scale solid xenon

    DOE PAGESBeta

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

    A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. Thus, an electron drift speed a factor of two faster is demonstrated in solid-phase xenon compared to that in the liquid phase.

  11. Large-scale micropropagation system of plant cells.

    PubMed

    Honda, Hiroyuki; Kobayashi, Takeshi

    2004-01-01

    Plant micropropagation is an efficient method of propagating disease-free, genetically uniform plants in vitro in massive amounts. The scale-up of the whole micropropagation process should be established as an economically feasible technology for large-scale production in appropriate bioreactors. It is necessary to design a suitable bioreactor configuration which can provide adequate mixing and mass transfer while minimizing the intensity of shear stress and hydrodynamic pressure. Automatic selection of embryogenic calli and regenerated plantlets using an image analysis system should be associated with the system. The aim of this chapter is to identify the problems related to large-scale plant micropropagation via somatic embryogenesis, and to summarize the micropropagation technology and computer-aided image analysis. Viscous-additive-supplemented culture, including our successful results for callus regeneration, is also introduced. PMID:15453194

  12. Individual skill differences and large-scale environmental learning.

    PubMed

    Fields, Alexa W; Shelton, Amy L

    2006-05-01

    Spatial skills are known to vary widely among normal individuals. This project was designed to address whether these individual differences are differentially related to large-scale environmental learning from route (ground-level) and survey (aerial) perspectives. Participants learned two virtual environments (route and survey) with limited exposure and were tested on judgments about the relative locations of objects. They also performed a series of spatial and nonspatial component skill tests. With limited learning, performance after route encoding was worse than performance after survey encoding. Furthermore, performance after route and survey encoding appeared to be preferentially linked to perspective and object-based transformations, respectively. Together, the results provide clues to how different skills might be engaged by different individuals for the same goal of learning a large-scale environment. PMID:16719662

  13. Quantum Noise in Large-Scale Coherent Nonlinear Photonic Circuits

    NASA Astrophysics Data System (ADS)

    Santori, Charles; Pelc, Jason S.; Beausoleil, Raymond G.; Tezak, Nikolas; Hamerly, Ryan; Mabuchi, Hideo

    2014-06-01

    A semiclassical simulation approach is presented for studying quantum noise in large-scale photonic circuits incorporating an ideal Kerr nonlinearity. A circuit solver is used to generate matrices defining a set of stochastic differential equations, in which the resonator field variables represent random samplings of the Wigner quasiprobability distributions. Although the semiclassical approach involves making a large-photon-number approximation, tests on one- and two-resonator circuits indicate satisfactory agreement between the semiclassical and full-quantum simulation results in the parameter regime of interest. The semiclassical model is used to simulate random errors in a large-scale circuit that contains 88 resonators and hundreds of components in total and functions as a four-bit ripple counter. The error rate as a function of on-state photon number is examined, and it is observed that the quantum fluctuation amplitudes do not increase as signals propagate through the circuit, an important property for scalability.
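
    As an illustration of the semiclassical method described above, the sketch below integrates a single driven Kerr resonator with Euler-Maruyama steps, sampling the initial field from a vacuum Wigner distribution. All parameter values and the exact noise normalization are assumptions for a toy model, not the paper's circuit equations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not from the paper): cavity decay rate,
# detuning, Kerr coefficient, and coherent drive amplitude.
kappa, delta, chi, drive = 1.0, 0.0, 0.01, 3.0
dt, steps = 1e-3, 5000

# Wigner-style initial condition: vacuum sampled as a complex Gaussian
# with variance 1/2 (1/4 per quadrature).
alpha = rng.normal(scale=0.5) + 1j * rng.normal(scale=0.5)

traj = [alpha]
for _ in range(steps):
    # Vacuum-noise increment (normalization is an assumption of the toy).
    noise = np.sqrt(kappa * dt / 4) * (rng.normal() + 1j * rng.normal())
    # Semiclassical drift: decay, detuning, Kerr rotation, and drive.
    drift = (-(kappa / 2 + 1j * delta) * alpha
             - 1j * chi * abs(alpha) ** 2 * alpha + drive)
    alpha = alpha + drift * dt + noise
    traj.append(alpha)

photon_number = abs(traj[-1]) ** 2  # on-state photon number estimate
```

    Averaging many such stochastically sampled trajectories yields Wigner-distribution moments; in the paper's setting a circuit solver generates the coupled drift matrices for all 88 resonators instead of this single-mode toy.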

  14. Large-scale flow generation in turbulent convection

    PubMed Central

    Krishnamurti, Ruby; Howard, Louis N.

    1981-01-01

    In a horizontal layer of fluid heated from below and cooled from above, cellular convection with horizontal length scale comparable to the layer depth occurs for small enough values of the Rayleigh number. As the Rayleigh number is increased, cellular flow disappears and is replaced by a random array of transient plumes. Upon further increase, these plumes drift in one direction near the bottom and in the opposite direction near the top of the layer with the axes of plumes tilted in such a way that horizontal momentum is transported upward via the Reynolds stress. With the onset of this large-scale flow, the largest scale of motion has increased from that comparable to the layer depth to a scale comparable to the layer width. The conditions for occurrence and determination of the direction of this large-scale circulation are described. PMID:16592996

  15. The Large Scale Synthesis of Aligned Plate Nanostructures

    NASA Astrophysics Data System (ADS)

    Zhou, Yang; Nash, Philip; Liu, Tian; Zhao, Naiqin; Zhu, Shengli

    2016-07-01

    We propose a novel technique for the large-scale synthesis of aligned-plate nanostructures that are self-assembled and self-supporting. The synthesis technique involves developing nanoscale two-phase microstructures through discontinuous precipitation followed by selective etching to remove one of the phases. The method may be applied to any alloy system in which the discontinuous precipitation transformation goes to completion. The resulting structure may have many applications in catalysis, filtering and thermal management depending on the phase selection and added functionality through chemical reaction with the retained phase. The synthesis technique is demonstrated using the discontinuous precipitation of a γ′ phase, (Ni, Co)3Al, followed by selective dissolution of the γ matrix phase. The production of the nanostructure requires heat treatments on the order of minutes and can be performed on a large scale making this synthesis technique of great economic potential.

  16. Large-scale behavior and statistical equilibria in rotating flows

    NASA Astrophysics Data System (ADS)

    Mininni, P. D.; Dmitruk, P.; Matthaeus, W. H.; Pouquet, A.

    2011-01-01

    We examine long-time properties of the ideal dynamics of three-dimensional flows, in the presence or not of an imposed solid-body rotation and with or without helicity (velocity-vorticity correlation). In all cases, the results agree with the isotropic predictions stemming from statistical mechanics. No accumulation of excitation occurs in the large scales, although, in the dissipative rotating case, anisotropy and accumulation, in the form of an inverse cascade of energy, are known to occur. We attribute this latter discrepancy to the linearity of the term responsible for the emergence of inertial waves. At intermediate times, inertial energy spectra emerge that differ somewhat from classical wave-turbulence expectations and with a trace of large-scale excitation that goes away for long times. These results are discussed in the context of partial two dimensionalization of the flow undergoing strong rotation as advocated by several authors.

  17. The workshop on iterative methods for large scale nonlinear problems

    SciTech Connect

    Walker, H.F.; Pernice, M.

    1995-12-01

    The aim of the workshop was to bring together researchers working on large scale applications with numerical specialists of various kinds. Applications that were addressed included reactive flows (combustion and other chemically reacting flows, tokamak modeling), porous media flows, cardiac modeling, chemical vapor deposition, image restoration, macromolecular modeling, and population dynamics. Numerical areas included Newton iterative (truncated Newton) methods, Krylov subspace methods, domain decomposition and other preconditioning methods, large scale optimization and optimal control, and parallel implementations and software. This report offers a brief summary of workshop activities and information about the participants. Interested readers are encouraged to look into an online proceedings available at http://www.usi.utah.edu/logan.proceedings. There, the material offered here is augmented with hypertext abstracts that include links to locations such as speakers' home pages, PostScript copies of talks and papers, cross-references to related talks, and other information about topics addressed at the workshop.

  18. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    SciTech Connect

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we proposed the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
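
    A low-rank kernel approximation built from prototype vectors can be sketched with a Nystrom-style construction, a standard technique of the same family as the PVM's approximation; the RBF kernel, the prototype choice, and all parameters below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix between row-sample sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_approx(X, prototypes, gamma=1.0):
    """Low-rank approximation K ~= C W^+ C^T of the full n x n kernel
    matrix, built from a small set of m prototype vectors."""
    C = rbf_kernel(X, prototypes, gamma)           # n x m cross-kernel
    W = rbf_kernel(prototypes, prototypes, gamma)  # m x m prototype kernel
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
protos = X[:10]  # hypothetical prototype choice: first 10 samples
K_approx = nystrom_approx(X, protos)
```

    With m prototypes the storage and solve costs scale with m rather than n, which is the source of the scalability claimed for prototype-based SSL.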

  19. Analysis plan for 1985 large-scale tests. Technical report

    SciTech Connect

    McMullan, F.W.

    1983-01-01

    The purpose of this effort is to assist DNA in planning for large-scale (upwards of 5000 tons) detonations of conventional explosives in the 1985 and beyond time frame. Primary research objectives were to investigate potential means to increase blast duration and peak pressures. This report identifies and analyzes several candidate explosives. It examines several charge designs and identifies advantages and disadvantages of each. Other factors including terrain and multiburst techniques are addressed as are test site considerations.

  20. Large-scale Alfvén vortices

    NASA Astrophysics Data System (ADS)

    Onishchenko, O. G.; Pokhotelov, O. A.; Horton, W.; Scullion, E.; Fedun, V.

    2015-12-01

    The new type of large-scale vortex structures of dispersionless Alfvén waves in collisionless plasma is investigated. It is shown that Alfvén waves can propagate in the form of Alfvén vortices of finite characteristic radius and characterised by magnetic flux ropes carrying orbital angular momentum. The structure of the toroidal and radial velocity, fluid and magnetic field vorticity, the longitudinal electric current in the plane orthogonal to the external magnetic field are discussed.

  1. Large-Scale Weather Disturbances in Mars’ Southern Extratropics

    NASA Astrophysics Data System (ADS)

    Hollingsworth, Jeffery L.; Kahre, Melinda A.

    2015-11-01

    Between late autumn and early spring, Mars’ middle and high latitudes within its atmosphere support strong mean thermal gradients between the tropics and poles. Observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) indicate that this strong baroclinicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). These extratropical weather disturbances are key components of the global circulation. Such wave-like disturbances act as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of large-scale, traveling extratropical synoptic-period disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively lifted and radiatively active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer occurs). Compared to their northern-hemisphere counterparts, southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are examined. Simulations that adapt Mars’ full topography compared to simulations that utilize synthetic topographies emulating key large-scale features of the southern middle latitudes indicate that Mars’ transient barotropic/baroclinic eddies are highly influenced by the great impact basins of this hemisphere (e.g., Argyre and Hellas). The occurrence of a southern storm zone in late winter and early spring appears to be anchored to the western hemisphere via orographic influences from the Tharsis highlands, and the Argyre

  2. The Phoenix series large scale LNG pool fire experiments.

    SciTech Connect

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the potential risk to the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data cover spills much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards from a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate, and therefore the physics and hazards of large LNG spills and fires.
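
    The reduced-scale burner tests relate flame height to pool diameter through a nondimensional heat release rate Q*. One standard member of this correlation family, Heskestad's L/D = 3.7 Q*^(2/5) − 1.02, can serve as a sketch of the functional form; it is not necessarily the correlation fit in this report, and the ambient-property values below are assumptions.

```python
def q_star(Q, D, rho=1.2, cp=1.0, T=293.0, g=9.81):
    """Nondimensional heat release rate
    Q* = Q / (rho * cp * T * sqrt(g*D) * D^2),
    with Q in kW, D in m, and assumed ambient-air properties
    (rho in kg/m^3, cp in kJ/(kg K), T in K)."""
    return Q / (rho * cp * T * (g * D) ** 0.5 * D ** 2)

def heskestad_flame_height(Q_star, diameter):
    """Mean flame height (m) from the Heskestad correlation,
    L/D = 3.7 * Q*^(2/5) - 1.02."""
    return diameter * (3.7 * Q_star ** 0.4 - 1.02)

# Illustration only: a hypothetical 21 m diameter fire at Q* = 1.
L = heskestad_flame_height(1.0, 21.0)
```

    The burner tests in effect map out this L/D versus Q* relation at small scale so it can be extrapolated to the 21 m and 81 m pool fires.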

  3. Cosmic string and formation of large scale structure.

    NASA Astrophysics Data System (ADS)

    Fang, L.-Z.; Xiang, S.-P.

    Cosmic strings formed during phase transitions in the early universe may be the cause of galaxy formation and clustering. The advantage of the string model is that it can give a consistent explanation of all observed results related to large-scale structure, such as the correlation functions of galaxies, clusters and superclusters, the existence of voids and/or bubbles, and the anisotropy of the cosmic background radiation. A systematic review of the string model is given.

  4. Large-scale Alfvén vortices

    SciTech Connect

    Onishchenko, O. G.; Horton, W.; Scullion, E.; Fedun, V.

    2015-12-15

    The new type of large-scale vortex structures of dispersionless Alfvén waves in collisionless plasma is investigated. It is shown that Alfvén waves can propagate in the form of Alfvén vortices of finite characteristic radius and characterised by magnetic flux ropes carrying orbital angular momentum. The structure of the toroidal and radial velocity, fluid and magnetic field vorticity, the longitudinal electric current in the plane orthogonal to the external magnetic field are discussed.

  5. Climate: large-scale warming is not urban.

    PubMed

    Parker, David E

    2004-11-18

    Controversy has persisted over the influence of urban warming on reported large-scale surface-air temperature trends. Urban heat islands occur mainly at night and are reduced in windy conditions. Here we show that, globally, temperatures over land have risen as much on windy nights as on calm nights, indicating that the observed overall warming is not a consequence of urban development. PMID:15549087

  6. Relic vector field and CMB large scale anomalies

    SciTech Connect

    Chen, Xingang; Wang, Yi E-mail: yw366@cam.ac.uk

    2014-10-01

    We study the most general effects of relic vector fields on the inflationary background and density perturbations. Such effects are observable if the number of inflationary e-folds is close to the minimum requirement to solve the horizon problem. We show that this can potentially explain two CMB large scale anomalies: the quadrupole-octopole alignment and the quadrupole power suppression. We discuss its effect on the parity anomaly. We also provide an analytical template for more detailed data comparison.

  7. Turbulent large-scale structure effects on wake meandering

    NASA Astrophysics Data System (ADS)

    Muller, Y.-A.; Masson, C.; Aubrun, S.

    2015-06-01

    This work studies the effects of large-scale turbulent structures on wake meandering using Large Eddy Simulations (LES) over an actuator disk. Other potential sources of wake meandering, such as the instability mechanisms associated with tip vortices, are not treated in this study. A crucial element of the efficient, pragmatic and successful simulation of large-scale turbulent structures in the Atmospheric Boundary Layer (ABL) is the generation of the stochastic turbulent atmospheric flow. This is an essential capability since one source of wake meandering is these large (larger than the turbine diameter) turbulent structures. The unsteady wind turbine wake in the ABL is simulated using a combination of LES and actuator disk approaches. In order to dedicate the large majority of the available computing power to the wake, the ABL ground region of the flow is not part of the computational domain. Instead, mixed Dirichlet/Neumann boundary conditions are applied at all the computational surfaces except at the outlet. Prescribed values for the Dirichlet contribution of these boundary conditions are provided by a stochastic turbulent wind generator. This makes it possible to simulate large-scale turbulent structures (larger than the computational domain), leading to an efficient simulation technique for wake meandering. Since the stochastic wind generator includes shear, turbulence production is included in the analysis without the need to resolve the flow near the ground. The classical Smagorinsky sub-grid model is used. The resulting numerical methodology has been implemented in OpenFOAM. Comparisons with experimental measurements in porous-disk wakes have been undertaken, and the agreement is good. While the temporal resolution of the experimental measurements is high, the spatial resolution is often too low. The LES numerical results provide a more complete spatial description of the flow. They tend to demonstrate that inflow low-frequency content, or large-scale turbulent structures, is at the origin of wake meandering.

  8. Space transportation booster engine thrust chamber technology, large scale injector

    NASA Technical Reports Server (NTRS)

    Schneider, J. A.

    1993-01-01

    The objective of the Large Scale Injector (LSI) program was to deliver a 21 inch diameter, 600,000 lbf thrust class injector to NASA/MSFC for hot fire testing. The hot fire test program would demonstrate the feasibility and integrity of the full scale injector, including combustion stability, chamber wall compatibility (thermal management), and injector performance. The 21 inch diameter injector was delivered in September of 1991.

  9. Supporting large scale applications on networks of workstations

    NASA Technical Reports Server (NTRS)

    Cooper, Robert; Birman, Kenneth P.

    1989-01-01

    Distributed applications on networks of workstations are an increasingly common way to satisfy computing needs. However, existing mechanisms for distributed programming exhibit poor performance and reliability as application size increases. Extension of the ISIS distributed programming system to support large scale distributed applications by providing hierarchical process groups is discussed. Incorporation of hierarchy in the program structure and exploitation of this to limit the communication and storage required in any one component of the distributed system is examined.

  10. Technology-design-manufacturing co-optimization for advanced mobile SoCs

    NASA Astrophysics Data System (ADS)

    Yang, Da; Gan, Chock; Chidambaram, P. R.; Nallapadi, Giri; Zhu, John; Song, S. C.; Xu, Jeff; Yeap, Geoffrey

    2014-03-01

    How to maintain Moore's Law scaling beyond the 193 nm immersion resolution limit is the key question the semiconductor industry needs to answer in the near future. Process complexity will undoubtedly increase for the 14 nm node and beyond, which brings both challenges and opportunities for technology development. A vertically integrated design-technology-manufacturing co-optimization flow is desired to better address the complicated issues that new process changes bring. In recent years, smart mobile wireless devices have been the fastest growing consumer electronics market. Advanced mobile devices such as smartphones are complex systems with the overriding objective of providing the best user-experience value by harnessing all the technology innovations. The most critical system drivers are better system performance/power efficiency, cost effectiveness, and smaller form factors, which, in turn, drive the need for system design and solutions with More-than-Moore innovations. Mobile systems-on-chip (SoCs) have become the leading driver for semiconductor technology definition and manufacturing. Here we highlight how the co-optimization strategy influenced architecture, device/circuit, process technology and package, in the face of growing process cost/complexity and variability as well as design rule restrictions.

  11. Software development for the evaluation of the ergonomic compatibility on the selection of advanced manufacturing technology.

    PubMed

    Maldonado-Macías, A; Reyes, R; Guillen, L; García, J

    2012-01-01

    Advanced Manufacturing Technology (AMT) is one of the most relevant resources that companies have to achieve competitiveness and best performance. The selection of AMT is a complex problem that involves a significant amount of information and uncertainty when multiple aspects must be taken into consideration. Existing models for the selection of AMT largely lack the Human Factors and Ergonomics perspective, which can lead to a more complete and reliable decision. This paper presents the development of software that enhances the application of an Ergonomic Compatibility Evaluation Model supporting decision-making processes that take into consideration the ergonomic attributes of designs. Ergonomic Compatibility is a construct used in this model, based mainly on the concept of human-artifact compatibility in human-compatible systems. In addition, an Axiomatic Design approach using the Information Axiom was developed in a fuzzy environment to obtain the Ergonomic Incompatibility Content. The extension of this axiom to the evaluation of ergonomic compatibility requirements was the theoretical framework of this research. An incremental methodology of four stages was used to design and develop the software, which enables the comparison of AMT alternatives through the evaluation of Ergonomic Compatibility Attributes. PMID:22316972
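    The Information Axiom referenced above can be illustrated in its crisp (non-fuzzy) form; the fuzzy extension used in the paper is not reproduced here. A minimal sketch, assuming a uniform probability density over the system range:

```python
import math

def information_content(design_range, system_range):
    """Information content I = log2(1/p) from Suh's Information Axiom,
    crisp form: p is the fraction of the (uniform) system range that
    falls inside the design range. Ranges are (low, high) intervals."""
    lo = max(design_range[0], system_range[0])
    hi = min(design_range[1], system_range[1])
    common = max(0.0, hi - lo)
    if common == 0.0:
        return math.inf  # the design requirement is never satisfied
    p = common / (system_range[1] - system_range[0])
    return math.log2(1.0 / p)
```

Lower information content means a design (here, an AMT alternative) satisfies its requirement more reliably, which is how alternatives can be ranked.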

  12. Homogenization of Large-Scale Movement Models in Ecology

    USGS Publications Warehouse

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
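    As a toy illustration of why small-scale habitat variability matters at large scales: if the effective large-scale coefficient behaves like a harmonic rather than arithmetic average of the local motility (an assumption adopted here purely for illustration; the motility values are hypothetical), the two averages differ sharply in patchy habitat:

```python
def arithmetic_mean(values):
    return sum(values) / len(values)

def harmonic_mean(values):
    # Dominated by the slowest patches, unlike the arithmetic mean.
    return len(values) / sum(1.0 / v for v in values)

# Patchy habitat: fast movement in open terrain, slow in dense cover
# (hypothetical motility values, m^2/day).
motility = [100.0, 100.0, 5.0, 5.0]
```

With these values the arithmetic mean is 52.5 while the harmonic mean is below 10, so which averaging a homogenization procedure prescribes changes large-scale predictions substantially.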

  13. Dispersal Mutualism Incorporated into Large-Scale, Infrequent Disturbances

    PubMed Central

    Parker, V. Thomas

    2015-01-01

    Because of their influence on succession and other community interactions, large-scale, infrequent natural disturbances also should play a major role in mutualistic interactions. Using field data and experiments, I test whether mutualisms have been incorporated into large-scale wildfire by whether the outcomes of a mutualism depend on disturbance. In this study a seed dispersal mutualism is shown to depend on infrequent, large-scale disturbances. A dominant shrubland plant (Arctostaphylos species) produces seeds that make up a persistent soil seed bank and requires fire to germinate. In post-fire stands, I show that seedlings emerging from rodent caches dominate sites experiencing higher fire intensity. Field experiments show that rodents (Peromyscus californicus, P. boylii) do cache Arctostaphylos fruit and bury most seed caches to a sufficient depth to survive a killing heat pulse that a fire might drive into the soil. While the rodent dispersal and caching behavior itself has not changed compared to other habitats, the environmental transformation caused by wildfire converts the caching burial of seed from a dispersal process to a plant fire adaptive trait, and provides the context for stimulating subsequent life history evolution in the plant host. PMID:26151560

  14. Reliability assessment for components of large scale photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Ahadi, Amir; Ghadimi, Noradin; Mirabbasi, Davar

    2014-10-01

    Photovoltaic (PV) systems have shifted significantly from independent power generation systems to large-scale grid-connected generation systems in recent years. The power output of PV systems is affected by the reliability of the various components in the system. This study proposes an analytical approach to evaluate the reliability of large-scale, grid-connected PV systems. The fault tree method with an exponential probability distribution function is used to analyze the components of large-scale PV systems. The system is considered under various sequential and parallel fault combinations in order to find all realistic ways in which the top or undesired events can occur. Additionally, the analysis can identify areas on which planned maintenance should focus. By monitoring the critical components of a PV system, it is possible not only to improve the reliability of the system, but also to optimize the maintenance costs. The latter is achieved by informing the operators about the status of system components. This approach can be used to ensure secure operation of the system through its flexibility in monitoring system applications. The implementation demonstrates that the proposed method is effective and efficient and can conveniently incorporate more system maintenance plans and diagnostic strategies.
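    For simple series/parallel (OR/AND) fault combinations, the fault-tree logic with exponentially distributed failure times reduces to products of component reliabilities. A minimal sketch; the component failure rates below are hypothetical, not taken from the study:

```python
import math

def reliability(failure_rate, t):
    """R(t) = exp(-lambda * t): exponential (constant-hazard) component model."""
    return math.exp(-failure_rate * t)

def series(rs):
    """Series system: fails if ANY component fails (OR gate on failures)."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(rs):
    """Redundant system: fails only if ALL components fail (AND gate)."""
    q = 1.0
    for r in rs:
        q *= 1.0 - r
    return 1.0 - q

# Hypothetical per-hour failure rates: one inverter in series with two
# redundant PV strings, evaluated over one year of operation.
year = 8760.0
inverter = reliability(2e-5, year)
strings = parallel([reliability(1e-5, year), reliability(1e-5, year)])
system = series([inverter, strings])
```

The series gate makes the inverter the dominant contributor here, which is the kind of insight the study uses to prioritize monitoring and maintenance.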

  15. Robust regression for large-scale neuroimaging studies.

    PubMed

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally, we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. PMID:25731989
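    The benefit of robust regression over ordinary least squares can be sketched with a toy Huber-type iteratively reweighted least-squares fit. This is an illustrative implementation under standard textbook choices (Huber tuning constant k = 1.345, median-absolute-residual scale), not the procedure of the paper:

```python
def wls_line(x, y, w):
    """Weighted least-squares fit of y = a + b*x; returns (a, b)."""
    sw = sum(w)
    xm = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ym = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - xm) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - xm) * (yi - ym) for wi, xi, yi in zip(w, x, y))
    b = sxy / sxx
    return ym - b * xm, b

def huber_line(x, y, k=1.345, iters=20):
    """Robust line fit: iteratively reweighted least squares with Huber
    weights w = min(1, k*s/|r|), scale s from the median absolute residual."""
    w = [1.0] * len(x)
    a = b = 0.0
    for _ in range(iters):
        a, b = wls_line(x, y, w)
        resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
        s = sorted(abs(r) for r in resid)[len(resid) // 2] / 0.6745 or 1.0
        w = [1.0 if abs(r) <= k * s else k * s / abs(r) for r in resid]
    return a, b
```

With one gross outlier in ten points, the ordinary fit's slope is pulled far from the truth while the reweighted fit stays close, which mirrors the false-positive control the paper reports for large cohorts with artifacts.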

  16. Large-scale flow experiments for managing river systems

    USGS Publications Warehouse

    Konrad, C.P.; Olden, J.D.; Lytle, D.A.; Melis, T.S.; Schmidt, J.C.; Bray, E.N.; Freeman, Mary C.; Gido, K.B.; Hemphill, N.P.; Kennard, M.J.; McMullen, L.E.; Mims, M.C.; Pyron, M.; Robinson, C.T.; Williams, J.G.

    2011-01-01

    Experimental manipulations of streamflow have been used globally in recent decades to mitigate the impacts of dam operations on river systems. Rivers are challenging subjects for experimentation, because they are open systems that cannot be isolated from their social context. We identify principles to address the challenges of conducting effective large-scale flow experiments. Flow experiments have both scientific and social value when they help to resolve specific questions about the ecological action of flow with a clear nexus to water policies and decisions. Water managers must integrate new information into operating policies for large-scale experiments to be effective. Modeling and monitoring can be integrated with experiments to analyze long-term ecological responses. Experimental design should include spatially extensive observations and well-defined, repeated treatments. Large-scale flow manipulations are only a part of dam operations that affect river systems. Scientists can ensure that experimental manipulations continue to be a valuable approach for the scientifically based management of river systems. © 2011 by American Institute of Biological Sciences. All rights reserved.

  18. Large-scale flow generation by inhomogeneous helicity

    NASA Astrophysics Data System (ADS)

    Yokoi, N.; Brandenburg, A.

    2016-03-01

    The effect of kinetic helicity (velocity-vorticity correlation) on turbulent momentum transport is investigated. The turbulent kinetic helicity (pseudoscalar) enters the Reynolds stress (mirror-symmetric tensor) expression in the form of a helicity gradient as the coupling coefficient for the mean vorticity and/or the angular velocity (axial vector), which suggests the possibility of mean-flow generation in the presence of inhomogeneous helicity. This inhomogeneous helicity effect, which was previously confirmed at the level of a turbulence- or closure-model simulation, is examined with the aid of direct numerical simulations of rotating turbulence with nonuniform helicity sustained by an external forcing. The numerical simulations show that the spatial distribution of the Reynolds stress is in agreement with the helicity-related term coupled with the angular velocity, and that a large-scale flow is generated in the direction of angular velocity. Such a large-scale flow is not induced in the case of homogeneous turbulent helicity. This result confirms the validity of the inhomogeneous helicity effect in large-scale flow generation and suggests that a vortex dynamo is possible even in incompressible turbulence where there is no baroclinicity effect.

  20. Large-scale quantification of CVD graphene surface coverage.

    PubMed

    Ambrosi, Adriano; Bonanni, Alessandra; Sofer, Zdeněk; Pumera, Martin

    2013-03-21

    The extraordinary properties demonstrated for graphene and graphene-related materials can be fully exploited when a large-scale fabrication procedure is made available. Chemical vapor deposition (CVD) of graphene on Cu and Ni substrates is one of the most promising procedures to synthesize large-area and good quality graphene films. Parallel to the fabrication process, a large-scale quality monitoring technique is equally crucial. We demonstrate here a rapid and simple methodology that is able to probe the effectiveness of the growth process over a large substrate area for both Ni and Cu substrates. This method is based on inherent electrochemical signals generated by the underlying metal catalysts when fractures or discontinuities of the graphene film are present. The method can be applied immediately after the CVD growth process without the need for any graphene transfer step and represents a powerful quality monitoring technique for the assessment of large-scale fabrication of graphene by the CVD process. PMID:23396554

  1. Photorealistic large-scale urban city model reconstruction.

    PubMed

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments remains time-consuming, manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite). PMID:19423889

  2. Large scale structure in universes dominated by cold dark matter

    NASA Technical Reports Server (NTRS)

    Bond, J. Richard

    1986-01-01

    The theory of Gaussian random density field peaks is applied to a numerical study of the large-scale structure developing from adiabatic fluctuations in models of biased galaxy formation in universes with Omega = 1, h = 0.5 dominated by cold dark matter (CDM). The angular anisotropy of the cross-correlation function demonstrates that the far-field regions of cluster-scale peaks are asymmetric, as recent observations indicate. These regions will generate pancakes or filaments upon collapse. One-dimensional singularities in the large-scale bulk flow should arise in these CDM models, appearing as pancakes in position space. They are too rare to explain the CfA bubble walls, but pancakes that are just turning around now are sufficiently abundant and would appear to be thin walls normal to the line of sight in redshift space. Large scale streaming velocities are significantly smaller than recent observations indicate. To explain the reported 700 km/s coherent motions, mass must be significantly more clustered than galaxies with a biasing factor of less than 0.4 and a nonlinear redshift at cluster scales greater than one for both massive neutrino and cold models.

  3. Line segment extraction for large scale unorganized point clouds

    NASA Astrophysics Data System (ADS)

    Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan

    2015-04-01

    Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.

  4. Geospatial Optimization of Siting Large-Scale Solar Projects

    SciTech Connect

    Macknick, J.; Quinby, T.; Caulfield, E.; Gerritsen, M.; Diffendorfer, J.; Haines, S.

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
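    The multi-criteria evaluation underlying such a siting tool can be sketched as a weighted sum of normalized criteria; the factor names, site scores, and stakeholder weightings below are entirely hypothetical:

```python
def site_score(criteria, weights):
    """Weighted sum of normalized criteria (each scored 0-1, higher = better)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[k] * criteria[k] for k in weights)

# Hypothetical normalized scores for two candidate sites
# (habitat_impact is scored so that higher = less impact).
site_a = {"solar_resource": 0.95, "grid_proximity": 0.5, "habitat_impact": 0.3}
site_b = {"solar_resource": 0.6, "grid_proximity": 0.8, "habitat_impact": 0.9}
# Two stakeholder weightings: developer-driven vs environment-driven.
w_dev = {"solar_resource": 0.6, "grid_proximity": 0.3, "habitat_impact": 0.1}
w_env = {"solar_resource": 0.2, "grid_proximity": 0.2, "habitat_impact": 0.6}
```

Under the developer weighting site A ranks first, while under the environmental weighting site B does, illustrating why a user-driven, transparent weighting scheme matters when stakeholder priorities differ.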

  5. New Large-scale Control Strategies for Turbulent Boundary Layers

    NASA Astrophysics Data System (ADS)

    Schoppa, Wade; Hussain, Fazle

    1997-11-01

    Using direct numerical simulations of turbulent channel flow, we present robust strategies for drag reduction by prevention of streamwise vortex formation near the wall. Instability of lifted, vortex-free low-speed streaks is shown to generate new streamwise vortices, which dominate near-wall turbulence phenomena. The newly-found instability mechanism initiates streak waviness in the (x,z) plane which leads to ωx sheets. Streak waviness induces positive ∂u/∂x (i.e. positive VISA), which causes these sheets to then collapse via stretching (rather than roll up) into streamwise vortices. Significantly, the 3D features of the (instantaneous) instability-generated vortices agree well with the coherent structures educed (i.e. ensemble-averaged) from fully turbulent flow, suggesting the prevalence of this instability mechanism. The new control via large-scale streak manipulation exploits this crucial role of streak instability in vortex generation. An x-independent forcing with a z wavelength of 4 streak spacings, with an amplitude of only 5% of the centerline velocity, produces a significant sustained drag reduction: 20% for imposed counterrotating large-scale swirls and 50% for colliding spanwise wall jet-like forcing. These results suggest promising drag reduction strategies, involving large-scale (hence more durable) actuation and requiring no wall sensors or feedback logic.

  6. Lateral stirring of large-scale tracer fields by altimetry

    NASA Astrophysics Data System (ADS)

    Dencausse, Guillaume; Morrow, Rosemary; Rogé, Marine; Fleury, Sara

    2014-01-01

    Ocean surface fronts and filaments have a strong impact on the global ocean circulation and biogeochemistry. Surface Lagrangian advection with time-evolving altimetric geostrophic velocities can be used to simulate the submesoscale front and filament structures in large-scale tracer fields. We study this technique in the Southern Ocean region south of Tasmania, a domain marked by strong meso- to submesoscale features such as the fronts of the Antarctic Circumpolar Current (ACC). Starting with large-scale surface tracer fields that we stir with altimetric velocities, we determine 'advected' fields which compare well with high-resolution in situ or satellite tracer data. We find that fine scales are best represented in a statistical sense after an optimal advection time of ~2 weeks, with enhanced signatures of the ACC fronts and better spectral energy. The technique works best in moderate to high EKE regions where lateral advection dominates. This technique may be used to infer the distribution of unresolved small scales in any physical or biogeochemical surface tracer that is dominated by lateral advection. Submesoscale dynamics also impact the subsurface of the ocean, and the Lagrangian advection at depth shows promising results. Finally, we show that climatological tracer fields computed from the advected large-scale fields display improved fine-scale mean features, such as the ACC fronts, which can be useful in the context of ocean modelling.
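    Lagrangian advection of tracer particles through a given velocity field can be sketched with a standard Runge-Kutta integrator. This is a generic illustration with a steady analytic field, not the authors' altimetry-driven implementation:

```python
def advect(pos, velocity, dt, steps):
    """Advect one particle through a steady 2-D velocity field
    velocity(x, y) -> (u, v) using classical 4th-order Runge-Kutta."""
    x, y = pos
    for _ in range(steps):
        u1, v1 = velocity(x, y)
        u2, v2 = velocity(x + 0.5 * dt * u1, y + 0.5 * dt * v1)
        u3, v3 = velocity(x + 0.5 * dt * u2, y + 0.5 * dt * v2)
        u4, v4 = velocity(x + dt * u3, y + dt * v3)
        x += dt * (u1 + 2 * u2 + 2 * u3 + u4) / 6.0
        y += dt * (v1 + 2 * v2 + 2 * v3 + v4) / 6.0
    return x, y
```

Seeding many such particles on a coarse tracer field and advecting them for the ~2-week optimal time is the essence of the stirring technique described above (with the altimetric velocity field interpolated in space and time rather than analytic).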

  7. Large scale anisotropy of UHECRs for the Telescope Array

    SciTech Connect

    Kido, E.

    2011-09-22

    The origin of Ultra High Energy Cosmic Rays (UHECRs) is one of the most interesting questions in astroparticle physics. Despite the efforts of previous measurements, there is no consensus yet on either the origin or the mechanism of UHECR generation and propagation. In this context, the Telescope Array (TA) experiment is expected to play an important role as the largest detector in the northern hemisphere, consisting of an array of surface particle detectors (SDs), fluorescence detectors (FDs) and other important calibration devices. We searched for large-scale anisotropy using the SD data of TA. UHECRs are expected to be restricted to the GZK horizon when the composition of UHECRs is protons, so the observed arrival directions are expected to exhibit local large-scale anisotropy if UHECR sources are astrophysical objects. We used the SD data set from 11 May 2008 to 7 September 2010 to search for large-scale anisotropy. The discrimination power between LSS and isotropy is not yet sufficient, but the TA statistics are expected to discriminate between them at about the 95% confidence level on average in the near future.

  8. Large-scale functional connectivity networks in the rodent brain.

    PubMed

    Gozzi, Alessandro; Schwarz, Adam J

    2016-02-15

    Resting-state functional Magnetic Resonance Imaging (rsfMRI) of the human brain has revealed multiple large-scale neural networks within a hierarchical and complex structure of coordinated functional activity. These distributed neuroanatomical systems provide a sensitive window on brain function and its disruption in a variety of neuropathological conditions. The study of macroscale intrinsic connectivity networks in preclinical species, where genetic and environmental conditions can be controlled and manipulated with high specificity, offers the opportunity to elucidate the biological determinants of these alterations. While rsfMRI methods are now widely used in human connectivity research, these approaches have only relatively recently been back-translated into laboratory animals. Here we review recent progress in the study of functional connectivity in rodent species, emphasising the ability of this approach to resolve large-scale brain networks that recapitulate neuroanatomical features of known functional systems in the human brain. These include, but are not limited to, a distributed set of regions identified in rats and mice that may represent a putative evolutionary precursor of the human default mode network (DMN). The impact and control of potential experimental and methodological confounds are also critically discussed. Finally, we highlight the enormous potential and some initial application of connectivity mapping in transgenic models as a tool to investigate the neuropathological underpinnings of the large-scale connectional alterations associated with human neuropsychiatric and neurological conditions. We conclude by discussing the translational potential of these methods in basic and applied neuroscience. PMID:26706448

  9. Multiresolution comparison of precipitation datasets for large-scale models

    NASA Astrophysics Data System (ADS)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving the large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparing gridded precipitation products against one another, together with ground observations, provides another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin-plate spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria at various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA offers appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
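Two of the simplest verification criteria used when comparing a gridded product against gauges are bias and root-mean-square error. A minimal sketch, assuming co-located product/gauge value pairs; the function name and numbers are hypothetical, and the study's actual criteria span multiple temporal and spatial scales.

```python
import numpy as np

def verification_scores(gridded, gauges):
    """Bias and RMSE of a gridded precipitation product against
    co-located gauge observations (illustrative sketch only)."""
    g = np.asarray(gridded, dtype=float)
    o = np.asarray(gauges, dtype=float)
    err = g - o                                   # product minus observation
    return {"bias": float(err.mean()),
            "rmse": float(np.sqrt((err ** 2).mean()))}

# Hypothetical co-located values (mm/day) at three gauge sites
scores = verification_scores([2.0, 1.0, 0.5], [1.5, 1.0, 1.0])
```

Computed over different averaging windows (daily, monthly, annual), the same two numbers support the kind of multiresolution comparison described above.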

  10. Upscaling of elastic properties for large scale geomechanical simulations

    NASA Astrophysics Data System (ADS)

    Chalon, F.; Mainguy, M.; Longuemare, P.; Lemonnier, P.

    2004-09-01

    Large-scale geomechanical simulations are increasingly used to model the compaction of stress-dependent reservoirs, predict the long-term integrity of underground radioactive waste disposals, and analyse the viability of hot-dry-rock geothermal sites. These large-scale simulations require the definition of homogeneous mechanical properties for each geomechanical cell, whereas the rock properties are expected to vary at a smaller scale. This paper therefore proposes a new methodology that makes it possible to define the equivalent mechanical properties of the geomechanical cells using the fine-scale information given in the geological model. The methodology is implemented on a synthetic reservoir case, and two upscaling procedures providing the effective elastic properties of Hooke's law are tested. The first upscaling procedure is an analytical method for a perfectly stratified rock mass, whereas the second computes lower and upper bounds of the equivalent properties with no assumption on the small-scale heterogeneity distribution. Both procedures are applied to one geomechanical cell extracted from the reservoir structure. The results show that the analytical and numerical upscaling procedures provide accurate estimates of the effective parameters. Furthermore, a large-scale simulation using the homogenized properties of each geomechanical cell calculated with the analytical method demonstrates that the overall behaviour of the reservoir structure is well reproduced for two different loading cases.
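The classical bounds on an effective elastic modulus that require no assumption about the heterogeneity geometry are the Voigt (arithmetic, iso-strain) upper bound and the Reuss (harmonic, iso-stress) lower bound. The sketch below is in that spirit only; the paper's numerical procedure computes its bounds from the fine-scale geological model, and the function and values here are hypothetical.

```python
import numpy as np

def voigt_reuss_bounds(moduli, fractions):
    """Voigt (upper) and Reuss (lower) bounds on the effective elastic
    modulus of a mixture, given constituent moduli and volume fractions."""
    m = np.asarray(moduli, dtype=float)
    f = np.asarray(fractions, dtype=float)
    f = f / f.sum()                         # normalise volume fractions
    upper = float(np.sum(f * m))            # Voigt: arithmetic average
    lower = float(1.0 / np.sum(f / m))      # Reuss: harmonic average
    return lower, upper

# Hypothetical two-layer cell: 60% stiff rock (30 GPa), 40% soft rock (10 GPa)
lo, hi = voigt_reuss_bounds([30.0, 10.0], [0.6, 0.4])   # roughly (16.7, 22.0)
```

Any admissible homogenized modulus for the cell must lie between the two returned values, which is what makes such bounds useful sanity checks on an upscaling procedure.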

  11. Robust large-scale parallel nonlinear solvers for simulations.

    SciTech Connect

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques that use models other than Newton's: a lower-order model, Broyden's method, and a higher-order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian, or that have an inaccurate one, to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, the Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that Broyden's method converged in some cases where the Jacobian was inaccurate or could not be computed and Newton's method failed to converge. We identify conditions under which Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods that compute a step from a local quadratic model rather than a linear one. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write
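The core of Broyden's method described above is a rank-one secant update of an approximate Jacobian, which lets the solver avoid re-evaluating (or ever computing) the true Jacobian. A minimal dense sketch, assuming a small test system; the report's limited-memory variant instead stores only the update vectors, and the circle/line example here is hypothetical.

```python
import numpy as np

def fd_jacobian(F, x, eps=1e-7):
    """Forward-difference Jacobian, used only to seed the method."""
    f0 = F(x)
    J = np.empty((f0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (F(xp) - f0) / eps
    return J

def broyden_solve(F, x0, tol=1e-10, max_iter=50):
    """Solve F(x) = 0 with Broyden's 'good' rank-one update:
    one initial Jacobian estimate, then secant updates only."""
    x = np.asarray(x0, dtype=float)
    B = fd_jacobian(F, x)
    f = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(f) < tol:
            return x
        s = np.linalg.solve(B, -f)            # quasi-Newton step
        x = x + s
        f_new = F(x)
        y = f_new - f
        B += np.outer(y - B @ s, s) / (s @ s)  # secant condition B_new s = y
        f = f_new
    return x

# Hypothetical test system: circle x^2 + y^2 = 4 intersected with x + y = 1
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] + v[1] - 1.0])
root = broyden_solve(F, np.array([2.0, 0.0]))
```

After the first step no further Jacobian evaluations occur, which is exactly the property that helps codes with expensive or unavailable Jacobians.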

  12. Large-scale magnetic topologies of early M dwarfs

    NASA Astrophysics Data System (ADS)

    Donati, J.-F.; Morin, J.; Petit, P.; Delfosse, X.; Forveille, T.; Aurière, M.; Cabanac, R.; Dintrans, B.; Fares, R.; Gastine, T.; Jardine, M. M.; Lignières, F.; Paletou, F.; Ramirez Velez, J. C.; Théado, S.

    2008-10-01

    We present here additional results of a spectropolarimetric survey of a small sample of stars ranging from spectral type M0 to M8, aimed at investigating observationally how dynamo processes operate in stars on both sides of the full-convection threshold (spectral type M4). The present paper focuses on early M stars (M0-M3), i.e. above the full-convection threshold. Applying tomographic imaging techniques to time series of rotationally modulated circularly polarized profiles collected with the NARVAL spectropolarimeter, we determine the rotation periods and reconstruct the large-scale magnetic topologies of six early M dwarfs. We find that early M stars preferentially host large-scale fields with dominantly toroidal and non-axisymmetric poloidal configurations, along with significant differential rotation (and long-term variability); only the lowest-mass star of our subsample is found to host an almost fully poloidal, mainly axisymmetric large-scale field resembling those found in mid-M dwarfs. This abrupt change in the large-scale magnetic topologies of M dwarfs (occurring at spectral type M3) has no related signature in X-ray luminosities (which measure the total amount of magnetic flux); it thus suggests that the underlying dynamo processes become more efficient at producing large-scale fields (despite producing the same flux) at spectral types later than M3. We suspect that this change relates to the rapid decrease in the radiative cores of low-mass stars and to the simultaneous sharp increase of the convective turnover times (with decreasing stellar mass) that models predict to occur at M3; it may also be (at least partly) responsible for the reduced magnetic braking reported for fully convective stars. Based on observations obtained at the Télescope Bernard Lyot (TBL), operated by the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France.

  13. Evaluating the use of HILIC in large-scale, multi dimensional proteomics: Horses for courses?

    PubMed Central

    Bensaddek, Dalila; Nicolas, Armel; Lamond, Angus I.

    2015-01-01

    Despite many recent advances in instrumentation, the sheer complexity of biological samples remains a major challenge in large-scale proteomics experiments, reflecting both the large number of protein isoforms and the wide dynamic range of their expression levels. However, while the dynamic range of expression levels for different components of the proteome is estimated to be ∼10⁷–10⁸, the equivalent dynamic range of LC–MS is currently limited to ∼10⁶. Sample pre-fractionation has therefore become routinely used in large-scale proteomics to reduce sample complexity during MS analysis and thus alleviate the problem of ion suppression and undersampling. There is currently a wide range of chromatographic techniques that can be applied as a first dimension separation. Here, we systematically evaluated the use of hydrophilic interaction liquid chromatography (HILIC), in comparison with hSAX, as a first dimension for peptide fractionation in a bottom-up proteomics workflow. The data indicate that in addition to its role as a useful pre-enrichment method for PTM analysis, HILIC can provide a robust, orthogonal and high-resolution method for increasing the depth of proteome coverage in large-scale proteomics experiments. The data also indicate that the choice of using either HILIC, hSAX, or other methods, is best made taking into account the specific types of biological analyses being performed. PMID:26869852

  14. Multiple sequence alignment: a major challenge to large-scale phylogenetics

    PubMed Central

    Liu, Kevin; Linder, C. Randal; Warnow, Tandy

    2011-01-01

    Over the last decade, dramatic advances have been made in developing methods for large-scale phylogeny estimation, so that it is now feasible for investigators with moderate computational resources to obtain reasonable solutions to maximum likelihood and maximum parsimony, even for datasets with a few thousand sequences. There has also been progress on developing methods for multiple sequence alignment, so that greater alignment accuracy (and subsequent improvement in phylogenetic accuracy) is now possible through automated methods. However, these methods have not been tested under conditions that reflect properties of datasets confronted by large-scale phylogenetic estimation projects. In this paper we report on a study that compares several alignment methods on a benchmark collection of nucleotide sequence datasets of up to 78,132 sequences. We show that as the number of sequences increases, the number of alignment methods that can analyze the datasets decreases. Furthermore, the most accurate alignment methods are unable to analyze the very largest datasets we studied, so that only moderately accurate alignment methods can be used on the largest datasets. As a result, alignments computed for large datasets have relatively large error rates, and maximum likelihood phylogenies computed on these alignments also have high error rates. Therefore, the estimation of highly accurate multiple sequence alignments is a major challenge for Tree of Life projects, and more generally for large-scale systematics studies. PMID:21113338

  15. Can International Large-Scale Assessments Inform a Global Learning Goal? Insights from the Learning Metrics Task Force

    ERIC Educational Resources Information Center

    Winthrop, Rebecca; Simons, Kate Anderson

    2013-01-01

    In recent years, the global community has developed a range of initiatives to inform the post-2015 global development agenda. In the education community, International Large-Scale Assessments (ILSAs) have an important role to play in advancing a global shift in focus to access plus learning. However, there are a number of other assessment tools…

  16. A Survey on Routing Protocols for Large-Scale Wireless Sensor Networks

    PubMed Central

    Li, Changle; Zhang, Hanxiao; Hao, Binbin; Li, Jiandong

    2011-01-01

    With advances in micro-electronics, wireless sensor devices have become much smaller and more integrated, and large-scale wireless sensor networks (WSNs) based on the cooperation of a large number of nodes have become a hot topic. “Large-scale” mainly means a large area or a high node density. Accordingly, routing protocols must scale well as network extent and node density increase. A sensor node is normally energy-limited and cannot be recharged, so its energy consumption has a significant effect on the scalability of the protocol. To the best of our knowledge, the mainstream approach to the energy problem in large-scale WSNs is currently hierarchical routing. In a hierarchical routing protocol, all the nodes are divided into groups with different assignment levels: nodes in the high level are responsible for data aggregation and management, and low-level nodes sense their surroundings and collect information. Hierarchical routing protocols have been shown to be more energy-efficient than flat ones, in which all nodes play the same role, especially in terms of data aggregation and the flooding of control packets. Focusing on the hierarchical structure, in this paper we provide an insight into routing protocols designed specifically for large-scale WSNs. According to their objectives, the protocols are classified by criteria such as control-overhead reduction, energy-consumption mitigation and energy balance. To give a comprehensive understanding of each protocol, we highlight their innovative ideas, describe the underlying principles in detail and analyze their advantages and disadvantages. Moreover, a comparison of the routing protocols is conducted to demonstrate the differences between them in terms of message complexity, memory requirements, localization, data aggregation, clustering manner
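The cluster-head rotation at the heart of many hierarchical WSN protocols can be illustrated with the classic LEACH election rule, in which each eligible node volunteers with a probability chosen so the energy-hungry aggregation role rotates around the network. This is a sketch of that one well-known scheme, not of any particular protocol in the survey; the function name and node model are hypothetical.

```python
import random

def elect_cluster_heads(nodes, p=0.1, round_no=0, rng=None):
    """LEACH-style randomized cluster-head election (illustrative).

    `nodes` maps node id -> round in which it last served as a cluster
    head (None if never). Nodes that served within the last 1/p rounds
    are ineligible, which rotates the role across the network."""
    rng = rng or random.Random()
    period = int(round(1.0 / p))                  # rotation period in rounds
    heads = []
    for node, last in nodes.items():
        if last is not None and round_no - last < period:
            continue                              # served too recently
        threshold = p / (1.0 - p * (round_no % period))
        if rng.random() < threshold:
            heads.append(node)
            nodes[node] = round_no                # record this service round
    return heads

# 100 nodes, none has served yet; expect roughly p * N heads per round
nodes = {i: None for i in range(100)}
heads = elect_cluster_heads(nodes, p=0.1, round_no=0, rng=random.Random(42))
```

Members would then join the nearest elected head, which aggregates their readings before forwarding, trading a little coordination overhead for far fewer long-range transmissions.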

  17. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect

    Ghattas, Omar

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  18. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. For many practical problems, however, because the observed data sets are large and the model parameters numerous, conventional inverse modeling methods can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for large-scale inverse modeling. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, so that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system anew for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed for the first damping parameter and recycle it for all subsequent damping parameters. These computational techniques significantly improve the efficiency of our new inverse modeling algorithm. We apply the new method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment.
Therefore, our new inverse modeling method is a
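The standard Levenberg-Marquardt iteration that the abstract builds on solves the damped normal equations (JᵀJ + λI)δ = −Jᵀr at each step, adapting λ as steps are accepted or rejected. A minimal dense sketch of that baseline; the paper's actual contribution, projecting this system onto a recycled Krylov subspace, is not reproduced here, and the exponential curve-fit example is hypothetical.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-2, tol=1e-10, max_iter=100):
    """Baseline dense Levenberg-Marquardt: solve the damped normal
    equations directly, halving lam on accepted steps and doubling it
    on rejected ones (illustrative sketch only)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = jacobian(x)
        A = J.T @ J + lam * np.eye(x.size)        # damped normal equations
        delta = np.linalg.solve(A, -J.T @ r)
        x_new = x + delta
        if np.linalg.norm(residual(x_new)) < np.linalg.norm(r):
            x, lam = x_new, lam * 0.5             # accept step, relax damping
        else:
            lam *= 2.0                            # reject step, damp harder
        if np.linalg.norm(delta) < tol:
            return x
    return x

# Hypothetical example: fit y = a * exp(b * t) to clean synthetic data
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
p = levenberg_marquardt(res, jac, np.array([1.0, 1.0]))
```

For large problems the dense solve above is the bottleneck, which is precisely what motivates replacing it with a projected, recycled Krylov solve.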

  19. An Assessment of Critical Dimension Small Angle X-ray Scattering Metrology for Advanced Semiconductor Manufacturing

    NASA Astrophysics Data System (ADS)

    Settens, Charles M.

    The simultaneous migration of planar transistors to FinFET architectures, the introduction of a plurality of materials to ensure suitable electrical characteristics, and the establishment of reliable multiple-patterning lithography schemes to pattern sub-10 nm feature sizes impose formidable challenges on current in-line dimensional metrologies. Because the shape of a FinFET channel cross-section directly influences the electrical characteristics, the evaluation of 3D device structures requires measurement of parameters beyond the traditional critical dimension (CD), including sidewall angles, top corner rounding and footing, roughness, recesses and undercuts at single-nanometer dimensions; metrologies therefore require sub-nm, and ultimately atomic-level, measurement uncertainty. Synchrotron critical dimension small angle X-ray scattering (CD-SAXS) has unique capabilities to non-destructively monitor the cross-section shape of surface structures with single-nanometer uncertainty and can perform overlay metrology with sub-nm uncertainty. In this dissertation, we perform a systematic experimental investigation using CD-SAXS metrology on a hierarchy of semiconductor 3D device architectures, including high-aspect-ratio contact holes, H2-annealed Si fins, and a series of grating-type samples at multiple points along a FinFET fabrication process, increasing in structural intricacy and ending with a fully fabricated FinFET. Comparative studies between CD-SAXS metrology and other relevant semiconductor dimensional metrologies, particularly CD-SEM, CD-AFM and TEM, are used to determine the physical limits of the CD-SAXS approach for advanced semiconductor samples. CD-SAXS experimental tradeoffs, advice for model-dependent analysis and thoughts on compatibility with a semiconductor manufacturing environment are discussed.

  20. Manufacturing technologies

    NASA Astrophysics Data System (ADS)

    The Manufacturing Technologies Center is at the core of Sandia National Laboratories' advanced manufacturing effort which spans the entire product realization process. The center's capabilities in product and process development are summarized in the following disciplines: (1) mechanical - rapid prototyping, manufacturing engineering, machining and computer-aided manufacturing, measurement and calibration, and mechanical and electronic manufacturing liaison; (2) electronics - advanced packaging for microelectronics, printed circuits, and electronic fabrication; and (3) materials - ceramics, glass, thin films, vacuum technology, brazing, polymers, adhesives, composite materials, and process analysis.

  1. Applications of Data Assimilation to Analysis of the Ocean on Large Scales

    NASA Technical Reports Server (NTRS)

    Miller, Robert N.; Busalacchi, Antonio J.; Hackert, Eric C.

    1997-01-01

    It is commonplace to begin talks on this topic by noting that oceanographic data are too scarce and sparse to provide complete initial and boundary conditions for large-scale ocean models. Even considering the availability of remotely-sensed data such as radar altimetry from the TOPEX and ERS-1 satellites, a glance at a map of available subsurface data should convince most observers that this is still the case. Data are still too sparse for comprehensive treatment of interannual to interdecadal climate change through the use of models, since the new data sets have not been around for very long. In view of the dearth of data, we must note that the overall picture is changing rapidly. Recently, there have been a number of large scale ocean analysis and prediction efforts, some of which now run on an operational or at least quasi-operational basis, most notably the model based analyses of the tropical oceans. These programs are modeled on numerical weather prediction. Aside from the success of the global tide models, assimilation of data in the tropics, in support of prediction and analysis of seasonal to interannual climate change, is probably the area of large scale ocean modeling and data assimilation in which the most progress has been made. Climate change is a problem which is particularly suited to advanced data assimilation methods. Linear models are useful, and the linear theory can be exploited. For the most part, the data are sufficiently sparse that implementation of advanced methods is worthwhile. As an example of a large scale data assimilation experiment with a recent extensive data set, we present results of a tropical ocean experiment in which the Kalman filter was used to assimilate three years of altimetric data from Geosat into a coarsely resolved linearized long wave shallow water model. Since nonlinear processes dominate the local dynamic signal outside the tropics, subsurface dynamical quantities cannot be reliably inferred from surface height
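The Kalman filter analysis step used in such assimilation blends a model forecast with observations, weighting each by its error covariance. Below is the textbook form of a single update, not the specific Geosat/shallow-water configuration of the study; the toy numbers are hypothetical.

```python
import numpy as np

def kalman_update(x, P, y, H, R):
    """One Kalman filter analysis step: combine forecast state x
    (covariance P) with observations y through observation operator H
    (observation error covariance R)."""
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_a = x + K @ (y - H @ x)             # analysis state
    P_a = (np.eye(len(x)) - K @ H) @ P    # analysis covariance
    return x_a, P_a

# Toy example: 2-state system, only the first component is observed
x = np.array([1.0, 0.0])      # forecast
P = np.eye(2)                 # forecast error covariance
H = np.array([[1.0, 0.0]])    # observe component 0 (e.g. surface height)
R = np.array([[0.25]])        # observation error variance
y = np.array([2.0])           # observation
x_a, P_a = kalman_update(x, P, y, H, R)
```

Because only one component is observed, the update corrects the observed component toward the data and leaves the unobserved one unchanged here; in the real system the forecast covariance P couples surface observations to subsurface state.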

  2. Constraints on large-scale dark acoustic oscillations from cosmology

    NASA Astrophysics Data System (ADS)

    Cyr-Racine, Francis-Yan; de Putter, Roland; Raccanelli, Alvise; Sigurdson, Kris

    2014-03-01

    If all or a fraction of the dark matter (DM) were coupled to a bath of dark radiation (DR) in the early Universe, we expect the combined DM-DR system to give rise to acoustic oscillations of the dark matter until it decouples from the DR. Much like the standard baryon acoustic oscillations, these dark acoustic oscillations (DAO) imprint a characteristic scale, the sound horizon of dark matter, on the matter power spectrum. We compute in detail how the microphysics of the DM-DR interaction affects the clustering of matter in the Universe and show that the DAO physics also gives rise to unique signatures in the temperature and polarization spectra of the cosmic microwave background (CMB). We use cosmological data from the CMB, baryon acoustic oscillations, and large-scale structure to constrain the possible fraction of interacting DM as well as the strength of its interaction with DR. Like nearly all knowledge we have gleaned about DM since inferring its existence, this constraint rests on the betrayal by gravity of the location of otherwise invisible DM. Although our results can be straightforwardly applied to a broad class of models that couple dark matter particles to various light relativistic species, in order to make quantitative predictions, we model the interacting component as dark atoms coupled to a bath of dark photons. We find that linear cosmological data and CMB lensing put strong constraints on the existence of DAO features in the CMB and the large-scale structure of the Universe. Interestingly, we find that at most ~5% of all DM can be very strongly interacting with DR. We show that our results are surprisingly constraining for the recently proposed double-disk DM model, a novel example of how large-scale precision cosmological data can be used to constrain galactic physics and subgalactic structure.

  3. Solving large scale structure in ten easy steps with COLA

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J.

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small-scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10⁹ Msolar/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10¹¹ Msolar/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.

  4. Large-Scale Hybrid Motor Testing. Chapter 10

    NASA Technical Reports Server (NTRS)

    Story, George

    2006-01-01

    Hybrid rocket motors can be successfully demonstrated at a small scale virtually anywhere. Many suitcase-sized portable test stands have been assembled to demonstrate hybrids and show audiences the safety of hybrid rockets. These small show motors and small laboratory-scale motors can give comparative burn-rate data for development of different fuel/oxidizer combinations; however, the questions always asked when hybrids are mentioned for large-scale applications are: how do they scale, and has this been shown in a large motor? To answer those questions, large-scale motor testing is required to verify the hybrid motor at its true size. The necessity of conducting large-scale hybrid rocket motor tests to validate the burn rate from small motors to application size has been documented in several places. Comparison of small-scale hybrid data to larger-scale data indicates that the fuel burn rate goes down with increasing port size, even at the same oxidizer flux. This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB-based fuels. While the reason this occurs would make a great paper, study or thesis, it is not thoroughly understood at this time. Potential causes include the fact that, since hybrid combustion is boundary-layer driven, larger port sizes reduce the interaction (radiation, mixing and heat transfer) from the core region of the port. This chapter focuses on some of the large, prototype-sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn-rate, stability and scaling concepts that went into the development of those large motors.

  5. LARGE-SCALE CO2 TRANSPORTATION AND DEEP OCEAN SEQUESTRATION

    SciTech Connect

    Hamid Sarv

    1999-03-01

    The technical and economic feasibility of large-scale CO{sub 2} transportation and ocean sequestration at depths of 3000 meters or greater was investigated. Two options were examined for transporting and disposing of the captured CO{sub 2}. In one case, CO{sub 2} was pumped from a land-based collection center through long pipelines laid on the ocean floor. Another case considered oceanic tanker transport of liquid carbon dioxide to an offshore floating structure for vertical injection to the ocean floor. In the latter case, a novel concept based on subsurface towing of a 3000-meter pipe, and attaching it to the offshore structure, was considered. Budgetary cost estimates indicate that for distances greater than 400 km, tanker transportation and offshore injection through a 3000-meter vertical pipe provides the best method for delivering liquid CO{sub 2} to deep ocean floor depressions. For shorter distances, CO{sub 2} delivery by parallel-laid subsea pipelines is more cost-effective. Estimated costs for 500-km transport and storage at a depth of 3000 meters by subsea pipelines and tankers were 1.5 and 1.4 dollars per ton of stored CO{sub 2}, respectively. At these prices, the economics of ocean disposal are highly favorable. Future work should focus on addressing technical issues that are critical to the deployment of a large-scale CO{sub 2} transportation and disposal system. Pipe corrosion, structural design of the transport pipe, and dispersion characteristics of sinking CO{sub 2} effluent plumes have been identified as areas that require further attention. Our planned activities in the next Phase include laboratory-scale corrosion testing, structural analysis of the pipeline, analytical and experimental simulations of CO{sub 2} discharge and dispersion, and the conceptual economic and engineering evaluation of large-scale implementation.

  6. Solving large scale structure in ten easy steps with COLA

    SciTech Connect

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J. E-mail: matiasz@ias.edu

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small-scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10{sup 9}M{sub sun}/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10{sup 11}M{sub sun}/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.

  7. Infectious diseases in large-scale cat hoarding investigations.

    PubMed

    Polak, K C; Levy, J K; Crawford, P C; Leutenegger, C M; Moriello, K A

    2014-08-01

    Animal hoarders accumulate animals in over-crowded conditions without adequate nutrition, sanitation, and veterinary care. As a result, animals rescued from hoarding frequently have a variety of medical conditions including respiratory infections, gastrointestinal disease, parasitism, malnutrition, and other evidence of neglect. The purpose of this study was to characterize the infectious diseases carried by clinically affected cats and to determine the prevalence of retroviral infections among cats in large-scale cat hoarding investigations. Records were reviewed retrospectively from four large-scale seizures of cats from failed sanctuaries from November 2009 through March 2012. The number of cats seized in each case ranged from 387 to 697. Cats were screened for feline leukemia virus (FeLV) and feline immunodeficiency virus (FIV) in all four cases and for dermatophytosis in one case. A subset of cats exhibiting signs of upper respiratory disease or diarrhea had been tested for infections by PCR and fecal flotation for treatment planning. Mycoplasma felis (78%), calicivirus (78%), and Streptococcus equi subspecies zooepidemicus (55%) were the most common respiratory infections. Feline enteric coronavirus (88%), Giardia (56%), Clostridium perfringens (49%), and Tritrichomonas foetus (39%) were most common in cats with diarrhea. The seroprevalences of FeLV and FIV were both 8%. In the one case in which cats with lesions suspicious for dermatophytosis were cultured for Microsporum canis, 69/76 lesional cats were culture-positive; of these, half were believed to be truly infected and half were believed to be fomite carriers. Cats from large-scale hoarding cases were at high risk for enteric and respiratory infections, retroviruses, and dermatophytosis. Case responders should be prepared for mass treatment of infectious diseases and should implement protocols to prevent transmission of feline or zoonotic infections during the emergency response and when

  8. Statistical Modeling of Large-Scale Scientific Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Baldwin, C; Abdulla, G; Critchlow, T

    2003-11-15

    With the advent of massively parallel computer systems, scientists are now able to simulate complex phenomena (e.g., explosions of stars). Such scientific simulations typically generate large-scale data sets over the spatio-temporal space. Unfortunately, the sheer sizes of the generated data sets make efficient exploration of them impossible. Constructing queriable statistical models is an essential step in helping scientists glean new insight from their computer simulations. We define queriable statistical models to be descriptive statistics that (1) summarize and describe the data within a user-defined modeling error, and (2) are able to answer complex range-based queries over the spatiotemporal dimensions. In this chapter, we describe systems that build queriable statistical models for large-scale scientific simulation data sets. In particular, we present our Ad-hoc Queries for Simulation (AQSim) infrastructure, which reduces the data storage requirements and query access times by (1) creating and storing queriable statistical models of the data at multiple resolutions, and (2) evaluating queries on these models of the data instead of the entire data set. Within AQSim, we focus on three simple but effective statistical modeling techniques. AQSim's first modeling technique (called univariate mean modeler) computes the "true" (unbiased) mean of systematic partitions of the data. AQSim's second statistical modeling technique (called univariate goodness-of-fit modeler) uses the Anderson-Darling goodness-of-fit method on systematic partitions of the data. Finally, AQSim's third statistical modeling technique (called multivariate clusterer) utilizes the cosine similarity measure to cluster the data into similar groups. Our experimental evaluations on several scientific simulation data sets illustrate the value of using these statistical models on large-scale simulation data sets.
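
    The "univariate mean modeler" idea — store per-partition statistics and answer range queries from the model rather than the raw data — can be sketched in a few lines. This is an illustrative stand-in, not the AQSim code; the data, block size, and query range are made up.

```python
import numpy as np

# Minimal sketch of a queriable statistical model: one mean per
# systematic partition of a large array, queried instead of the raw data.
rng = np.random.default_rng(0)
data = rng.normal(size=1_000_000)     # stand-in for simulation output
BLOCK = 1000                          # systematic partition size

# The model stores 1000 block means instead of 1e6 raw values.
model = data.reshape(-1, BLOCK).mean(axis=1)

def query_mean(lo, hi):
    """Approximate mean of data[lo:hi] using only the block means."""
    blocks = model[lo // BLOCK : -(-hi // BLOCK)]   # blocks overlapping range
    return blocks.mean()

# A range that does not align with block boundaries incurs a small,
# bounded modeling error at the two partial edge blocks.
exact = data[250_500:750_500].mean()
approx = query_mean(250_500, 750_500)
print(abs(exact - approx))   # small modeling error, 1000x less storage
```

    Multi-resolution storage, as in AQSim, would simply repeat this at several block sizes and pick the coarsest model satisfying the user's error bound.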

  9. Improving Design Efficiency for Large-Scale Heterogeneous Circuits

    NASA Astrophysics Data System (ADS)

    Gregerson, Anthony

    Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particle accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms are efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research to develop flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform for evaluating new algorithms for the CMS experiment.
Together, these research efforts help to improve the efficiency
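
    The flavor of capacity-aware graph partitioning described above can be sketched with a deliberately simple greedy heuristic. This is a hypothetical illustration, not the Multi-Personality Partitioning algorithm itself: each node is placed on the device that currently incurs the fewest cut (inter-chip) edges while respecting per-device resource capacity.

```python
# Hypothetical greedy sketch of capacity-aware partitioning: assign each
# node to the device with the fewest resulting cut edges, subject to a
# per-device resource capacity. Assumes total capacity is sufficient.
def greedy_partition(nodes, edges, capacities):
    """nodes: {name: resource_cost}; edges: [(u, v), ...];
    capacities: per-device capacity list. Returns {name: device_index}."""
    neighbors = {u: set() for u in nodes}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    load = [0] * len(capacities)
    assign = {}
    # Place heavy nodes first so capacity constraints stay satisfiable.
    for u in sorted(nodes, key=nodes.get, reverse=True):
        best, best_cut = None, None
        for dev in range(len(capacities)):
            if load[dev] + nodes[u] > capacities[dev]:
                continue
            # Count edges to neighbors already placed on a different device.
            cut = sum(1 for v in neighbors[u]
                      if v in assign and assign[v] != dev)
            if best is None or cut < best_cut:
                best, best_cut = dev, cut
        assign[u] = best
        load[best] += nodes[u]
    return assign

nodes = {"a": 2, "b": 2, "c": 1, "d": 1, "e": 1}
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("d", "e")]
parts = greedy_partition(nodes, edges, capacities=[4, 4])
print(parts)
```

    Production partitioners (and the thesis's algorithms) go far beyond this — multi-way refinement, heterogeneous "personalities" per device, and communication-cost models — but the capacity-versus-cut trade-off is the same.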

  10. Solving Large-scale Eigenvalue Problems in SciDAC Applications

    SciTech Connect

    Yang, Chao

    2005-06-29

    Large-scale eigenvalue problems arise in a number of DOE applications. This paper provides an overview of the recent development of eigenvalue computation in the context of two SciDAC applications. We emphasize the importance of Krylov subspace methods, and point out their limitations. We discuss the value of alternative approaches that are more amenable to the use of preconditioners, and report on progress in using multi-level algebraic sub-structuring techniques to speed up eigenvalue calculation. In addition to methods for linear eigenvalue problems, we also examine new approaches to solving two types of non-linear eigenvalue problems arising from SciDAC applications.
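
    As a concrete illustration of the Krylov subspace methods emphasized above, here is a minimal Lanczos sketch for the largest eigenvalue of a symmetric matrix. This is a toy stand-in for the SciDAC solvers, with a made-up test matrix whose top eigenvalue is well separated (the regime where Lanczos shines; the limitations mentioned above appear for clustered or interior eigenvalues).

```python
import numpy as np

# Illustrative Lanczos iteration with full reorthogonalization: the largest
# Ritz value of the k x k tridiagonal projection approximates lambda_max.
def lanczos_extreme(A, k=30, seed=0):
    """Approximate the largest eigenvalue of symmetric A in k Lanczos steps."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    V = np.zeros((n, k))
    T = np.zeros((k, k))
    v = rng.normal(size=n)
    v /= np.linalg.norm(v)
    for j in range(k):
        V[:, j] = v
        w = A @ v
        T[j, j] = v @ w
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # full reorthogonalization
        if j < k - 1:
            T[j, j + 1] = T[j + 1, j] = np.linalg.norm(w)
            v = w / T[j + 1, j]
    return np.linalg.eigvalsh(T)[-1]   # largest Ritz value

# Test matrix with a known, well-separated top eigenvalue of 2.0.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(200, 200)))
eigs = np.linspace(0.0, 1.0, 200)
eigs[-1] = 2.0
A = Q @ np.diag(eigs) @ Q.T

est = lanczos_extreme(A)
print(est)   # ≈ 2.0 after only 30 of 200 possible steps
```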

  11. Towards large scale production and separation of carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Alvarez, Noe T.

    Since their discovery, carbon nanotubes (CNTs) have boosted the research and applications of nanotechnology; however, many applications of CNTs are inaccessible because they depend upon large-scale CNT production and separations. Type, chirality and diameter control of CNTs determine many of their physical properties, and such control is still not accessible. This thesis studies the fundamentals for scalable selective reactions of HiPCo CNTs as well as the early phase of routes to an inexpensive approach for large-scale CNT production. In the growth part, this thesis covers a complete wet-chemistry process of catalyst and catalyst support deposition for growth of vertically aligned (VA) CNTs. A wet-chemistry preparation process has significant importance for CNT synthesis through chemical vapor deposition (CVD). CVD is by far the most suitable and inexpensive process for large-scale CNT production when compared to other common processes such as laser ablation and arc discharge. However, its potential has been limited by low-yielding and difficult preparation processes of catalyst and its support, which have reduced its competitiveness. The wet-chemistry process takes advantage of current nanoparticle technology to deposit the catalyst and the catalyst support as a thin film of nanoparticles, making the protocol simple compared to electron beam evaporation and sputtering processes. In the CNT selective reactions part, this thesis studies UV irradiation of individually dispersed HiPCo CNTs that generates auto-selective reactions in the liquid phase with good control over their diameter and chirality. This technique is ideal for large-scale and continuous-process separations of CNTs by diameter and type. Additionally, an innovative, simple catalyst deposition through abrasion is demonstrated. Simple friction between the catalyst and the substrates deposits a high enough density of metal catalyst particles for successful CNT growth. This simple approach has

  12. Why large-scale seasonal streamflow forecasts are feasible

    NASA Astrophysics Data System (ADS)

    Bierkens, M. F.; Candogan Yossef, N.; Van Beek, L. P.

    2011-12-01

    Seasonal forecasts of precipitation and temperature, using either statistical or dynamic prediction, have been around for almost 2 decades. The skill of these forecasts differs both in space and time, with highest skill in areas heavily influenced by SST anomalies such as El Niño, or areas where land surface properties have a major impact on, e.g., monsoon strength, such as the vegetation cover of the Sahel region or the snow cover of the Tibetan plateau. However, the skill of seasonal forecasts is limited in most regions, with anomaly correlation coefficients varying between 0.2 and 0.5 for 1-3 month precipitation totals. This raises the question of whether seasonal hydrological forecasting is feasible. Here, we make the case that it is. Using the example of statistical forecasts of NAO-strength and related precipitation anomalies over Europe, we show that the skill of large-scale streamflow forecasts is generally much higher than that of the precipitation forecasts themselves, provided that the initial state of the system is accurately estimated. In the latter case, even the precipitation climatology can produce skillful results. This is due to the inertia of the hydrological system rooted in the storage of soil moisture, groundwater and snow pack, as corroborated by a recent study using snow observations for seasonal streamflow forecasting in the Western US. These examples suggest that for accurate seasonal hydrological forecasting, correct state estimation is more important than accurate seasonal meteorological forecasts. However, large-scale estimation of hydrological states is difficult, and validation of large-scale hydrological models often reveals large biases in, e.g., streamflow estimates. Fortunately, as shown with a validation study of the global model PCR-GLOBWB, these biases are of less importance when seasonal forecasts are evaluated in terms of their ability to reproduce anomalous flows and extreme events, i.e. by anomaly correlations or categorical quantile
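
    The skill metric quoted above, the anomaly correlation coefficient (ACC), is the correlation of forecast and observed departures from climatology. A minimal sketch with made-up numbers:

```python
import numpy as np

# Anomaly correlation coefficient: correlate forecast and observation
# AFTER removing the climatological mean, so skill reflects anomalies,
# not the seasonal cycle. Values here are hypothetical.
def anomaly_correlation(forecast, observed, climatology):
    fa = forecast - climatology     # forecast anomalies
    oa = observed - climatology     # observed anomalies
    return (fa * oa).sum() / np.sqrt((fa**2).sum() * (oa**2).sum())

clim = np.array([10.0, 12.0, 15.0, 14.0, 11.0])   # e.g. monthly normals
obs  = np.array([11.0, 13.5, 14.0, 15.0, 10.0])
fcst = np.array([10.5, 13.0, 14.5, 14.5, 10.5])
print(round(anomaly_correlation(fcst, obs, clim), 3))   # → 0.99
```

    An ACC of 0.2-0.5, as quoted for precipitation, means only a small fraction of anomaly variance is explained — which is why the abstract argues that accurate initial-state estimation matters more.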

  13. Quantum computation for large-scale image classification

    NASA Astrophysics Data System (ADS)

    Ruan, Yue; Chen, Hanwu; Tan, Jianing; Li, Xi

    2016-07-01

    Due to the lack of an effective quantum feature extraction method, there is currently no effective way to perform quantum image classification or recognition. In this paper, for the first time, a global quantum feature extraction method based on Schmidt decomposition is proposed. A revised quantum learning algorithm is also proposed that classifies images by computing the Hamming distance of these features. Based on experimental results from the benchmark database Caltech 101 and an analysis of the algorithm, an effective approach to large-scale image classification in the big-data setting is derived and proposed.
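
    The classical classification rule the revised algorithm builds on — nearest neighbor by Hamming distance over binary feature vectors — is easy to sketch. The quantum feature extraction is not reproduced here; the features and labels below are made up for illustration.

```python
import numpy as np

# Nearest-neighbor classification by Hamming distance over binary
# feature vectors (classical sketch; features here are hypothetical).
def hamming(a, b):
    """Number of positions where binary vectors a and b differ."""
    return int((a != b).sum())

def classify(query, features, labels):
    """Return the label of the training vector closest in Hamming distance."""
    dists = [hamming(query, f) for f in features]
    return labels[int(np.argmin(dists))]

features = np.array([[0, 0, 1, 1],
                     [0, 1, 1, 1],
                     [1, 1, 0, 0],
                     [1, 0, 0, 0]])
labels = ["cat", "cat", "car", "car"]
print(classify(np.array([0, 0, 1, 0]), features, labels))   # → cat
```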

  14. UAV Data Processing for Large Scale Topographical Mapping

    NASA Astrophysics Data System (ADS)

    Tampubolon, W.; Reinhardt, W.

    2014-06-01

    Large scale topographical mapping in third world countries is a prominent challenge for the geospatial industry nowadays. On one side the demand is significantly increasing, while on the other it is constrained by the limited budgets available for mapping projects. Since the advent of Act Nr.4/yr.2011 about Geospatial Information in Indonesia, large scale topographical mapping has been a high priority for supporting nationwide development, e.g. detailed spatial planning. Usually large scale topographical mapping relies on conventional aerial survey campaigns in order to provide high resolution 3D geospatial data sources. Having grown widely as a leisure hobby, aero models in the form of the so-called Unmanned Aerial Vehicle (UAV) offer alternative semi-photogrammetric aerial data acquisition possibilities suitable for a relatively small Area of Interest (AOI), i.e. <5,000 hectares. For detailed spatial planning purposes in Indonesia this area size can be used as a mapping unit, since planning usually concentrates at the sub-district (kecamatan) level. In this paper different camera and processing software systems are analyzed to identify the optimum UAV data acquisition campaign components in combination with the data processing scheme. The selected AOI covers the cultural heritage of Borobudur Temple as one of the Seven Wonders of the World. A detailed accuracy assessment concentrates on the object features of the temple in the first place. Feature compilation involving planimetric objects (2D) and digital terrain models (3D) is integrated in order to provide Digital Elevation Models (DEM) as the main interest of the topographic mapping activity. In this research, incorporating the optimum number of GCPs in the UAV photo data processing increases the accuracy along with its high resolution of 5 cm Ground Sampling Distance (GSD). Finally this result will be used as the benchmark for alternative geospatial
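
    The 5 cm Ground Sampling Distance quoted above follows from the standard photogrammetric relation between pixel pitch, flying height and focal length. A back-of-envelope sketch with hypothetical camera parameters (not taken from the paper):

```python
# GSD = pixel size * flying height / focal length (nadir view, flat
# terrain). The camera parameters below are hypothetical.
def ground_sampling_distance(pixel_size_m, altitude_m, focal_length_m):
    """Ground footprint of one image pixel, in metres."""
    return pixel_size_m * altitude_m / focal_length_m

# e.g. a 4.4 um pixel pitch and a 24 mm lens flown at ~273 m AGL give ~5 cm
gsd = ground_sampling_distance(4.4e-6, 273.0, 24e-3)
print(round(gsd * 100, 1), "cm")   # → 5.0 cm
```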

  15. Inflation in de Sitter spacetime and CMB large scale anomaly

    NASA Astrophysics Data System (ADS)

    Zhao, Dong; Li, Ming-Hua; Wang, Ping; Chang, Zhe

    2015-09-01

    The influence of cosmological constant-type dark energy in the early universe is investigated. This is accommodated by a new dispersion relation in de Sitter spacetime. We perform a global fit to explore the cosmological parameter space by using the CosmoMC package with the recently released Planck TT and WMAP polarization datasets. Using the results from the global fit, we compute a new CMB temperature-temperature (TT) spectrum. The obtained TT spectrum has lower power compared with that based on the ΛCDM model at large scales. Supported by National Natural Science Foundation of China (11375203)

  16. Large scale obscuration and related climate effects open literature bibliography

    SciTech Connect

    Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.

    1994-05-01

    Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called "Nuclear Winter Controversy" in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

  17. Large-scale genotoxicity assessments in the marine environment.

    PubMed

    Hose, J E

    1994-12-01

    There are a number of techniques for detecting genotoxicity in the marine environment, and many are applicable to large-scale field assessments. Certain tests can be used to evaluate responses in target organisms in situ while others utilize surrogate organisms exposed to field samples in short-term laboratory bioassays. Genotoxicity endpoints appear distinct from traditional toxicity endpoints, but some have chemical or ecotoxicologic correlates. One versatile endpoint, the frequency of anaphase aberrations, has been used in several large marine assessments to evaluate genotoxicity in the New York Bight, in sediment from San Francisco Bay, and following the Exxon Valdez oil spill. PMID:7713029

  18. Synthesis and sensing application of large scale bilayer graphene

    NASA Astrophysics Data System (ADS)

    Hong, Sung Ju; Yoo, Jung Hoon; Baek, Seung Jae; Park, Yung Woo

    2012-02-01

    We have synthesized large scale bilayer graphene by using Chemical Vapor Deposition (CVD) at atmospheric pressure. Bilayer graphene was grown using CH4, H2 and Ar gases. The growth temperature was 1050 °C. Conventional FET measurement shows ambipolar transfer characteristics. Results of Raman spectroscopy, Atomic Force Microscopy (AFM) and Transmission Electron Microscopy (TEM) indicate the film is bilayer graphene. In particular, adlayer structures, which interrupt uniformity, were reduced under low methane flow conditions. Furthermore, large-size CVD bilayer graphene films can be explored for sensor applications. Using a conventional photolithography process, we have fabricated device array structures and studied their sensing behavior.

  19. Implementation of Large Scale Integrated (LSI) circuit design software

    NASA Technical Reports Server (NTRS)

    Kuehlthau, R. L.; Pitts, E. R.

    1976-01-01

    Portions of the Computer Aided Design and Test (CADAT) system, a collection of Large Scale Integrated (LSI) circuit design programs, were modified and upgraded. Major modifications were made to the Mask Analysis Program in the form of additional operating commands and file processing options. Modifications were also made to the Artwork Interactive Design System (AIDS) to correct some deficiencies in the original program as well as to add several new command features related to improving the response of AIDS when dealing with large files. The remaining work was concerned with updating various programs within CADAT to incorporate silicon-on-sapphire silicon-gate technology.

  20. Large-scale sodium spray fire code validation (SOFICOV) test

    SciTech Connect

    Jeppson, D.W.; Muhlestein, L.D.

    1985-01-01

    A large-scale sodium spray fire code validation test was performed in the HEDL 850-m{sup 3} Containment System Test Facility (CSTF) as part of the Sodium Spray Fire Code Validation (SOFICOV) program. Six hundred fifty-eight kilograms of sodium were sprayed into an air atmosphere for a period of 2400 s. The sodium spray droplet sizes and spray pattern distribution were estimated. The containment atmosphere temperature and pressure response, containment wall temperature response and sodium reaction rate with oxygen were measured. These results are compared to post-test predictions using the SPRAY and NACOM computer codes.

  1. Radiative shocks on large scale lasers. Preliminary results

    NASA Astrophysics Data System (ADS)

    Leygnac, S.; Bouquet, S.; Stehle, C.; Barroso, P.; Batani, D.; Benuzzi, A.; Cathala, B.; Chièze, J.-P.; Fleury, X.; Grandjouan, N.; Grenier, J.; Hall, T.; Henry, E.; Koenig, M.; Lafon, J. P. J.; Malka, V.; Marchet, B.; Merdji, H.; Michaut, C.; Poles, L.; Thais, F.

    2001-05-01

    Radiative shocks, whose structure is strongly influenced by the radiation field, are present in various astrophysical objects (circumstellar envelopes of variable stars, supernovae ...). Their modeling is very difficult and thus benefits from experimental information. This approach is now possible using large scale lasers. Preliminary experiments were performed with the nanosecond LULI laser at Ecole Polytechnique (France) in 2000. A radiative shock was obtained in a low pressure xenon cell. The preparation of such experiments and their interpretation are performed using analytical calculations and numerical simulations.

  2. On the analysis of large-scale genomic structures.

    PubMed

    Oiwa, Nestor Norio; Goldman, Carla

    2005-01-01

    We apply methods from statistical physics (histograms, correlation functions, fractal dimensions, and singularity spectra) to characterize the large-scale structure of the distribution of nucleotides along genomic sequences. We discuss the role of the extension of noncoding segments ("junk DNA") for the genomic organization, and the connection between the coding segment distribution and chromatin condensation in higher eukaryotes. The following sequences taken from GenBank were analyzed: complete genome of Xanthomonas campestris, complete genome of yeast, chromosome V of Caenorhabditis elegans, and human chromosome XVII around gene BRCA1. The results are compared with random and periodic sequences and those generated by simple and generalized fractal Cantor sets. PMID:15858230

  3. Large-scale genotoxicity assessments in the marine environment

    SciTech Connect

    Hose, J.E.

    1994-12-01

    There are a number of techniques for detecting genotoxicity in the marine environment, and many are applicable to large-scale field assessments. Certain tests can be used to evaluate responses in target organisms in situ while others utilize surrogate organisms exposed to field samples in short-term laboratory bioassays. Genotoxicity endpoints appear distinct from traditional toxicity endpoints, but some have chemical or ecotoxicologic correlates. One versatile end point, the frequency of anaphase aberrations, has been used in several large marine assessments to evaluate genotoxicity in the New York Bight, in sediment from San Francisco Bay, and following the Exxon Valdez oil spill. 31 refs., 2 tabs.

  4. Floodplain management in Africa: Large scale analysis of flood data

    NASA Astrophysics Data System (ADS)

    Padi, Philip Tetteh; Baldassarre, Giuliano Di; Castellarin, Attilio

    2011-01-01

    To mitigate a continuously increasing flood risk in Africa, sustainable actions are urgently needed. In this context, we describe a comprehensive statistical analysis of flood data in the African continent. The study refers to quality-controlled, large and consistent databases of flood data, i.e. maximum discharge values and time series of annual maximum flows. Probabilistic envelope curves are derived for the African continent by means of a large scale regional analysis. Moreover, some initial insights on the statistical characteristics of African floods are provided. The results of this study are relevant and can be used to provide indications supporting flood management in Africa.

  5. Novel algorithm of large-scale simultaneous linear equations.

    PubMed

    Fujiwara, T; Hoshi, T; Yamamoto, S; Sogabe, T; Zhang, S-L

    2010-02-24

    We review our recently developed methods of solving large-scale simultaneous linear equations and applications to electronic structure calculations both in one-electron theory and many-electron theory. The core method is the shifted COCG (conjugate orthogonal conjugate gradient) method based on Krylov subspaces; the most important issues for applications are the shift equation and the seed-switching method, which greatly reduce the computational cost. Applications to nano-scale Si crystals and the double orbital extended Hubbard model are presented. PMID:21386384
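
    The property that makes shifted Krylov solvers cheap is that the Krylov subspace of A and b is identical for every shifted matrix A + σI, so one basis serves all shifts. The sketch below illustrates this with plain symmetric Lanczos on a made-up SPD matrix; the paper's COCG method handles complex symmetric matrices and avoids storing the basis, which this simplified analogue does not attempt.

```python
import numpy as np

# Shift-invariance of Krylov subspaces: build ONE Lanczos basis for (A, b),
# then solve (A + sigma*I) x = b for many shifts in that shared subspace.
def lanczos_basis(A, b, k):
    """k-step Lanczos with full reorthogonalization: A V ≈ V T."""
    n = len(b)
    V = np.zeros((n, k))
    T = np.zeros((k, k))
    v = b / np.linalg.norm(b)
    for j in range(k):
        V[:, j] = v
        w = A @ v
        T[j, j] = v @ w
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)
        if j < k - 1:
            T[j, j + 1] = T[j + 1, j] = np.linalg.norm(w)
            v = w / T[j + 1, j]
    return V, T

rng = np.random.default_rng(0)
M = rng.normal(size=(120, 120))
A = M @ M.T + 120 * np.eye(120)     # symmetric positive definite
b = rng.normal(size=120)

k = 40
V, T = lanczos_basis(A, b, k)
e1 = np.zeros(k)
e1[0] = np.linalg.norm(b)

for sigma in [0.5, 1.0, 2.0]:
    # Solve the small projected system (T + sigma I) y = |b| e1, then lift.
    x = V @ np.linalg.solve(T + sigma * np.eye(k), e1)
    resid = np.linalg.norm((A + sigma * np.eye(120)) @ x - b)
    print(sigma, resid)   # all shifts converge from the one shared basis
```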

  6. A multilevel optimization of large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Sundareshan, M. K.

    1976-01-01

    A multilevel feedback control scheme is proposed for optimization of large-scale systems composed of a number of (not necessarily weakly coupled) subsystems. Local controllers are used to optimize each subsystem, ignoring the interconnections. Then, a global controller may be applied to minimize the effect of interconnections and improve the performance of the overall system. At the cost of suboptimal performance, this optimization strategy ensures invariance of suboptimality and stability of the systems under structural perturbations whereby subsystems are disconnected and again connected during operation.

  7. Design of a large-scale CFB boiler

    SciTech Connect

    Darling, S.; Li, S.

    1997-12-31

    Many CFB boilers sized 100--150 MWe are in operation, and several others sized 150--250 MWe are in operation or under construction. The next step for CFB technology is the 300--400 MWe size range. This paper will describe Foster Wheeler's large-scale CFB boiler experience and the design for a 300 MWe CFB boiler. The authors will show how the design incorporates Foster Wheeler's unique combination of extensive utility experience and CFB boiler experience. All the benefits of CFB technology which include low emissions, fuel flexibility, low maintenance and competitive cost are now available in the 300--400 MWe size range.

  8. [National Strategic Promotion for Large-Scale Clinical Cancer Research].

    PubMed

    Toyama, Senya

    2016-04-01

    The number of clinical research studies by clinical cancer study groups has been decreasing this year in Japan. They say the reason is the abolition of donations to the groups from the pharmaceutical companies after the Diovan scandal. But I suppose the fundamental problem is that a government-supported, large-scale clinical cancer study system for evidence-based medicine (EBM) has not been fully established. An urgent establishment of the system based on a national strategy is needed for the cancer patients and for public health promotion. PMID:27220800

  9. Large-Scale periodic solar velocities: An observational study

    NASA Technical Reports Server (NTRS)

    Dittmer, P. H.

    1977-01-01

    Observations of large-scale solar velocities were made using the mean field telescope and Babcock magnetograph of the Stanford Solar Observatory. Observations were made in the magnetically insensitive ion line at 5124 Å, with light from the center (limb) of the disk right (left) circularly polarized, so that the magnetograph measures the difference in wavelength between center and limb. Computer calculations are made of the wavelength difference produced by global pulsations for spherical harmonics up to second order and of the signal produced by displacing the solar image relative to polarizing optics or diffraction grating.
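
    The measurement principle above — a center-to-limb wavelength difference mapping to a line-of-sight velocity — follows the Doppler relation v = c·Δλ/λ. A worked sketch with hypothetical numbers (the shift value below is illustrative, not from the paper):

```python
# Doppler relation for the 5124 Å line: v = c * (delta_lambda / lambda0).
C = 2.998e8          # speed of light, m/s

def doppler_velocity(delta_lambda_angstrom, lambda0_angstrom=5124.0):
    """Line-of-sight velocity implied by a wavelength shift (same units)."""
    return C * delta_lambda_angstrom / lambda0_angstrom

# e.g. a hypothetical 0.1 mÅ center-to-limb shift corresponds to ~5.9 m/s
v = doppler_velocity(1e-4)
print(round(v, 1), "m/s")
```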

  10. Laser Welding of Large Scale Stainless Steel Aircraft Structures

    NASA Astrophysics Data System (ADS)

    Reitemeyer, D.; Schultz, V.; Syassen, F.; Seefeld, T.; Vollertsen, F.

    In this paper a welding process for large scale stainless steel structures is presented. The process was developed according to the requirements of an aircraft application: stringers are welded onto a skin sheet in a T-joint configuration. The 0.6 mm thick parts are welded with a thin-disc laser; seam lengths up to 1920 mm are demonstrated. The welding process causes angular distortions of the skin sheet, which are compensated by a subsequent laser straightening process. Based on a model, straightening process parameters matching the induced welding distortion are predicted. The process combination is successfully applied to stringer-stiffened specimens.

  11. Enabling Large-Scale Biomedical Analysis in the Cloud

    PubMed Central

    Lin, Ying-Chih; Yu, Chin-Sheng; Lin, Yen-Jen

    2013-01-01

    Recent progress in high-throughput instrumentations has led to an astonishing growth in both volume and complexity of biomedical data collected from various sources. The planet-size data brings serious challenges to the storage and computing technologies. Cloud computing is an alternative to crack the nut because it gives concurrent consideration to enable storage and high-performance computing on large-scale data. This work briefly introduces the data intensive computing system and summarizes existing cloud-based resources in bioinformatics. These developments and applications would facilitate biomedical research to make the vast amount of diversification data meaningful and usable. PMID:24288665

  12. Generation of Large-Scale Winds in Horizontally Anisotropic Convection.

    PubMed

    von Hardenberg, J; Goluskin, D; Provenzale, A; Spiegel, E A

    2015-09-25

    We simulate three-dimensional, horizontally periodic Rayleigh-Bénard convection, confined between free-slip horizontal plates and rotating about a distant horizontal axis. When both the temperature difference between the plates and the rotation rate are sufficiently large, a strong horizontal wind is generated that is perpendicular to both the rotation vector and the gravity vector. The wind is turbulent, large-scale, and vertically sheared. Horizontal anisotropy, engendered here by rotation, appears necessary for such wind generation. Most of the kinetic energy of the flow resides in the wind, and the vertical turbulent heat flux is much lower on average than when there is no wind. PMID:26451558

  13. Large-Scale Purification of Peroxisomes for Preparative Applications.

    PubMed

    Cramer, Jana; Effelsberg, Daniel; Girzalsky, Wolfgang; Erdmann, Ralf

    2015-09-01

    This protocol is designed for large-scale isolation of highly purified peroxisomes from Saccharomyces cerevisiae using two consecutive density gradient centrifugations. Instructions are provided for harvesting up to 60 g of oleic acid-induced yeast cells for the preparation of spheroplasts and generation of organellar pellets (OPs) enriched in peroxisomes and mitochondria. The OPs are loaded onto eight continuous 36%-68% (w/v) sucrose gradients. After centrifugation, the peak peroxisomal fractions are determined by measurement of catalase activity. These fractions are subsequently pooled and subjected to a second density gradient centrifugation using 20%-40% (w/v) Nycodenz. PMID:26330621

  14. Enabling large-scale biomedical analysis in the cloud.

    PubMed

    Lin, Ying-Chih; Yu, Chin-Sheng; Lin, Yen-Jen

    2013-01-01

    Recent progress in high-throughput instrumentations has led to an astonishing growth in both volume and complexity of biomedical data collected from various sources. The planet-size data brings serious challenges to the storage and computing technologies. Cloud computing is an alternative to crack the nut because it gives concurrent consideration to enable storage and high-performance computing on large-scale data. This work briefly introduces the data intensive computing system and summarizes existing cloud-based resources in bioinformatics. These developments and applications would facilitate biomedical research to make the vast amount of diversification data meaningful and usable. PMID:24288665

  15. Monochromatic waves induced by large-scale parametric forcing.

    PubMed

    Nepomnyashchy, A; Abarzhi, S I

    2010-03-01

    We study the formation and stability of monochromatic waves induced by large-scale modulations in the framework of the complex Ginzburg-Landau equation with parametric nonresonant forcing dependent on the spatial coordinate. In the limiting case of forcing with very large characteristic length scale, analytical solutions for the equation are found and conditions of their existence are outlined. Stability analysis indicates that the interval of existence of a monochromatic wave can contain a subinterval where the wave is stable. We discuss potential applications of the model in rheology, fluid dynamics, and optics. PMID:20365907

  16. Investigation of the Large Scale Evolution and Topology of Coronal Mass Ejections in the Solar Wind

    NASA Technical Reports Server (NTRS)

    Riley, Peter

    1999-01-01

    This investigation is concerned with the large-scale evolution and topology of Coronal Mass Ejections (CMEs) in the solar wind. During this reporting period we have analyzed a series of low density intervals in the ACE (Advanced Composition Explorer) plasma data set that bear many similarities to CMEs. We have begun a series of 3D, MHD (Magnetohydrodynamics) coronal models to probe potential causes of these events. We also edited two manuscripts concerning the properties of CMEs in the solar wind. One was re-submitted to the Journal of Geophysical Research.

  17. Contributions to the understanding of large-scale coherent structures in developing free turbulent shear flows

    NASA Technical Reports Server (NTRS)

    Liu, J. T. C.

    1986-01-01

    Advances in the mechanics of boundary layer flow are reported. The physical problem of large-scale coherent structures in real, developing free turbulent shear flows is addressed from the standpoint of nonlinear hydrodynamic stability. Whether fine-grained turbulence is present or absent, the problem lacks a small parameter. The problem is formulated from conservation principles, which express its dynamics in a form directed toward extracting the most physical information; it is emphasized, however, that the formulation must also involve approximations.

  18. Large-Scale Demonstration of Liquid Hydrogen Storage with Zero Boiloff for In-Space Applications

    NASA Technical Reports Server (NTRS)

    Hastings, L. J.; Bryant, C. B.; Flachbart, R. H.; Holt, K. A.; Johnson, E.; Hedayat, A.; Hipp, B.; Plachta, D. W.

    2010-01-01

    Cryocooler and passive insulation technology advances have substantially improved prospects for zero-boiloff cryogenic storage. Therefore, a cooperative effort by NASA's Ames Research Center, Glenn Research Center, and Marshall Space Flight Center (MSFC) was implemented to develop zero-boiloff concepts for in-space cryogenic storage. Described herein is one program element: a large-scale, zero-boiloff demonstration using the MSFC multipurpose hydrogen test bed (MHTB). A commercial cryocooler was interfaced with an existing MHTB spray bar mixer and insulation system in a manner that enabled a balance between incoming and extracted thermal energy.
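The "balance between incoming and extracted thermal energy" is, to first order, a simple energy balance: boiloff goes to zero when the cryocooler removes at least as much heat as leaks through the insulation. A minimal sketch, with assumed numbers (not MHTB test data) and an approximate latent-heat value:

```python
# Illustrative zero-boiloff energy balance. Heat-leak and cooling values below
# are assumed for illustration, not taken from the MHTB demonstration.

H_FG_LH2 = 446e3  # approx. latent heat of vaporization of liquid hydrogen, J/kg

def boiloff_rate(q_leak_w, q_cooler_w, h_fg=H_FG_LH2):
    """Net boiloff mass rate in kg/s; zero when the cooler extracts all leaked heat."""
    net_heat = max(q_leak_w - q_cooler_w, 0.0)
    return net_heat / h_fg

# Cooler matched to a 20 W heat leak -> zero boiloff.
print(boiloff_rate(20.0, 20.0))  # -> 0.0

# Cooler off: the same 20 W leak boils off a few kg of LH2 per day.
print(boiloff_rate(20.0, 0.0) * 86400, "kg/day")
```

The spray bar mixer mentioned above serves to destratify the liquid so the extracted heat is representative of the whole tank, not just the fluid near the cooler interface.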

  19. IMPROVEMENT OF WEAR COMPONENT'S PERFORMANCE BY UTILIZING ADVANCED MATERIALS AND NEW MANUFACTURING TECHNOLOGIES: CASTCON PROCESS FOR MINING APPLICATIONS

    SciTech Connect

    Xiaodi Huang; Richard Gertsch

    2005-02-04

    Michigan Technological University, together with The Robbins Group, Advanced Ceramic Research, Advanced Ceramic Manufacturing, and Superior Rock Bits, evaluated a new process and a new material for producing drill bit inserts and disc cutters for the mining industry. Difficulties in the material preparation stage slowed the research initially. Prototype testing of the drill bit inserts showed that the new inserts did not perform up to the current state of the art. Due to difficulties in the prototype production of the disc cutters, the disc cutter was manufactured but not tested. Although much promising information was obtained as a result of this project, the objective of developing an effective means for producing rock drill bits and rock disc cutters that last longer, increase energy efficiency and penetration rate, and lower overall production cost was not met.

  20. V1.6 Development of Advanced Manufacturing Technologies for Low Cost Hydrogen Storage Vessels

    SciTech Connect

    Leavitt, Mark; Lam, Patrick; Nelson, Karl M.; Johnson, Brice A.; Johnson, Kenneth I.; Alvine, Kyle J.; Ruiz, Antonio; Adams, Jesse

    2012-10-01

    The goal of this project is to develop an innovative manufacturing process for Type IV high-pressure hydrogen storage vessels, with the intent to significantly lower manufacturing costs. Part of the development is to integrate the features of high-precision automated fiber placement (AFP) with commercial filament winding (FW). Evaluation of an alternative fiber to replace a portion of the baseline fiber will help to reduce costs further.
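Fiber cost dominates Type IV vessel cost largely because wall thickness scales with pressure. A first-order sizing sketch using the thin-wall hoop-stress formula illustrates this; the pressure, radius, allowable stress, and safety factor below are assumptions for illustration, not values from the report:

```python
# Hedged sketch: first-order sizing of the hoop-direction composite thickness
# of a cylindrical pressure vessel via the thin-wall hoop-stress formula
# t = SF * P * r / sigma_allow. All numeric inputs are assumed, not from the report.

def hoop_thickness_m(pressure_pa, radius_m, allowable_pa, safety_factor=2.25):
    """Required hoop-layer thickness (m) for a thin-walled cylinder."""
    return safety_factor * pressure_pa * radius_m / allowable_pa

# Example: 70 MPa service pressure, 0.2 m radius,
# 2.0 GPa assumed translated fiber strength.
t = hoop_thickness_m(70e6, 0.2, 2.0e9)
print(t)  # thickness in meters, on the order of 1.5e-2
```

This is why replacing part of the baseline fiber with a lower-cost alternative, as proposed above, translates directly into vessel cost savings: the composite mass scales roughly with this thickness.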