Sample records for systems require large

  1. Advanced technology requirements for large space structures. Part 5: Atlas program requirements

    NASA Technical Reports Server (NTRS)

    Katz, E.; Lillenas, A. N.; Broddy, J. A.

    1977-01-01

    The results of a special study which identifies and assigns priorities to technology requirements needed to accomplish a particular scenario of future large area space systems are described. Proposed future systems analyzed for technology requirements included large Electronic Mail, Microwave Radiometer, and Radar Surveillance Satellites. Twenty technology areas were identified as requirements to develop the proposed space systems.

  2. Large Space Antenna Systems Technology, 1984

    NASA Technical Reports Server (NTRS)

    Boyer, W. J. (Compiler)

    1985-01-01

    Papers are presented which provide a comprehensive review of space missions requiring large antenna systems and of the status of key technologies required to enable these missions. Topic areas include mission applications for large space antenna systems, large space antenna structural systems, materials and structures technology, structural dynamics and control technology, electromagnetics technology, large space antenna systems and the space station, and flight test and evaluation.

  3. An informal paper on large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Ho, Y. C.

    1975-01-01

    Large scale systems are defined as systems requiring more than one decision maker to control the system. Decentralized control and decomposition are discussed for large scale dynamic systems. Information and many-person decision problems are analyzed.

  4. Technology requirements and readiness for very large vehicles

    NASA Technical Reports Server (NTRS)

    Conner, D. W.

    1979-01-01

    Common concerns of very large vehicles in the areas of economics, transportation system interfaces and operational problems were reviewed regarding their influence on vehicle configurations and technology. Fifty-four technology requirements were identified which are judged to be unique, or particularly critical, to very large vehicles. The requirements were about equally divided among the four general areas of aero/hydrodynamics, propulsion and acoustics, structures, and vehicle systems and operations. The state of technology readiness was judged to be poor to fair for slightly more than one half of the requirements. In the classic disciplinary areas, the state of technology readiness appears to be more advanced than for vehicle systems and operations.

  5. Large space systems technology electronics: Data and power distribution

    NASA Technical Reports Server (NTRS)

    Dunbar, W. G.

    1980-01-01

    The development of hardware technology and manufacturing techniques required to meet space platform and antenna system needs in the 1980s is discussed. Preliminary designs for manned and automatically assembled space power system cables, connectors, and grounding and bonding materials and techniques are reviewed. Connector concepts, grounding design requirements, and bonding requirements are discussed. The problem of particulate debris contamination for large structure spacecraft is addressed.

  6. Sensemaking in a Value Based Context for Large Scale Complex Engineered Systems

    NASA Astrophysics Data System (ADS)

    Sikkandar Basha, Nazareen

The design and development of Large-Scale Complex Engineered Systems (LSCES) requires the involvement of multiple teams, numerous levels of the organization, and interactions with large numbers of people and interdisciplinary departments. Traditionally, requirements-driven Systems Engineering (SE) is used in the design and development of these LSCES. The requirements capture the preferences of the stakeholders for the LSCES. Due to the complexity of the system, multiple levels of interaction are required to elicit the requirements within the organization. Since LSCES involve people and interactions between teams and interdisciplinary departments, they are socio-technical in nature. Requirements elicitation in most large-scale system projects is subject to creep in time and cost due to the uncertainty and ambiguity of requirements during design and development. In an organizational structure, cost and time overruns can occur at any level and iterate back and forth, compounding both. Past research has shown that rigorous approaches such as value-based design can be used to control such creep, but before these approaches can be applied, the decision maker needs a proper understanding of requirements creep and of the state of the system when the creep occurs. Sensemaking is used to understand the state of the system when creep occurs and to provide guidance to the decision maker. This research proposes the use of the Cynefin framework, a sensemaking framework, in the design and development of LSCES. It can aid in understanding the system and in decision making, minimizing the value gap due to requirements creep by eliminating the ambiguity that arises during design and development. A sample hierarchical organization is used to demonstrate, in terms of cost and time, the state of the system at the occurrence of requirements creep using the Cynefin framework. 
These trials are repeated for different requirements and at different sub-system levels. The results show that the Cynefin framework can be used to improve the value of the system and for predictive analysis. Decision makers can use these findings, together with rigorous approaches, to improve the design of Large-Scale Complex Engineered Systems.

  7. LSST system analysis and integration task for an advanced science and application space platform

    NASA Technical Reports Server (NTRS)

    1980-01-01

To support the development of an advanced science and application space platform (ASASP), requirements were defined for a representative set of payloads requiring large separation distances, selected from the Science and Applications Space Platform data base. These payloads were a 100 meter diameter atmospheric gravity wave antenna, a 100 meter by 100 meter particle beam injection experiment, a 2 meter diameter, 18 meter long astrometric telescope, and a 15 meter diameter, 35 meter long large ambient deployable IR telescope. A low earth orbit at 500 km altitude and 56 deg inclination was selected as the best compromise for meeting payload requirements. Platform subsystems were defined which would support the payload requirements, and a physical platform concept was developed. Structural system requirements were developed, including utilities accommodation, interface requirements, and platform strength and stiffness requirements. An attitude control system concept was also described. The resultant ASASP concept was analyzed, and technological developments deemed necessary in the area of large space systems were recommended.

  8. Engineering large-scale agent-based systems with consensus

    NASA Technical Reports Server (NTRS)

    Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.

    1994-01-01

    The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge based agents (KBA) which engage in a collaborative problem solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefits of this approach are that requirements are traceable into design components and code thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.

  9. Precision requirements and innovative manufacturing for ultrahigh precision laser interferometry of gravitational-wave astronomy

    NASA Astrophysics Data System (ADS)

    Ni, Wei-Tou; Han, Sen; Jin, Tao

    2016-11-01

With the LIGO announcement of the first direct detection of gravitational waves (GWs), GW astronomy was formally ushered into our age. After one hundred years of theoretical investigation and fifty years of experimental endeavor, this is a historical landmark not just for physics and astronomy, but also for industry and manufacturing. The challenge and opportunity for industry is precision and innovative manufacturing at large size: production of large and homogeneous optical components, optical diagnosis of large components, high-reflectance dielectric coating of large mirrors, manufacturing of components for ultrahigh vacuum of large volume, manufacturing of highly attenuating vibration isolation systems, production of high-power, high-stability single-frequency lasers, production of high-resolution positioning systems, etc. In this talk, we address these requirements and the methods to satisfy them. Optical diagnosis of large optical components requires a large phase-shifting interferometer; the 1.06 μm phase-shifting interferometer for testing LIGO optics and the recently built 24" phase-shifting interferometer in Chengdu, China, are examples. High-quality mirrors are crucial for laser-interferometric GW detection, as they are for ring laser gyroscopes, high-precision laser stabilization via optical cavities, quantum optomechanics, cavity quantum electrodynamics, and vacuum birefringence measurement. There are stringent requirements on the substrate materials and coating methods. For cryogenic GW interferometers, appropriate coatings on sapphire or silicon are required for good thermal properties and homogeneity. Large ultrahigh-vacuum components and highly attenuating vibration systems, together with an efficient metrology system, are required and will be addressed. For space interferometry, drag-free technology and weak-light manipulation technology are a must. Drag-free technology is well developed. 
Weak-light phase locking has been demonstrated in the laboratory, while weak-light manipulation technology still needs development.

  10. Control of large wind turbine generators connected to utility networks

    NASA Technical Reports Server (NTRS)

    Hinrichsen, E. N.

    1983-01-01

    This is an investigation of the control requirements for variable pitch wind turbine generators connected to electric power systems. The requirements include operation in very small as well as very large power systems. Control systems are developed for wind turbines with synchronous, induction, and doubly fed generators. Simulation results are presented. It is shown how wind turbines and power system controls can be integrated. A clear distinction is made between fast control of turbine torque, which is a peculiarity of wind turbines, and slow control of electric power, which is a traditional power system requirement.
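The fast/slow split described in this abstract can be illustrated with a toy two-timescale loop; the structure (a fast inner torque loop and a slow outer power loop), the gain, and the update rates below are illustrative assumptions, not values from the report:

```python
# Toy sketch of the two-timescale control split: fast torque control runs
# every step, while the slow power-system loop updates the setpoint only
# every 10 steps. Gains and rates are hypothetical.
def fast_torque_loop(measured_torque, setpoint, gain=0.5):
    # Fast inner loop: drive turbine torque toward its setpoint.
    return measured_torque + gain * (setpoint - measured_torque)

power_command = 1.0   # per-unit power requested by the utility network
torque_setpoint = 0.0
torque = 0.0

for step in range(50):
    if step % 10 == 0:
        # Slow outer loop: traditional power-system control of electric power.
        torque_setpoint = power_command
    torque = fast_torque_loop(torque, torque_setpoint)

print(round(torque, 3))  # converges toward the commanded power, 1.0
```

The point of the split is that the inner loop can react to wind-driven torque fluctuations on a much shorter timescale than the power command changes.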

  11. Large Deployable Reflector (LDR) feasibility study update

    NASA Technical Reports Server (NTRS)

    Alff, W. H.; Banderman, L. W.

    1983-01-01

    In 1982 a workshop was held to refine the science rationale for large deployable reflectors (LDR) and develop technology requirements that support the science rationale. At the end of the workshop, a set of LDR consensus systems requirements was established. The subject study was undertaken to update the initial LDR study using the new systems requirements. The study included mirror materials selection and configuration, thermal analysis, structural concept definition and analysis, dynamic control analysis and recommendations for further study. The primary emphasis was on the dynamic controls requirements and the sophistication of the controls system needed to meet LDR performance goals.

  12. Thermal control requirements for large space structures

    NASA Technical Reports Server (NTRS)

    Manoff, M.

    1978-01-01

Performance capabilities and weight requirements of large space structure systems will be significantly influenced by thermal response characteristics. Analyses have been performed to determine temperature levels and gradients for structural configurations and elemental concepts proposed for advanced system applications ranging from relatively small, low-power communication antennas to extremely large, high-power Satellite Power Systems (SPS). Results are presented for selected platform configurations, candidate strut elements, and potential mission environments. The analyses also incorporate material and surface optical property variation. The results illustrate many of the thermal problems which may be encountered in the development of these systems.

  13. ARIES: Acquisition of Requirements and Incremental Evolution of Specifications

    NASA Technical Reports Server (NTRS)

    Roberts, Nancy A.

    1993-01-01

    This paper describes a requirements/specification environment specifically designed for large-scale software systems. This environment is called ARIES (Acquisition of Requirements and Incremental Evolution of Specifications). ARIES provides assistance to requirements analysts for developing operational specifications of systems. This development begins with the acquisition of informal system requirements. The requirements are then formalized and gradually elaborated (transformed) into formal and complete specifications. ARIES provides guidance to the user in validating formal requirements by translating them into natural language representations and graphical diagrams. ARIES also provides ways of analyzing the specification to ensure that it is correct, e.g., testing the specification against a running simulation of the system to be built. Another important ARIES feature, especially when developing large systems, is the sharing and reuse of requirements knowledge. This leads to much less duplication of effort. ARIES combines all of its features in a single environment that makes the process of capturing a formal specification quicker and easier.

  14. 49 CFR 37.185 - Fleet accessibility requirement for OTRB fixed-route systems of large operators.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

Title 49, Transportation (2010-10-01): Fleet accessibility requirement for OTRB fixed-route systems of large operators. Section 37.185, Office of the Secretary of Transportation, Transportation Services for Individuals with Disabilities (ADA), Over-the-Road Buses (OTRBs), § 37...

  15. In-space production of large space systems from extraterrestrial materials: A program implementation model

    NASA Technical Reports Server (NTRS)

    Vontiesenhausen, G. F.

    1977-01-01

    A program implementation model is presented which covers the in-space construction of certain large space systems from extraterrestrial materials. The model includes descriptions of major program elements and subelements and their operational requirements and technology readiness requirements. It provides a structure for future analysis and development.

  16. Study of auxiliary propulsion requirements for large space systems, volume 2

    NASA Technical Reports Server (NTRS)

    Smith, W. W.; Machles, G. W.

    1983-01-01

    A range of single shuttle launched large space systems were identified and characterized including a NASTRAN and loading dynamics analysis. The disturbance environment, characterization of thrust level and APS mass requirements, and a study of APS/LSS interactions were analyzed. State-of-the-art capabilities for chemical and ion propulsion were compared with the generated propulsion requirements to assess the state-of-the-art limitations and benefits of enhancing current technology.

  17. The Large Synoptic Survey Telescope OCS and TCS models

    NASA Astrophysics Data System (ADS)

    Schumacher, German; Delgado, Francisco

    2010-07-01

    The Large Synoptic Survey Telescope (LSST) is a project envisioned as a system of systems with demanding science, technical, and operational requirements, that must perform as a fully integrated unit. The design and implementation of such a system poses big engineering challenges when performing requirements analysis, detailed interface definitions, operational modes and control strategy studies. The OMG System Modeling Language (SysML) has been selected as the framework for the systems engineering analysis and documentation for the LSST. Models for the overall system architecture and different observatory subsystems have been built describing requirements, structure, interfaces and behavior. In this paper we show the models for the Observatory Control System (OCS) and the Telescope Control System (TCS), and how this methodology has helped in the clarification of the design and requirements. In one common language, the relationships of the OCS, TCS, Camera and Data management subsystems are captured with models of the structure, behavior, requirements and the traceability between them.

  18. Advanced optical sensing and processing technologies for the distributed control of large flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Williams, G. M.; Fraser, J. C.

    1991-01-01

The objective was to examine state-of-the-art optical sensing and processing technology applied to controlling the motion of flexible spacecraft. Proposed large flexible space systems, such as optical telescopes and antennas, will require control over vast surfaces. Most likely, distributed control will be necessary, involving many sensors to accurately measure the surface. A similarly large number of actuators must act upon the system. The technical approach included reviewing proposed NASA missions to assess system needs and requirements. A candidate mission was chosen as a baseline study spacecraft for comparison of conventional and optical control components. Control system requirements of the baseline system were used to design both a control system containing current off-the-shelf components and a system utilizing electro-optical devices for sensing and processing. State-of-the-art surveys of conventional sensor, actuator, and processor technologies were performed. A technology development plan is presented that lays out a logical, effective way to develop and integrate these advancing technologies.

  19. Explicit solution techniques for impact with contact constraints

    NASA Technical Reports Server (NTRS)

    Mccarty, Robert E.

    1993-01-01

    Modern military aircraft transparency systems, windshields and canopies, are complex systems which must meet a large and rapidly growing number of requirements. Many of these transparency system requirements are conflicting, presenting difficult balances which must be achieved. One example of a challenging requirements balance or trade is shaping for stealth versus aircrew vision. The large number of requirements involved may be grouped in a variety of areas including man-machine interface; structural integration with the airframe; combat hazards; environmental exposures; and supportability. Some individual requirements by themselves pose very difficult, severely nonlinear analysis problems. One such complex problem is that associated with the dynamic structural response resulting from high energy bird impact. An improved analytical capability for soft-body impact simulation was developed.

  20. Explicit solution techniques for impact with contact constraints

    NASA Astrophysics Data System (ADS)

    McCarty, Robert E.

    1993-08-01

    Modern military aircraft transparency systems, windshields and canopies, are complex systems which must meet a large and rapidly growing number of requirements. Many of these transparency system requirements are conflicting, presenting difficult balances which must be achieved. One example of a challenging requirements balance or trade is shaping for stealth versus aircrew vision. The large number of requirements involved may be grouped in a variety of areas including man-machine interface; structural integration with the airframe; combat hazards; environmental exposures; and supportability. Some individual requirements by themselves pose very difficult, severely nonlinear analysis problems. One such complex problem is that associated with the dynamic structural response resulting from high energy bird impact. An improved analytical capability for soft-body impact simulation was developed.

  1. Cleared for Launch - Lessons Learned from the OSIRIS-REx System Requirements Verification Program

    NASA Technical Reports Server (NTRS)

    Stevens, Craig; Adams, Angela; Williams, Bradley; Goodloe, Colby

    2017-01-01

Requirements verification of a large flight system is a challenge. It is especially challenging for engineers taking on their first role in space systems engineering. This paper describes our approach to verification of the Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer (OSIRIS-REx) system requirements. It also captures lessons learned along the way from developing systems engineers embroiled in this process. We begin with an overview of the mission and science objectives as well as the project requirements verification program strategy. A description of the requirements flow down is presented, including our implementation for managing the thousands of program and element level requirements and associated verification data. We discuss both successes and methods to improve the management of this data across multiple organizational interfaces. Our approach to verifying system requirements at multiple levels of assembly is presented using examples from our work at instrument, spacecraft, and ground segment levels. We include a discussion of system end-to-end testing limitations and their impacts on the verification program. Finally, we describe lessons learned that are applicable to all emerging space systems engineers, drawing on our unique perspectives across multiple organizations of a large NASA program.

  2. A Segmented Ion-Propulsion Engine

    NASA Technical Reports Server (NTRS)

    Brophy, John R.

    1992-01-01

    New design approach for high-power (100-kW class or greater) ion engines conceptually divides single engine into combination of smaller discharge chambers integrated to operate as single large engine. Analogous to multicylinder automobile engine, benefits include reduction in required accelerator system span-to-gap ratio for large-area engines, reduction in required hollow-cathode emission current, mitigation of plasma-uniformity problem, increased tolerance to accelerator system faults, and reduction in vacuum-system pumping speed.

  3. Extravehicular Crewman Work System (ECWS) study program. Volume 2: Construction

    NASA Technical Reports Server (NTRS)

    Wilde, R. C.

    1980-01-01

    The construction portion of the Extravehicular Crewman Work System Study defines the requirements and selects the concepts for the crewman work system required to support the construction of large structures in space.

  4. Implementing Parquet equations using HPX

    NASA Astrophysics Data System (ADS)

    Kellar, Samuel; Wagle, Bibek; Yang, Shuxiang; Tam, Ka-Ming; Kaiser, Hartmut; Moreno, Juana; Jarrell, Mark

A new C++ runtime system (HPX) enables simulations of complex systems to run more efficiently on parallel and heterogeneous systems. This increased efficiency allows for solutions to larger simulations of the parquet approximation for a system with impurities. The relevancy of the parquet equations depends upon the ability to solve systems which require long runs and large amounts of memory. These limitations, in addition to numerical complications arising from stability of the solutions, necessitate running on large distributed systems. As computational resources trend towards the exascale and the limitations arising from computational resources vanish, the efficiency of large-scale simulations becomes a focus. HPX facilitates efficient simulations through intelligent overlapping of computation and communication. Simulations such as the parquet equations, which require the transfer of large amounts of data, should benefit from HPX implementations. Supported by the NSF EPSCoR Cooperative Agreement No. EPS-1003897 with additional support from the Louisiana Board of Regents.
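The "overlapping of computation and communication" that this abstract credits to HPX can be sketched generically; the pipeline below is a hypothetical illustration using Python futures rather than HPX itself, with simulated transfer and compute stages standing in for the parquet solver's data movement:

```python
# Minimal sketch (not HPX): launch the transfer of the next data block
# asynchronously while the current block is being computed, so
# communication latency is hidden behind computation.
from concurrent.futures import ThreadPoolExecutor
import time

def transfer_block(block):
    # Simulated communication; in HPX this would be an asynchronous
    # transfer of vertex-function data between localities.
    time.sleep(0.01)
    return block

def compute_block(block):
    # Simulated local computation on one block of the problem.
    return [x * x for x in block]

def overlapped(blocks):
    """Prefetch block i+1 while computing block i."""
    results = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        incoming = pool.submit(transfer_block, blocks[0])
        for i in range(len(blocks)):
            block = incoming.result()                       # wait for data
            if i + 1 < len(blocks):
                incoming = pool.submit(transfer_block, blocks[i + 1])
            results.append(compute_block(block))            # overlaps transfer
    return results

print(overlapped([[1, 2], [3, 4]]))  # [[1, 4], [9, 16]]
```

With the transfer of each block hidden behind the computation of the previous one, total wall time approaches max(compute, communicate) per block rather than their sum.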

  5. Communication architecture for large geostationary platforms

    NASA Technical Reports Server (NTRS)

    Bond, F. E.

    1979-01-01

Large platforms have been proposed for supporting multipurpose communication payloads to exploit economy of scale, reduce congestion in the geostationary orbit, provide interconnectivity between diverse earth stations, and obtain significant frequency reuse with large multibeam antennas. This paper addresses a specific system design, starting with traffic projections in the next two decades and discussing tradeoffs and design approaches for major components including antennas, transponders, and switches. Other issues explored are selection of frequency bands, modulation, multiple access, switching methods, and techniques for servicing areas with nonuniform traffic demands. Three major services are considered: a high-volume trunking system, a direct-to-user system, and a broadcast system for video distribution and similar functions. Estimates of payload weight and d.c. power requirements are presented. Other subjects treated are: considerations of equipment layout for servicing by an orbit transfer vehicle, mechanical stability requirements for the large antennas, and reliability aspects of the large number of transponders employed.

  6. Policy Driven Development: Flexible Policy Insertion for Large Scale Systems.

    PubMed

    Demchak, Barry; Krüger, Ingolf

    2012-07-01

The success of a software system depends critically on how well it reflects and adapts to stakeholder requirements. Traditional development methods often frustrate stakeholders by creating long latencies between requirement articulation and system deployment, especially in large scale systems. One source of latency is the maintenance of policy decisions encoded directly into system workflows at development time, including those involving access control and feature set selection. We created the Policy Driven Development (PDD) methodology to address these development latencies by enabling the flexible injection of decision points into existing workflows at runtime, thus enabling policy composition that integrates requirements furnished by multiple, oblivious stakeholder groups. Using PDD, we designed and implemented a production cyberinfrastructure that demonstrates policy and workflow injection that quickly implements stakeholder requirements, including features not contemplated in the original system design. PDD provides a path to quickly and cost-effectively evolve such applications over a long lifetime.
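The core PDD idea, injecting decision points into an existing workflow at runtime so that oblivious stakeholder groups can compose policies, can be sketched as follows; the registry, decorator, and the "access" decision point are hypothetical names invented for illustration, not the paper's API:

```python
# Hypothetical sketch of runtime policy injection in the spirit of PDD:
# policies are registered against named decision points and composed
# into a workflow step without editing the step itself.
policies = {}  # decision-point name -> list of policy functions

def register_policy(point, fn):
    policies.setdefault(point, []).append(fn)

def decision_point(point):
    """Wrap a workflow step so registered policies can veto or alter it."""
    def wrap(step):
        def run(ctx):
            for policy in policies.get(point, []):
                ctx = policy(ctx)
                if ctx.get("denied"):
                    return ctx          # a policy vetoed the step
            return step(ctx)
        return run
    return wrap

@decision_point("access")
def fetch_record(ctx):
    ctx["record"] = f"data for {ctx['user']}"
    return ctx

# A stakeholder group injects an access-control policy at runtime,
# oblivious to how fetch_record is implemented.
register_policy("access", lambda ctx: {**ctx, "denied": ctx["user"] == "guest"})

print(fetch_record({"user": "guest"})["denied"])    # True
print(fetch_record({"user": "alice"})["record"])    # data for alice
```

Because policies attach to named decision points rather than to code, new stakeholder requirements can be composed in after deployment, which is the latency reduction the abstract describes.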

  7. Small Knowledge-Based Systems in Education and Training: Something New Under the Sun.

    ERIC Educational Resources Information Center

    Wilson, Brent G.; Welsh, Jack R.

    1986-01-01

    Discusses artificial intelligence, robotics, natural language processing, and expert or knowledge-based systems research; examines two large expert systems, MYCIN and XCON; and reviews the resources required to build large expert systems and affordable smaller systems (intelligent job aids) for training. Expert system vendors and products are…

  8. Requirements for a mobile communications satellite system. Volume 3: Large space structures measurements study

    NASA Technical Reports Server (NTRS)

    Akle, W.

    1983-01-01

This study report defines a set of tests and measurements required to characterize the performance of a Large Space System (LSS), and to scale this data to other LSS satellites. Requirements from the Mobile Communication Satellite (MSAT) configurations derived in the parent study were used. MSAT utilizes a large, mesh deployable antenna, and encompasses a significant range of LSS technology issues in the areas of structures/dynamics, control, and performance predictability. In this study, performance requirements were developed for the antenna. Special emphasis was placed on antenna surface accuracy and pointing stability. Instrumentation and measurement systems applicable to LSS were selected from existing or ongoing technology developments. Laser ranging and angulation systems, presently in breadboard status, form the backbone of the measurements. Following this, a set of ground, STS, and GEO-operational tests was investigated. A third-scale (15 meter) antenna system was selected for ground characterization followed by STS flight technology development. This selection ensures analytical scaling from ground to orbit, as well as size scaling. Other benefits are cost and the ability to perform reasonable ground tests. Detailed costing of the various tests and measurement systems was derived and is included in the report.

  9. Access control and privacy in large distributed systems

    NASA Technical Reports Server (NTRS)

    Leiner, B. M.; Bishop, M.

    1986-01-01

Large scale distributed systems consist of workstations, mainframe computers, supercomputers, and other types of servers, all connected by a computer network. These systems are being used in a variety of applications, including the support of collaborative scientific research. In such an environment, issues of access control and privacy arise. Access control is required for several reasons, including the protection of sensitive resources and cost control. Privacy is also required for similar reasons, including the protection of a researcher's proprietary results. A possible architecture for integrating available computer and communications security technologies into a system that meets these requirements is described. This architecture is meant as a starting point for discussion, rather than as the final answer.

  10. Design considerations for implementation of large scale automatic meter reading systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mak, S.; Radford, D.

    1995-01-01

This paper discusses the requirements imposed on the design of an AMR system expected to serve a large (> 1 million) customer base spread over a large geographical area. Issues such as system throughput, response time, and multi-application expandability are addressed, all of which are intimately dependent on the underlying communication system infrastructure, the local geography, the customer base, and the regulatory environment. A methodology for analysis, assessment, and design of large systems is presented. For illustration, two communication systems -- a low power RF/PLC system and a power frequency carrier system -- are analyzed and discussed.

  11. Industry/government seminar on Large Space systems technology: Executive summary

    NASA Technical Reports Server (NTRS)

    Scala, S. M.

    1978-01-01

    The critical technology developments which the participating experts recommend as being required to support the early generation large space systems envisioned as space missions during the years 1985-2000 are summarized.

  12. Primary propulsion/large space system interactions

    NASA Technical Reports Server (NTRS)

    Dergance, R. H.

    1980-01-01

    Three generic types of structural concepts and nonstructural surface densities were selected and combined to represent potential LSS applications. The design characteristics of various classes of large space systems that are impacted by primary propulsion thrust required to effect orbit transfer were identified. The effects of propulsion system thrust-to-mass ratio, thrust transients, and performance on the mass, area, and orbit transfer characteristics of large space systems were determined.

  13. Access Control Management for SCADA Systems

    NASA Astrophysics Data System (ADS)

    Hong, Seng-Phil; Ahn, Gail-Joon; Xu, Wenjuan

    The information technology revolution has transformed all aspects of our society, including critical infrastructures, and has led to a significant shift from their old and disparate business models based on proprietary and legacy environments to more open and consolidated ones. Supervisory Control and Data Acquisition (SCADA) systems have been widely used not only for industrial processes but also for some experimental facilities. Due to the nature of open environments, managing SCADA systems must meet various security requirements, since system administrators need to deal with a large number of entities and functions involved in critical infrastructures. In this paper, we identify necessary access control requirements in SCADA systems and articulate access control policies for simulated SCADA systems. We also attempt to analyze and realize those requirements and policies in the context of role-based access control, which is suitable for simplifying administrative tasks in large-scale enterprises.
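    The role-based model the paper builds on can be sketched in a few lines. The roles, users, and permissions below are invented for illustration; the paper's actual SCADA policies are not reproduced here.

```python
# Minimal role-based access control (RBAC) sketch in the spirit of the
# SCADA discussion above. Roles, users, and permissions are invented
# examples, not taken from the paper.

roles = {
    "operator": {("read", "sensor"), ("write", "setpoint")},
    "engineer": {("read", "sensor"), ("write", "setpoint"), ("update", "plc_logic")},
    "auditor":  {("read", "sensor"), ("read", "audit_log")},
}

user_roles = {"alice": {"operator"}, "bob": {"auditor"}}

def permitted(user, action, resource):
    """A user is permitted an action if any assigned role grants it."""
    return any((action, resource) in roles[r] for r in user_roles.get(user, ()))

assert permitted("alice", "write", "setpoint")       # operator may adjust setpoints
assert not permitted("bob", "update", "plc_logic")   # auditor may not modify PLC logic
```

    The administrative simplification the paper cites comes from this indirection: adding a user or revoking access touches only the user-to-role assignment, never the per-resource permission sets.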

  14. System Engineering of Autonomous Space Vehicles

    NASA Technical Reports Server (NTRS)

    Watson, Michael D.; Johnson, Stephen B.; Trevino, Luis

    2014-01-01

    Human exploration of the solar system requires fully autonomous systems when travelling more than 5 light minutes from Earth. This autonomy is necessary to manage a large, complex spacecraft with limited crew members and skills available. The communication latency requires the vehicle to deal with events with only limited crew interaction in most cases. The engineering of these systems requires an extensive knowledge of the spacecraft systems, information theory, and autonomous algorithm characteristics. The characteristics of the spacecraft systems must be matched with the autonomous algorithm characteristics to reliably monitor and control the system. This presents a large system engineering problem. Recent work on product-focused, elegant system engineering will be applied to this application, looking at the full autonomy stack, the matching of autonomous systems to spacecraft systems, and the integration of different types of algorithms. Each of these areas will be outlined and a general approach defined for system engineering to provide the optimal solution to the given application context.

  15. Radiometer requirements for Earth-observation systems using large space antennas

    NASA Technical Reports Server (NTRS)

    Keafer, L. S., Jr.; Harrington, R. F.

    1983-01-01

    Requirements are defined for Earth observation microwave radiometry in the 1990s using large space antenna (LSA) systems with apertures in the range from 50 to 200 m. General Earth observation needs, specific measurement requirements, orbit mission guidelines and constraints, and general radiometer requirements are defined. General Earth observation needs are derived from NASA's basic space science program. Specific measurands include soil moisture, sea surface temperature, salinity, water roughness, ice boundaries, and water pollutants. Measurements are required with spatial resolutions from 10 km down to 1 km and with temporal resolutions from 3 days down to 1 day. The primary orbit altitude and inclination ranges are 450 to 2200 km and 60 to 98 deg, respectively. Contiguous large-scale coverage of several land and ocean areas over the globe dictates large (several hundred kilometer) swaths. Radiometer measurements are made at frequencies from 1 to 37 GHz, preferably with dual-polarization radiometers with a minimum of 90 percent beam efficiency. Reflector surface root-mean-square deviation tolerances range from 1/30 to 1/100 of a wavelength.
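    The aperture, frequency, and altitude ranges in this abstract are linked by simple diffraction arithmetic, sketched below. The specific values chosen (L-band, 100 m aperture, 600 km altitude) are illustrative picks from within the stated ranges, not requirements from the paper.

```python
# Rough diffraction-limited footprint estimate connecting the abstract's
# aperture (50-200 m), frequency (1-37 GHz), and altitude (450-2200 km)
# ranges. The specific values below are illustrative, not from the paper.

c = 3.0e8          # speed of light, m/s
freq_hz = 1.4e9    # assumed L-band channel, within the 1-37 GHz range
aperture_m = 100.0 # assumed aperture, within the 50-200 m LSA range
altitude_m = 600e3 # assumed altitude, within the 450-2200 km orbit range

wavelength = c / freq_hz                 # ~0.21 m
beamwidth_rad = wavelength / aperture_m  # lambda/D beamwidth approximation
footprint_m = beamwidth_rad * altitude_m # ground resolution

print(f"footprint ~ {footprint_m / 1000:.1f} km")
```

    A 100 m aperture at L-band from 600 km yields a footprint near 1.3 km, consistent with the 1 to 10 km spatial resolutions quoted above; soil-moisture sensing at the low end of the band is what drives apertures toward 200 m.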

  16. Large, horizontal-axis wind turbines

    NASA Technical Reports Server (NTRS)

    Linscott, B. S.; Perkins, P.; Dennett, J. T.

    1984-01-01

    The development of technology for safe, reliable, environmentally acceptable large wind turbines that have the potential to generate a significant amount of electricity at costs competitive with conventional electric generating systems is presented. In addition, these large wind turbines must be fully compatible with electric utility operations and interface requirements. Several ongoing large wind system development projects and applied research efforts are directed toward meeting the technology requirements for utility applications; detailed information on these projects is provided. The Mod-0 research facility and current applied research efforts in aerodynamics, structural dynamics and aeroelasticity, composite and hybrid composite materials, and multiple-system interaction are described. A chronology of component research and technology development for large, horizontal-axis wind turbines is presented. Wind characteristics, wind turbine economics, and the impact of wind turbines on the environment are reported. The need for continued wind turbine research and technology development is explored. Over 40 references are cited and a bibliography is included.

  17. Spacecraft radiators for advanced mission requirements

    NASA Technical Reports Server (NTRS)

    Leach, J. W.

    1980-01-01

    Design requirements for spacecraft heat rejection systems are identified, and their impact on the construction of conventional pumped-fluid and hybrid heat pipe/pumped-fluid radiators is evaluated. Heat rejection systems that improve the performance or reduce the cost of the spacecraft are proposed. Heat rejection requirements that are large compared with those of existing systems, and mission durations that are relatively long, are discussed.

  18. Adaptive structures for precision controlled large space systems

    NASA Technical Reports Server (NTRS)

    Garba, John A.; Wada, Ben K.; Fanson, James L.

    1991-01-01

    The stringent accuracy and ground test validation requirements of some future space missions will require new approaches in structural design. Adaptive structures, structural systems that can vary their geometric configuration as well as their physical properties, are primary candidates for meeting the functional requirements of such missions. Research performed in the development of such adaptive structural systems is described.

  19. Nonterrestrial material processing and manufacturing of large space systems

    NASA Technical Reports Server (NTRS)

    Von Tiesenhausen, G.

    1979-01-01

    Nonterrestrial processing of materials and manufacturing of large space system components from preprocessed lunar materials at a manufacturing site in space is described. Lunar materials mined and preprocessed at the lunar resource complex will be flown to the space manufacturing facility (SMF), where together with supplementary terrestrial materials, they will be final processed and fabricated into space communication systems, solar cell blankets, radio frequency generators, and electrical equipment. Satellite Power System (SPS) material requirements and lunar material availability and utilization are detailed, and the SMF processing, refining, fabricating facilities, material flow and manpower requirements are described.

  20. Test facilities for high power electric propulsion

    NASA Technical Reports Server (NTRS)

    Sovey, James S.; Vetrone, Robert H.; Grisnik, Stanley P.; Myers, Roger M.; Parkes, James E.

    1991-01-01

    Electric propulsion has applications for orbit raising, maneuvering of large space systems, and interplanetary missions. These missions involve propulsion power levels from tenths to tens of megawatts, depending upon the application. General facility requirements for testing high power electric propulsion at the component and thrust systems level are defined. The characteristics and pumping capabilities of many large vacuum chambers in the United States are reviewed and compared with the requirements for high power electric propulsion testing.

  1. Proceedings of the Workshop on Applications of Distributed System Theory to the Control of Large Space Structures

    NASA Technical Reports Server (NTRS)

    Rodriguez, G. (Editor)

    1983-01-01

    Two general themes in the control of large space structures are addressed: control theory for distributed parameter systems and distributed control for systems requiring spatially-distributed multipoint sensing and actuation. Topics include modeling and control, stabilization, and estimation and identification.

  2. Managing System of Systems Requirements with a Requirements Screening Group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald R. Barden

    2012-07-01

    Figuring out an effective and efficient way to manage not only your Requirements Baseline, but also the development of all your individual requirements during a Program's/Project's Conceptual and Development Life Cycle Stages, can be both daunting and difficult. This is especially so when you are dealing with a complex and large System of Systems (SoS) Program with potentially thousands of Top Level Requirements as well as an equal number of lower level System, Subsystem, and Configuration Item requirements that need to be managed. The task is made even more overwhelming when you have to integrate with multiple requirements development teams (e.g., Integrated Product Development Teams (IPTs)) and/or numerous System/Subsystem Design Teams. One solution for tackling this difficult activity on a recent large System of Systems Program was to develop and make use of a Requirements Screening Group (RSG). This group is essentially a team of co-chairs from the various Stakeholders with an interest in the Program of record who are enabled and accountable for Requirements Development on the Program/Project. The RSG co-chairs, often with the help of individual support teams, work together as a Program Board to monitor, make decisions on, and provide guidance on all Requirements Development activities during the Conceptual and Development Life Cycle Stages of a Program/Project. In addition, the RSG can establish and maintain the Requirements Baseline, monitor and enforce requirements traceability across the entire Program, and work with other elements of the Program/Project to ensure integration and coordination.

  3. Development and analysis of SCR requirements tables for system scenarios

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Morrison, Jeffery L.

    1995-01-01

    We describe the use of scenarios to develop and refine requirements tables for parts of the Earth Observing System Data and Information System (EOSDIS). The National Aeronautics and Space Administration (NASA) is developing EOSDIS as part of its Mission to Planet Earth (MTPE) project to accept instrument/platform observation requests from end-user scientists, schedule and perform requested observations of the Earth from space, collect and process the observed data, and distribute data to scientists and archives. Current requirements for the system are managed with tools that allow developers to trace the relationships between requirements and other development artifacts, including other requirements. In addition, the user community (e.g., earth and atmospheric scientists), in conjunction with NASA, has generated scenarios describing the actions of EOSDIS subsystems in response to user requests and other system activities. As part of a research effort in verification and validation techniques, this paper describes our efforts to develop requirements tables from these scenarios for the EOSDIS Core System (ECS). The tables specify event-driven mode transitions based on techniques developed by the Naval Research Laboratory's (NRL) Software Cost Reduction (SCR) project. The SCR approach has proven effective in specifying requirements for large systems in an unambiguous, terse format that enhances identification of incomplete and inconsistent requirements. We describe the development of SCR tables from user scenarios and identify the strengths and weaknesses of our approach in contrast to the requirements tracing approach. We also evaluate the capability of both approaches to respond to the volatility of requirements in large, complex systems.
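    The flavor of an SCR-style mode-transition table, and the mechanical completeness check such tables enable, can be sketched as follows. The modes and events below are hypothetical stand-ins, not the actual EOSDIS/ECS tables.

```python
# Sketch of an SCR-style event-driven mode-transition table, with a
# simple check for the kind of incompleteness the tabular format helps
# expose. Modes and events are hypothetical, not taken from EOSDIS.

# (current_mode, triggering_event) -> next_mode
transitions = {
    ("Idle",       "@T(request_received)"):  "Scheduling",
    ("Scheduling", "@T(observation_start)"): "Observing",
    ("Observing",  "@T(observation_done)"):  "Processing",
    ("Processing", "@T(data_archived)"):     "Idle",
}

modes = {m for m, _ in transitions} | set(transitions.values())
events = {e for _, e in transitions}

# Completeness: enumerate (mode, event) pairs with no defined behavior,
# so each can be reviewed as either intentionally impossible or missing.
undefined = [(m, e) for m in modes for e in events if (m, e) not in transitions]
print(f"{len(undefined)} undefined (mode, event) pairs to review")
```

    Because the table is exhaustive by construction, every undefined pair surfaces as an explicit review item, which is the property the abstract credits for catching incomplete and inconsistent requirements.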

  4. Support of an Active Science Project by a Large Information System: Lessons for the EOS Era

    NASA Technical Reports Server (NTRS)

    Angelici, Gary L.; Skiles, J. W.; Popovici, Lidia Z.

    1993-01-01

    The ability of large information systems to support the changing data requirements of active science projects is being tested in a NASA collaborative study. This paper briefly profiles both the active science project and the large information system involved in this effort and offers some observations about the effectiveness of the project support. This is followed by lessons that are important for those participating in large information systems that need to support active science projects or that make available the valuable data produced by these projects. We learned in this work that it is difficult for a large information system focused on long term data management to satisfy the requirements of an on-going science project. For example, in order to provide the best service, it is important for all information system staff to keep focused on the needs and constraints of the scientists in the development of appropriate services. If the lessons learned in this and other science support experiences are not applied by those involved with large information systems of the EOS (Earth Observing System) era, then the final data products produced by future science projects may not be robust or of high quality, thereby making the conduct of the project science less efficacious and reducing the value of these unique suites of data for future research.

  5. Active large structures

    NASA Technical Reports Server (NTRS)

    Soosaar, K.

    1982-01-01

    Some performance requirements and development needs for the design of large space structures are described. Areas of study include: (1) dynamic response of large space structures; (2) structural control and systems integration; (3) attitude control; and (4) large optics and flexibility. Reference is made to a large space telescope.

  6. Knowledge representation by connection matrices: A method for the on-board implementation of large expert systems

    NASA Technical Reports Server (NTRS)

    Kellner, A.

    1987-01-01

    Extremely large knowledge sources and efficient knowledge access characterizing future real-life artificial intelligence applications represent crucial requirements for on-board artificial intelligence systems due to obvious computer time and storage constraints on spacecraft. A type of knowledge representation and corresponding reasoning mechanism is proposed which is particularly suited for the efficient processing of such large knowledge bases in expert systems.

  7. Control of Flexible Structures (COFS) Flight Experiment Background and Description

    NASA Technical Reports Server (NTRS)

    Hanks, B. R.

    1985-01-01

    A fundamental problem in designing and delivering large space structures to orbit is to provide sufficient structural stiffness and static configuration precision to meet performance requirements. These requirements are directly related to control requirements and the degree of control system sophistication available to supplement the as-built structure. Background and rationale are presented for a research study in structures, structural dynamics, and controls using a relatively large, flexible beam as a focus. This experiment would address fundamental problems applicable to large, flexible space structures in general and would involve a combination of ground tests, flight behavior prediction, and instrumented orbital tests. Intended to be multidisciplinary but basic within each discipline, the experiment should provide improved understanding and confidence in making design trades between structural conservatism and control system sophistication for meeting static shape and dynamic response/stability requirements. Quantitative results should be obtained for use in improving the validity of ground tests for verifying flight performance analyses.

  8. Damping characterization in large structures

    NASA Technical Reports Server (NTRS)

    Eke, Fidelis O.; Eke, Estelle M.

    1991-01-01

    This research project has as its main goal the development of methods for selecting the damping characteristics of components of a large structure or multibody system, in such a way as to produce some desired system damping characteristics. The main need for such an analytical device is in the simulation of the dynamics of multibody systems consisting, at least partially, of flexible components. The reason for this need is that all existing simulation codes for multibody systems require component-by-component characterization of complex systems, whereas requirements (including damping) often appear at the overall system level. The main goal was met in large part by the development of a method that will in fact synthesize component damping matrices from a given system damping matrix. The restrictions to the method are that the desired system damping matrix must be diagonal (which is almost always the case) and that interbody connections must be by simple hinges. In addition to the technical outcome, this project contributed positively to the educational and research infrastructure of Tuskegee University - a Historically Black Institution.

  9. Impact of large field angles on the requirements for deformable mirror in imaging satellites

    NASA Astrophysics Data System (ADS)

    Kim, Jae Jun; Mueller, Mark; Martinez, Ty; Agrawal, Brij

    2018-04-01

    For certain imaging satellite missions, a large aperture with wide field-of-view is needed. In order to achieve diffraction-limited performance, the mirror surface Root Mean Square (RMS) error has to be less than 0.05 waves; in the case of visible light, less than 30 nm. This requirement is difficult to meet because the large aperture will need to be segmented in order to fit inside a launch vehicle shroud. To relax this requirement and to compensate for the residual wavefront error, Micro-Electro-Mechanical System (MEMS) deformable mirrors can be considered in the aft optics of the optical system. MEMS deformable mirrors are affordable and consume little power, but are small in size. Due to the major reduction in pupil size at the deformable mirror, the effective field angle is magnified by the diameter ratio of the primary and deformable mirrors. For wide field-of-view imaging, the required deformable mirror correction is therefore field-angle dependent, impacting required deformable mirror parameters such as size, number of actuators, and actuator stroke. In this paper, a representative telescope and deformable mirror system model is developed and the deformable mirror correction is simulated to study the impact of large field angles on correcting wavefront error with a deformable mirror in the aft optics.
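    The field-angle magnification the abstract describes is a direct consequence of pupil demagnification, as the small calculation below shows. The aperture sizes and field angle are hypothetical, not values from the paper.

```python
# Effective field angle at the deformable mirror scales with the pupil
# demagnification (conservation of etendue). The values below are
# illustrative assumptions, not taken from the paper.

D_primary = 1.0        # assumed primary mirror diameter, m
D_dm = 0.01            # assumed MEMS deformable mirror diameter, m (10 mm)
field_angle_deg = 0.1  # assumed field angle at the telescope entrance

magnification = D_primary / D_dm                     # pupil diameter ratio
effective_angle_deg = field_angle_deg * magnification

print(f"{effective_angle_deg:.1f} deg effective angle at the deformable mirror")
```

    A modest 0.1 deg field angle becomes 10 deg at a 100x smaller pupil, which is why beam walk across the small deformable mirror drives its size, actuator count, and stroke requirements.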

  10. Exploration Planetary Surface Structural Systems: Design Requirements and Compliance

    NASA Technical Reports Server (NTRS)

    Dorsey, John T.

    2011-01-01

    The Lunar Surface Systems Project developed system concepts that would be necessary to establish and maintain a permanent human presence on the Lunar surface. A variety of specific system implementations were generated as a part of the scenarios, some level of system definition was completed, and masses estimated for each system. Because the architecture studies generally spawned a large number of system concepts and the studies were executed in a short amount of time, the resulting system definitions had very low design fidelity. This paper describes the development sequence required to field a particular structural system: 1) Define Requirements, 2) Develop the Design and 3) Demonstrate Compliance of the Design to all Requirements. This paper also outlines and describes in detail the information and data that are required to establish structural design requirements and outlines the information that would comprise a planetary surface system Structures Requirements document.

  11. Earth observing system instrument pointing control modeling for polar orbiting platforms

    NASA Technical Reports Server (NTRS)

    Briggs, H. C.; Kia, T.; Mccabe, S. A.; Bell, C. E.

    1987-01-01

    An approach to instrument pointing control performance assessment for large multi-instrument platforms is described. First, instrument pointing requirements and reference platform control systems for the Eos Polar Platforms are reviewed. Performance modeling tools including NASTRAN models of two large platforms, a modal selection procedure utilizing a balanced realization method, and reduced order platform models with core and instrument pointing control loops added are then described. Time history simulations of instrument pointing and stability performance in response to commanded slewing of adjacent instruments demonstrates the limits of tolerable slew activity. Simplified models of rigid body responses are also developed for comparison. Instrument pointing control methods required in addition to the core platform control system to meet instrument pointing requirements are considered.

  12. Low-thrust chemical orbit transfer propulsion

    NASA Technical Reports Server (NTRS)

    Pelouch, J. J., Jr.

    1979-01-01

    The need for large structures in high orbit is reported in terms of the many mission opportunities which require such structures. Mission and transportation options for large structures are presented, and it is shown that low-thrust propulsion is an enabling requirement for some missions and greatly enhancing to many others. Electric and low-thrust chemical propulsion are compared, and the need for and requirements of low-thrust chemical propulsion are discussed in terms of the interactions that are perceived to exist between the propulsion system and the large structure.

  13. Reaction factoring and bipartite update graphs accelerate the Gillespie Algorithm for large-scale biochemical systems.

    PubMed

    Indurkhya, Sagar; Beal, Jacob

    2010-01-06

    ODE simulations of chemical systems perform poorly when some of the species have extremely low concentrations. Stochastic simulation methods, which can handle this case, have been impractical for large systems due to computational complexity. We observe, however, that when modeling complex biological systems: (1) a small number of reactions tend to occur a disproportionately large percentage of the time, and (2) a small number of species tend to participate in a disproportionately large percentage of reactions. We exploit these properties in LOLCAT Method, a new implementation of the Gillespie Algorithm. First, factoring reaction propensities allows many propensities dependent on a single species to be updated in a single operation. Second, representing dependencies between reactions with a bipartite graph of reactions and species requires substantially less storage than a dependency graph that includes only reactions. Together, these improvements allow our implementation of LOLCAT Method to execute orders of magnitude faster than currently existing Gillespie Algorithm variants when simulating several yeast MAPK cascade models.
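    The core ideas here, exact stochastic simulation plus a species-to-reaction dependency map so that only affected propensities are recomputed after each firing, can be sketched in a toy model. The two-reaction system below is an invented example, not one of the paper's MAPK models, and this sketch omits the propensity factoring that LOLCAT Method adds on top.

```python
# Minimal Gillespie SSA sketch with a bipartite species->reaction
# dependency map: after a reaction fires, only propensities of
# reactions touching a changed species are recomputed. The toy
# two-reaction system is illustrative, not from the paper.
import random

random.seed(1)
x = {"A": 100, "B": 0}  # species counts
reactions = {           # name -> (rate constant, reactant list, state change)
    "A->B": (1.0, ["A"], {"A": -1, "B": +1}),
    "B->A": (0.5, ["B"], {"A": +1, "B": -1}),
}
# Bipartite dependency: species -> reactions whose propensity uses it
depends = {"A": ["A->B"], "B": ["B->A"]}

def propensity(name):
    rate, reactants, _ = reactions[name]
    p = rate
    for s in reactants:
        p *= x[s]
    return p

a = {name: propensity(name) for name in reactions}
t = 0.0
for _ in range(1000):
    total = sum(a.values())
    if total == 0:
        break
    t += random.expovariate(total)   # exponential time to next event
    r = random.uniform(0, total)     # choose a reaction weighted by propensity
    for name, prop in a.items():
        r -= prop
        if r <= 0:
            break
    for s, d in reactions[name][2].items():
        x[s] += d                    # apply the state change
    for s in reactions[name][2]:     # update only dependent propensities
        for dep in depends[s]:
            a[dep] = propensity(dep)

print(t, x)
```

    In this toy model both reactions conserve A + B, so the total count stays at 100 throughout; the payoff of the dependency map only appears at scale, where each firing touches a handful of the thousands of reactions.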

  15. Identifiability of conservative linear mechanical systems. [applied to large flexible spacecraft structures

    NASA Technical Reports Server (NTRS)

    Sirlin, S. W.; Longman, R. W.; Juang, J. N.

    1985-01-01

    With a sufficiently great number of sensors and actuators, any finite dimensional dynamic system is identifiable on the basis of input-output data. It is presently indicated that, for conservative nongyroscopic linear mechanical systems, the number of sensors and actuators required for identifiability is very large, where 'identifiability' is understood as a unique determination of the mass and stiffness matrices. The required number of sensors and actuators drops by a factor of two, given a relaxation of the identifiability criterion so that identification can fail only if the system parameters being identified lie in a set of measure zero. When the mass matrix is known a priori, this additional information does not significantly affect the requirements for guaranteed identifiability, though the number of parameters to be determined is reduced by a factor of two.

  16. Multiresource inventories incorporating GIS, GPS, and database management systems

    Treesearch

    Loukas G. Arvanitis; Balaji Ramachandran; Daniel P. Brackett; Hesham Abd-El Rasol; Xuesong Du

    2000-01-01

    Large-scale natural resource inventories generate enormous data sets. Their effective handling requires a sophisticated database management system. Such a system must be robust enough to efficiently store large amounts of data and flexible enough to allow users to manipulate a wide variety of information. In a pilot project, related to a multiresource inventory of the...

  17. Get It Together

    ERIC Educational Resources Information Center

    Coffey, Dave

    2006-01-01

    The scale of the mechanical and plumbing systems required to support a large, multi-building academic health sciences/research center entails a lot of ductwork. Getting mechanical systems installed and running while carrying out activities from other building disciplines requires a great deal of coordinated effort. A university and its…

  18. Tackling the challenges of fully immersive head-mounted AR devices

    NASA Astrophysics Data System (ADS)

    Singer, Wolfgang; Hillenbrand, Matthias; Münz, Holger

    2017-11-01

    The optical requirements of fully immersive head-mounted AR devices are inherently determined by the human visual system. The etendue of the visual system is large; as a consequence, the requirements for fully immersive head-mounted AR devices exceed those of almost any high-end optical system. Two promising solutions for achieving the large etendue, and their challenges, are discussed. Head-mounted augmented reality devices have been developed for decades, mostly for application within aircraft and in combination with a heavy and bulky helmet. The established head-up displays for automotive applications typically utilize similar techniques. Recently, there is the vision of eyeglasses with included augmentation, offering a large field of view and being unobtrusively all-day wearable. There seems to be no simple solution that reaches the functional performance requirements: known technical solution paths seem to be dead ends, while some offer promising perspectives, though with severe limitations. As an alternative, unobtrusively all-day wearable devices with a significantly smaller field of view are already possible.

  19. Shape accuracy requirements on starshades for large and small apertures

    NASA Astrophysics Data System (ADS)

    Shaklan, Stuart B.; Marchen, Luis; Cady, Eric

    2017-09-01

    Starshades have been designed to work with large and small telescopes alike. With smaller telescopes, the targets tend to be brighter and closer to the Solar System, and their putative planetary systems span angles that require starshades with radii of 10-30 m at distances of 10s of Mm. With larger apertures, the light-collecting power enables studies of more numerous, fainter systems, requiring larger, more distant starshades with radii >50 m at distances of 100s of Mm. Characterization using infrared wavelengths requires even larger starshades. A mitigating approach is to observe planets between the petals, where one can observe regions closer to the star but with reduced throughput and increased instrument scatter. We compare the starshade shape requirements, including petal shape, petal positioning, and other key terms, for the WFIRST 26 m starshade and the HABEX 72 m starshade concepts, over a range of working angles and telescope sizes. We also compare starshades having rippled and smooth edges and show that their performance is nearly identical.

  20. Testing and validation of computerized decision support systems.

    PubMed

    Sailors, R M; East, T D; Wallace, C J; Carlson, D A; Franklin, M A; Heermann, L K; Kinder, A T; Bradshaw, R L; Randolph, A G; Morris, A H

    1996-01-01

    Systematic, thorough testing of decision support systems (DSSs) prior to release to general users is a critical aspect of high-quality software design. Omission of this step may lead to the dangerous, and potentially fatal, condition of relying on a system with outputs of uncertain quality. Thorough testing requires a great deal of effort and is difficult because the tools necessary to facilitate testing are not well developed. Testing is a job ill-suited to humans because it requires tireless attention to a large number of details. For these reasons, the majority of DSSs available are probably not well tested prior to release. We have successfully implemented a software design and testing plan which has helped us meet our goal of continuously improving the quality of our DSS software prior to release. While requiring a large amount of effort, we feel that documenting and standardizing our testing methods are important steps toward meeting recognized national and international quality standards. Our testing methodology includes both functional and structural testing and requires input from all levels of development. Our system does not focus solely on meeting design requirements but also addresses the robustness of the system and the completeness of testing.

  1. Intelligent redundant actuation system requirements and preliminary system design

    NASA Technical Reports Server (NTRS)

    Defeo, P.; Geiger, L. J.; Harris, J.

    1985-01-01

    Several redundant actuation system configurations were designed and demonstrated to satisfy the stringent operational requirements of advanced flight control systems. However, this has been accomplished largely through brute force hardware redundancy, resulting in significantly increased computational requirements on the flight control computers which perform the failure analysis and reconfiguration management. Modern technology now provides powerful, low-cost microprocessors which are effective in performing failure isolation and configuration management at the local actuator level. One such concept, called an Intelligent Redundant Actuation System (IRAS), significantly reduces the flight control computer requirements and performs the local tasks more comprehensively than previously feasible. The requirements and preliminary design of an experimental laboratory system capable of demonstrating the concept and sufficiently flexible to explore a variety of configurations are discussed.

  2. Shuttle cryogenic supply system. Optimization study. Volume 5 B-1: Programmers manual for math models

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A computer program for rapid parametric evaluation of various types of cryogenic spacecraft systems is presented. The mathematical techniques of the program provide the capability for in-depth analysis combined with rapid problem solution, producing a large quantity of soundly based trade-study data. The program requires a large data bank capable of providing characteristic performance data for a wide variety of component assemblies used in cryogenic systems. The program data requirements are divided into: (1) the semipermanent data tables and source data for performance characteristics, and (2) the variable input data, which contains input parameters that may be perturbed for parametric system studies.

  3. Multimode ergometer system

    NASA Technical Reports Server (NTRS)

    Bynum, B. G.; Gause, R. L.; Spier, R. A.

    1971-01-01

    System overcomes previous ergometer design and calibration problems including inaccurate measurements, large weight, size, and input power requirements, poor heat dissipation, high flammability, and inaccurate calibration. Device consists of lightweight, accurately controlled ergometer, restraint system, and calibration system.

  4. Large Deployable Reflector Science and Technology Workshop. Volume 3: Systems and Technology Assessment. Introduction

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The Large Deployable Reflector (LDR), a proposed 20 m diameter telescope designed for infrared and submillimeter astronomical measurements from space, is discussed in terms of scientific purposes, capabilities, current status, and history of development. The LDR systems goals and functional/telescope requirements are enumerated.

  5. Reinventing the Solar Power Satellite

    NASA Technical Reports Server (NTRS)

    Landis, Geoffrey A.

    2002-01-01

    Economy of scale is inherent in the microwave power transmission aperture/spot-size trade-off, resulting in a requirement for large space systems in the existing design concepts. Unfortunately, this large size means that the initial investment required before the first return, and the price of amortizing this initial investment, is a daunting (and perhaps insurmountable) barrier to economic viability. Because the growth of ground-based solar power applications will fund the development of the PV technology required for space solar power and will also create a ready-made market for it, space power systems must be designed with the understanding that ground-based solar technologies will be implemented as a precursor to space-based solar power. Desirable design features include low initial cost, operation in synergy with ground solar systems, and a power production profile tailored to peak rates. A key to simplicity of design is to maximize the integration of the system components. Microwave, millimeter-wave, and laser systems are analyzed. A new solar power satellite design concept with no sun-tracking and no moving parts is proposed to reduce the required cost to initial operational capability.

  6. Requirements for migration of NSSD code systems from LTSS to NLTSS

    NASA Technical Reports Server (NTRS)

    Pratt, M.

    1984-01-01

    The purpose of this document is to address the requirements necessary for a successful conversion of the Nuclear Design (ND) application code systems to the NLTSS environment. The ND application code system community can be characterized as large-scale scientific computation carried out on supercomputers. NLTSS is a distributed operating system being developed at LLNL to replace the LTSS system currently in use. The implications of change are examined including a description of the computational environment and users in ND. The discussion then turns to requirements, first in a general way, followed by specific requirements, including a proposal for managing the transition.

  7. An interfaces approach to TES ground data system processing design with the Science Investigator-led Processing System (SIPS)

    NASA Technical Reports Server (NTRS)

    Kurian, R.; Grifin, A.

    2002-01-01

    Developing production-quality software to process the large volumes of scientific data is the responsibility of the TES Ground Data System, which is being developed at the Jet Propulsion Laboratory together with support contractor Raytheon/ITSS. The large data volume and processing requirements of the TES pose significant challenges to the design.

  8. Workflow management in large distributed systems

    NASA Astrophysics Data System (ADS)

    Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.

    2011-12-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.

  9. Systems engineering in the Large Synoptic Survey Telescope project: an application of model based systems engineering

    NASA Astrophysics Data System (ADS)

    Claver, C. F.; Selvy, Brian M.; Angeli, George; Delgado, Francisco; Dubois-Felsmann, Gregory; Hascall, Patrick; Lotz, Paul; Marshall, Stuart; Schumacher, German; Sebag, Jacques

    2014-08-01

    The Large Synoptic Survey Telescope project was an early adopter of SysML and Model Based Systems Engineering practices. The LSST project began using MBSE for requirements engineering in 2006, shortly after the initial release of the first SysML standard. Out of this early work the LSST's MBSE effort has grown to include system requirements, operational use cases, physical system definition, interfaces, and system states along with behavior sequences and activities. In this paper we describe our approach and methodology for cross-linking these system elements over the three classical systems engineering domains - requirements, functional, and physical - into the LSST System Architecture model. We also show how this model is used as the central element of the overall project systems engineering effort. More recently we have begun to use the cross-linked modeled system architecture to develop and plan the system verification and test process. In presenting this work we also describe "lessons learned" from several missteps the project has had with MBSE. Lastly, we conclude by summarizing the overall status of the LSST's System Architecture model and our plans for the future as the LSST heads toward construction.

  10. The requirements for a new full scale subsonic wind tunnel

    NASA Technical Reports Server (NTRS)

    Kelly, M. W.; Mckinney, M. O.; Luidens, R. W.

    1972-01-01

    Justification and requirements are presented for a large subsonic wind tunnel capable of testing full scale aircraft, rotor systems, and advanced V/STOL propulsion systems. The design considerations and constraints for such a facility are reviewed, and the trades between facility test capability and costs are discussed.

  11. The role of artificial intelligence techniques in scheduling systems

    NASA Technical Reports Server (NTRS)

    Geoffroy, Amy L.; Britt, Daniel L.; Gohring, John R.

    1990-01-01

    Artificial Intelligence (AI) techniques provide good solutions for many of the problems which are characteristic of scheduling applications. However, scheduling is a large, complex heterogeneous problem. Different applications will require different solutions. Any individual application will require the use of a variety of techniques, including both AI and conventional software methods. The operational context of the scheduling system will also play a large role in design considerations. The key is to identify those places where a specific AI technique is in fact the preferable solution, and to integrate that technique into the overall architecture.

  12. Radiatively coupled thermionic and thermoelectric power system concept

    NASA Technical Reports Server (NTRS)

    Shimada, K.; Ewell, R.

    1981-01-01

    The study showed that large power systems (about 100 kW) utilizing radiatively coupled thermionic or thermoelectric converters could be designed so that the power subsystem could be contained in a Space Shuttle bay as part of an electrically propelled spacecraft. The radiatively coupled system requires a large number of individual converters, since the transferred heat is smaller than with the conductively coupled system, but the advantages of the new system indicate merit for further study. The advantages are: (1) good electrical isolation between the converters and the heat source, (2) physical separation of the converters from the heat source (making system fabrication manageable), and (3) elimination of the radiator heat pipes required in an all-heat-pipe power system. In addition, the specific weight of the radiatively coupled power systems compares favorably with that of the all-heat-pipe systems.

  13. An Efficient and Versatile Means for Assembling and Manufacturing Systems in Space

    NASA Technical Reports Server (NTRS)

    Dorsey, John T.; Doggett, William R.; Hafley, Robert A.; Komendera, Erik; Correll, Nikolaus; King, Bruce

    2012-01-01

    Within NASA Space Science, Exploration and the Office of Chief Technologist, there are Grand Challenges and advanced future exploration, science and commercial mission applications that could benefit significantly from large-span and large-area structural systems. Of particular and persistent interest to the Space Science community is the desire for large (in the 10- 50 meter range for main aperture diameter) space telescopes that would revolutionize space astronomy. Achieving these systems will likely require on-orbit assembly, but previous approaches for assembling large-scale telescope truss structures and systems in space have been perceived as very costly because they require high precision and custom components. These components rely on a large number of mechanical connections and supporting infrastructure that are unique to each application. In this paper, a new assembly paradigm that mitigates these concerns is proposed and described. A new assembly approach, developed to implement the paradigm, is developed incorporating: Intelligent Precision Jigging Robots, Electron-Beam welding, robotic handling/manipulation, operations assembly sequence and path planning, and low precision weldable structural elements. Key advantages of the new assembly paradigm, as well as concept descriptions and ongoing research and technology development efforts for each of the major elements are summarized.

  14. Developing Software Requirements for a Knowledge Management System That Coordinates Training Programs with Business Processes and Policies in Large Organizations

    ERIC Educational Resources Information Center

    Kiper, J. Richard

    2013-01-01

    For large organizations, updating instructional programs presents a challenge to keep abreast of constantly changing business processes and policies. Each time a process or policy changes, significant resources are required to locate and modify the training materials that convey the new content. Moreover, without the ability to track learning…

  15. Intelligent systems engineering methodology

    NASA Technical Reports Server (NTRS)

    Fouse, Scott

    1990-01-01

    An added challenge for the designers of large scale systems such as Space Station Freedom is the appropriate incorporation of intelligent system technology (artificial intelligence, expert systems, knowledge-based systems, etc.) into their requirements and design. This presentation will describe a view of systems engineering which successfully addresses several aspects of this complex problem: design of large scale systems, design with requirements that are so complex they only completely unfold during the development of a baseline system and even then continue to evolve throughout the system's life cycle, design that involves the incorporation of new technologies, and design and development that takes place with many players in a distributed manner yet can be easily integrated to meet a single view of the requirements. The first generation of this methodology was developed and evolved jointly by ISX and the Lockheed Aeronautical Systems Company over the past five years on the Defense Advanced Research Projects Agency/Air Force Pilot's Associate Program, one of the largest, most complex, and most successful intelligent systems constructed to date. As the methodology has evolved it has also been applied successfully to a number of other projects. Some of the lessons learned from this experience may be applicable to Freedom.

  16. Proceedings of the Workshop on Large, Distributed, Parallel Architecture, Real-Time Systems Held in Alexandria, Virginia on 15-19 March 1993

    DTIC Science & Technology

    1993-07-01

    distributed system. Second, to support the development of scaleable end-use applications that implement the mission critical control policies of the...implementation. These and other cogent reasons suggest two important rules for designing large, distributed, real-time systems: (i) separate policies required...system design rules. The separation of system coordination and management policies and mechanisms allows for the "objectification" of the underlying

  17. On decentralized estimation. [for large linear systems

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Vukcevic, M. B.

    1978-01-01

    A multilevel scheme is proposed to construct decentralized estimators for large linear systems. The scheme is numerically attractive since only observability tests of low-order subsystems are required. Equally important is the fact that the constructed estimators are reliable under structural perturbations and can tolerate a wide range of nonlinearities in coupling among the subsystems.
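    The feature that makes this scheme numerically attractive — observability tests on low-order subsystems only — can be sketched directly. The matrices below are illustrative, not from the paper; each decoupled subsystem is checked by the rank of its observability matrix:

```python
import numpy as np

def observability_matrix(A, C):
    """Stack [C; C A; ...; C A^(n-1)] for an n-state subsystem."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

def subsystem_observable(A, C, tol=1e-9):
    """The pair (A, C) is observable iff its observability matrix has rank n."""
    return np.linalg.matrix_rank(observability_matrix(A, C), tol=tol) == A.shape[0]

# Two low-order subsystems of a decomposed large system (illustrative):
A1 = np.array([[0.0, 1.0], [-2.0, -0.5]])
C1 = np.array([[1.0, 0.0]])   # position measured -> observable
A2 = np.array([[1.0, 0.0], [0.0, 2.0]])
C2 = np.array([[1.0, 0.0]])   # decoupled second state unmeasured -> not observable
```

Running the rank test on each small (A_i, C_i) pair avoids forming the observability matrix of the full interconnected system, which is where the computational saving comes from.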

  18. Nonterrestrial material processing and manufacturing of large space systems

    NASA Technical Reports Server (NTRS)

    Vontiesenhausen, G. F.

    1978-01-01

    An attempt is made to provide pertinent and readily usable information on the extraterrestrial processing of materials and manufacturing of components and elements of these planned large space systems from preprocessed lunar materials which are made available at a processing and manufacturing site in space. Required facilities, equipment, machinery, energy and manpower are defined.

  19. An Investigation of Energy Consumption and Cost in Large Air-Conditioned Buildings. An Interim Report.

    ERIC Educational Resources Information Center

    Milbank, N. O.

    Two similarly large buildings and air conditioning systems are comparatively analyzed as to energy consumption, costs, and inefficiency during certain measured periods of time. Building design and velocity systems are compared to heating, cooling, lighting and distribution capabilities. Energy requirements for pumps, fans and lighting are found to…

  20. Evolving from bioinformatics in-the-small to bioinformatics in-the-large.

    PubMed

    Parker, D Stott; Gorlick, Michael M; Lee, Christopher J

    2003-01-01

    We argue the significance of a fundamental shift in bioinformatics, from in-the-small to in-the-large. Adopting a large-scale perspective is a way to manage the problems endemic to the world of the small: constellations of incompatible tools for which the effort required to assemble an integrated system exceeds the perceived benefit of the integration. Where bioinformatics in-the-small is about data and tools, bioinformatics in-the-large is about metadata and dependencies. Dependencies represent the complexities of large-scale integration, including the requirements and assumptions governing the composition of tools. The popular make utility is a very effective system for defining and maintaining simple dependencies, and it offers a number of insights about the essence of bioinformatics in-the-large. Keeping an in-the-large perspective has been very useful to us in large bioinformatics projects. We give two fairly different examples, and extract lessons from them showing how it has helped. These examples both suggest the benefit of explicitly defining and managing knowledge flows and knowledge maps (which represent metadata regarding types, flows, and dependencies), and also suggest approaches for developing bioinformatics database systems. Generally, we argue that large-scale engineering principles can be successfully adapted from disciplines such as software engineering and data management, and that having an in-the-large perspective will be a key advantage in the next phase of bioinformatics development.
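    The abstract's point about make — that explicit dependency declarations are enough to drive correct assembly of a pipeline — can be illustrated with a short sketch. The target names below are hypothetical; the point is only that a make-style dependency map yields a valid build order:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical bioinformatics pipeline: each target maps to the set of
# targets it depends on, the way a Makefile rule lists prerequisites.
rules = {
    "alignments":  {"raw_reads", "reference"},
    "variants":    {"alignments"},
    "report":      {"variants", "annotations"},
    "raw_reads":   set(),
    "reference":   set(),
    "annotations": set(),
}

# static_order() yields every target after all of its prerequisites,
# which is exactly the order make would be free to build them in.
order = list(TopologicalSorter(rules).static_order())
```

TopologicalSorter also raises CycleError on circular dependencies, the same failure mode make reports for circular rules.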

  1. Dense wavelength division multiplexing devices for metropolitan-area datacom and telecom networks

    NASA Astrophysics Data System (ADS)

    DeCusatis, Casimer M.; Priest, David G.

    2000-12-01

    Large data processing environments in use today can require multi-gigabyte or terabyte capacity in the data communication infrastructure; these requirements are being driven by storage area networks with access to petabyte databases, new architectures for parallel processing that require high-bandwidth optical links, and rapidly growing network applications such as electronic commerce over the Internet or virtual private networks. These datacom applications require high availability, fault tolerance, security, and the capacity to recover from any single point of failure without relying on traditional SONET-based networking. These requirements, coupled with fiber exhaust in metropolitan areas, are driving the introduction of dense wavelength division multiplexing (DWDM) in data communication systems, particularly for large enterprise servers or mainframes. In this paper, we examine the technical requirements for emerging next-generation DWDM systems. Protocols for storage area networks and computer architectures such as Parallel Sysplex are presented, including their fiber bandwidth requirements. We then describe two commercially available DWDM solutions, a first-generation 10-channel system and a recently announced next-generation 32-channel system. Technical requirements, network management and security, fault-tolerant network designs, new network topologies enabled by DWDM, and the role of time division multiplexing in the network are all discussed. Finally, we present a description of testing conducted on these networks and future directions for this technology.

  2. Breeding and Genetics Symposium: really big data: processing and analysis of very large data sets.

    PubMed

    Cole, J B; Newman, S; Foertter, F; Aguilar, I; Coffey, M

    2012-03-01

    Modern animal breeding data sets are large and getting larger, due in part to recent availability of high-density SNP arrays and cheap sequencing technology. High-performance computing methods for efficient data warehousing and analysis are under development. Financial and security considerations are important when using shared clusters. Sound software engineering practices are needed, and it is better to use existing solutions when possible. Storage requirements for genotypes are modest, although full-sequence data will require greater storage capacity. Storage requirements for intermediate and results files for genetic evaluations are much greater, particularly when multiple runs must be stored for research and validation studies. The greatest gains in accuracy from genomic selection have been realized for traits of low heritability, and there is increasing interest in new health and management traits. The collection of sufficient phenotypes to produce accurate evaluations may take many years, and high-reliability proofs for older bulls are needed to estimate marker effects. Data mining algorithms applied to large data sets may help identify unexpected relationships in the data, and improved visualization tools will provide insights. Genomic selection using large data requires a lot of computing power, particularly when large fractions of the population are genotyped. Theoretical improvements have made possible the inversion of large numerator relationship matrices, permitted the solving of large systems of equations, and produced fast algorithms for variance component estimation. Recent work shows that single-step approaches combining BLUP with a genomic relationship (G) matrix have similar computational requirements to traditional BLUP, and the limiting factor is the construction and inversion of G for many genotypes. 
A naïve algorithm for creating G for 14,000 individuals required almost 24 h to run, but custom libraries and parallel computing reduced that to about 15 min. Large data sets also create challenges for the delivery of genetic evaluations that must be overcome in a way that does not disrupt the transition from conventional to genomic evaluations. Processing time is important, especially as real-time systems for on-farm decisions are developed. The ultimate value of these systems is to decrease time-to-results in research, increase accuracy in genomic evaluations, and accelerate rates of genetic improvement.
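    The genomic relationship matrix G discussed above is commonly built with VanRaden's first method, G = ZZ'/(2 Σ p(1-p)); the sketch below assumes that construction (the paper may use a variant) on a small random genotype matrix:

```python
import numpy as np

def vanraden_G(M):
    """Genomic relationship matrix from a genotype matrix M
    (individuals x markers, coded 0/1/2 copies of an allele),
    using VanRaden's first method: G = Z Z' / (2 * sum_j p_j (1 - p_j))."""
    p = M.mean(axis=0) / 2.0        # observed allele frequency per marker
    Z = M - 2.0 * p                 # center each marker column by 2p
    return (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))

rng = np.random.default_rng(0)
M = rng.integers(0, 3, size=(50, 500)).astype(float)  # 50 animals, 500 SNPs
G = vanraden_G(M)                   # 50 x 50, symmetric
```

The O(n^2 m) cost of the Z Z' product and the O(n^3) inversion are exactly the bottlenecks the abstract attributes to single-step evaluations with many genotypes.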

  3. Online mass storage system detailed requirements document

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The requirements for an online high-density magnetic tape data storage system that can be implemented in a multipurpose, multihost environment are set forth. The objective of the mass storage system is to provide a facility for the compact storage of large quantities of data and to make this data accessible to computer systems with minimum operator handling. The results of a market survey and analysis of candidate vendors who presently market high-density tape data storage systems are included.

  4. Adaptive parametric model order reduction technique for optimization of vibro-acoustic models: Application to hearing aid design

    NASA Astrophysics Data System (ADS)

    Creixell-Mediante, Ester; Jensen, Jakob S.; Naets, Frank; Brunskog, Jonas; Larsen, Martin

    2018-06-01

    Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system performance and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built at an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time consuming. In this work, we present an adaptive pMOR technique where the construction of the projection basis is embedded in the optimization process and requires fewer full system analyses, while the accuracy of the reduced system is monitored by a cheap error indicator. The performance of the proposed method is evaluated for a 4-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.
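    The offline/online split the abstract describes can be sketched on a toy undamped system: snapshots at a few sample frequencies build an orthonormal basis V, and each online evaluation solves only the small projected system. All matrices here are illustrative stand-ins, not a hearing-aid model:

```python
import numpy as np

n = 200
K = np.diag(np.linspace(100.0, 400.0, n))   # toy stiffness matrix (diagonal)
M = np.eye(n)                                # toy mass matrix
f = np.ones(n)                               # load vector

def full_solve(omega):
    """Solve the full n-DOF system (K - omega^2 M) x = f."""
    return np.linalg.solve(K - omega**2 * M, f)

# Offline: snapshot solutions at a few sample frequencies -> basis V
snapshots = np.column_stack([full_solve(w) for w in (1.0, 3.0, 5.0)])
V, _, _ = np.linalg.svd(snapshots, full_matrices=False)

def reduced_solve(omega):
    """Online: Galerkin-project onto V, solve the 3x3 system, lift back."""
    Kr = V.T @ (K - omega**2 * M) @ V
    return V @ np.linalg.solve(Kr, V.T @ f)

omega = 2.0   # a frequency not in the snapshot set
err = (np.linalg.norm(full_solve(omega) - reduced_solve(omega))
       / np.linalg.norm(full_solve(omega)))
```

The adaptive method in the paper differs in that it grows the basis during optimization and monitors `err` with a cheap indicator instead of comparing against a full solve.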

  5. A hierarchical modeling methodology for the definition and selection of requirements

    NASA Astrophysics Data System (ADS)

    Dufresne, Stephane

    This dissertation describes the development of a requirements analysis methodology that takes into account the concept of operations and the hierarchical decomposition of aerospace systems. At the core of the methodology, the Analytic Network Process (ANP) is used to ensure the traceability between the qualitative and quantitative information present in the hierarchical model. The proposed methodology is applied to the requirements definition of a hurricane tracker Unmanned Aerial Vehicle. Three research objectives are identified in this work: (1) improve the requirements mapping process by matching the stakeholder expectations with the concept of operations, systems, and available resources; (2) reduce the epistemic uncertainty surrounding the requirements and requirements mapping; and (3) improve the requirements down-selection process by taking into account the level of importance of the criteria and the available resources. Several challenges are associated with the identification and definition of requirements. The complexity of the system implies that a large number of requirements are needed to define the systems. These requirements are defined early in the conceptual design, where the level of knowledge is relatively low and the level of uncertainty is large. The proposed methodology intends to increase the level of knowledge and reduce the level of uncertainty by guiding the design team through a structured process. To address these challenges, a new methodology is created to flow down the requirements from the stakeholder expectations to the systems alternatives. A taxonomy of requirements is created to classify the information gathered during the problem definition. Subsequently, the operational and systems functions and measures of effectiveness are integrated into a hierarchical model to allow the traceability of the information.
Monte Carlo methods are used to evaluate the variations of the hierarchical model elements and consequently reduce the epistemic uncertainty. The proposed methodology is applied to the design of a hurricane tracker Unmanned Aerial Vehicle to demonstrate the origin and impact of requirements on the concept of operations and systems alternatives. This research demonstrates that the hierarchical modeling methodology provides a traceable flow-down of the requirements from the problem definition to the systems alternatives phases of conceptual design.

  6. Large Deployable Reflector Science and Technology Workshop. Volume 3: Systems and Technology Assessment

    NASA Technical Reports Server (NTRS)

    Leidich, C. A. (Editor); Pittman, R. B. (Editor)

    1984-01-01

    The results of five technology panels which convened to discuss the Large Deployable Reflector (LDR) are presented. The proposed LDR is a large, ambient-temperature, far infrared/submillimeter telescope designed for space. Panel topics included optics, materials and structures, sensing and control, science instruments, and systems and missions. The telescope requirements, the estimated technology levels, and the areas in which the generic technology work has to be augmented are enumerated.

  7. Interactive computer graphics and its role in control system design of large space structures

    NASA Technical Reports Server (NTRS)

    Reddy, A. S. S. R.

    1985-01-01

    This paper attempts to show the relevance of interactive computer graphics in the design of control systems to maintain attitude and shape of large space structures to accomplish the required mission objectives. The typical phases of control system design, starting from the physical model such as modeling the dynamics, modal analysis, and control system design methodology are reviewed and the need of the interactive computer graphics is demonstrated. Typical constituent parts of large space structures such as free-free beams and free-free plates are used to demonstrate the complexity of the control system design and the effectiveness of the interactive computer graphics.

  8. Design of General-purpose Industrial signal acquisition system in a large scientific device

    NASA Astrophysics Data System (ADS)

    Ren, Bin; Yang, Lei

    2018-02-01

    In order to measure industrial signals in a large scientific device experiment, a general-purpose industrial data acquisition system has been designed. It can collect 4-20 mA current signals and 0-10 V voltage signals. Practical experiments show that the system is flexible, reliable, convenient, and economical, with high resolution and strong anti-interference capability; it fully meets the design requirements.
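    For a 4-20 mA input channel like the one described, the raw loop current is typically mapped linearly onto the sensor's engineering range; the sketch below shows that generic conversion (the 0-10 bar pressure transmitter is a hypothetical example, not a device from the paper):

```python
def current_to_value(i_ma, lo, hi):
    """Map a 4-20 mA loop current to an engineering value in [lo, hi].
    4 mA -> lo, 20 mA -> hi.  A reading outside the 4-20 mA live-zero
    range usually indicates a broken or shorted loop, so reject it."""
    if not 4.0 <= i_ma <= 20.0:
        raise ValueError(f"loop current {i_ma} mA out of range")
    return lo + (i_ma - 4.0) * (hi - lo) / 16.0

# e.g. a hypothetical 0-10 bar pressure transmitter at mid-scale:
pressure = current_to_value(12.0, 0.0, 10.0)   # 12 mA -> 5.0 bar
```

The live zero at 4 mA is the reason the standard is preferred for industrial acquisition: a wire break reads 0 mA and is distinguishable from a legitimate low reading.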

  9. Control system design for the large space systems technology reference platform

    NASA Technical Reports Server (NTRS)

    Edmunds, R. S.

    1982-01-01

    Structural models and classical frequency domain control system designs were developed for the large space systems technology (LSST) reference platform, which consists of a central bus structure, solar panels, and platform arms on which a variety of experiments may be mounted. It is shown that operation of multiple independently articulated payloads on a single platform presents major problems when sub-arcsecond pointing stability is required. Experiment compatibility will be an important operational consideration for systems of this type.

  10. Large Advanced Space Systems (LASS) computer-aided design program additions

    NASA Technical Reports Server (NTRS)

    Farrell, C. E.

    1982-01-01

    The LSS preliminary and conceptual design requires extensive iterative analysis because of the effects of structural, thermal, and control intercoupling. A computer-aided design program that permits integrating and interfacing of the required large space system (LSS) analyses is discussed. The primary objective of this program is the implementation of modeling techniques and analysis algorithms that permit interactive design and tradeoff studies of LSS concepts. Eight software modules were added to the program. The existing rigid body controls module was modified to include solar pressure effects. The new model generator modules and appendage synthesizer module are integrated (interfaced) to permit interactive definition and generation of LSS concepts. The mass properties module permits interactive specification of discrete masses and their locations. The other modules permit interactive analysis of orbital transfer requirements, antenna primary beams, and attitude control requirements.

  11. Low-authority control synthesis for large space structures

    NASA Technical Reports Server (NTRS)

    Aubrun, J. N.; Margulies, G.

    1982-01-01

    The control of vibrations of large space structures by distributed sensors and actuators is studied. A procedure is developed for calculating the feedback loop gains required to achieve specified amounts of damping. For moderate damping (Low Authority Control) the procedure is purely algebraic, but it can be applied iteratively when larger amounts of damping are required and is generalized for arbitrary time invariant systems.
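    For a single mode with a collocated sensor/actuator pair and rate feedback, the "purely algebraic" gain computation reduces to one line. This is the standard low-authority-control small-gain approximation, sketched here with illustrative numbers rather than the paper's exact formulation:

```python
def lac_gain(omega, phi, delta_zeta):
    """Collocated rate-feedback gain adding damping ratio delta_zeta
    to one structural mode.  With modal frequency omega (rad/s) and
    mode shape value phi at the sensor/actuator location, feedback
    u = -g * velocity turns the modal equation into
        q'' + (2*zeta*omega + g*phi**2) * q' + omega**2 * q = 0,
    so the added damping ratio is g*phi**2 / (2*omega) and the gain
    needed for a specified increment is g = 2*omega*delta_zeta/phi**2."""
    return 2.0 * omega * delta_zeta / phi**2

# Mode at 1.5 rad/s, mode shape 0.8 at the actuator, target +2% damping:
g = lac_gain(1.5, 0.8, 0.02)
added = g * 0.8**2 / (2.0 * 1.5)   # recovers the requested 0.02
```

For larger damping targets this linearized formula is no longer accurate, which is why the paper applies the procedure iteratively outside the low-authority regime.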

  12. Monitoring and control requirement definition study for Dispersed Storage and Generation (DSG), volume 1

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Twenty-four functional requirements were prepared under six categories and serve to indicate how to integrate dispersed storage and generation (DSG) systems with the distribution and other portions of the electric utility system. Results indicate that there are no fundamental technical obstacles to prevent the connection of dispersed storage and generation to the distribution system. However, a communication system of some sophistication is required to integrate the distribution system and the dispersed generation sources for effective control. The large size span of generators, from 10 kW to 30 MW, means that a variety of remote monitoring and control capabilities may be required. Increased effort is required to develop demonstration equipment to perform the DSG monitoring and control functions and to acquire experience with this equipment in the utility distribution environment.

  13. The NASA Space Launch System Program Systems Engineering Approach for Affordability

    NASA Technical Reports Server (NTRS)

    Hutt, John J.; Whitehead, Josh; Hanson, John

    2017-01-01

    The National Aeronautics and Space Administration is currently developing the Space Launch System (SLS) to provide the United States with a capability to launch large payloads into low Earth orbit and deep space. One of the development tenets of the SLS Program is affordability. One initiative to enhance affordability is the SLS approach to requirements definition, verification, and system certification. The key aspects of this initiative include: 1) minimizing the number of requirements, 2) elimination of explicit verification requirements, 3) use of certified models of subsystem capability in lieu of requirements when appropriate, and 4) certification of capability beyond minimum required capability. Implementation of each aspect is described and compared to a "typical" systems engineering implementation, including a discussion of relative risk. Examples of each implementation within the SLS Program are provided.

  14. Solar Hot Water for an Industrial Laundry--Fresno, California

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Final report describes an integrated wastewater-heat recovery system and solar preheating system to supply part of hot-water requirements of an industrial laundry. Large retrofit solar-water-heating system uses lightweight collectors.

  15. Sensor Selection and Optimization for Health Assessment of Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Kopasakis, George; Santi, Louis M.; Sowers, Thomas S.; Chicatelli, Amy

    2007-01-01

    Aerospace systems are developed similarly to other large-scale systems through a series of reviews, where designs are modified as system requirements are refined. For space-based systems, few are built and placed into service. These research vehicles have limited historical experience to draw from and formidable reliability and safety requirements, due to the remote and severe environment of space. Aeronautical systems have similar reliability and safety requirements, and while these systems may have historical information to access, commercial and military systems require longevity under a range of operational conditions and applied loads. Historically, the design of aerospace systems, particularly the selection of sensors, is based on the requirements for control and performance rather than on health assessment needs. Furthermore, the safety and reliability requirements are met through sensor suite augmentation in an ad hoc, heuristic manner, rather than through any systematic approach. A review of the current sensor selection practice within and outside of the aerospace community was conducted, and a sensor selection architecture is proposed that will provide a justifiable, dependable sensor suite to address system health assessment requirements.
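    One common way to make sensor selection systematic, in the spirit of the architecture this record proposes, is to cast it as a set-cover problem: choose the fewest sensors whose fault-detection signatures jointly cover every fault mode of interest. The sketch below is an invented illustration (the sensor names, fault modes, and greedy heuristic are not taken from the paper):

```python
def greedy_sensor_select(coverage, faults):
    """Greedy set cover: `coverage` maps sensor name -> set of fault modes
    that sensor can detect. Repeatedly pick the sensor covering the most
    still-uncovered faults until every fault is detectable."""
    remaining, chosen = set(faults), []
    while remaining:
        best = max(coverage, key=lambda s: len(coverage[s] & remaining))
        if not coverage[best] & remaining:
            raise ValueError("some faults are undetectable by any sensor")
        chosen.append(best)
        remaining -= coverage[best]
    return chosen

# Hypothetical fault signatures for a generic propulsion subsystem.
coverage = {
    "pressure_1": {"leak", "valve_stuck"},
    "temp_1": {"overheat"},
    "flow_1": {"leak", "pump_degraded"},
    "vib_1": {"pump_degraded", "bearing_wear"},
}
suite = greedy_sensor_select(
    coverage, ["leak", "overheat", "pump_degraded", "bearing_wear", "valve_stuck"])
```

The greedy heuristic does not guarantee a minimum suite, but it gives a justifiable, repeatable selection that can be audited against the fault list, unlike ad hoc augmentation.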

  16. Sensor Selection and Optimization for Health Assessment of Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Kopasakis, George; Santi, Louis M.; Sowers, Thomas S.; Chicatelli, Amy

    2008-01-01

    Aerospace systems are developed similarly to other large-scale systems through a series of reviews, where designs are modified as system requirements are refined. For space-based systems, few are built and placed into service. These research vehicles have limited historical experience to draw from and formidable reliability and safety requirements, due to the remote and severe environment of space. Aeronautical systems have similar reliability and safety requirements, and while these systems may have historical information to access, commercial and military systems require longevity under a range of operational conditions and applied loads. Historically, the design of aerospace systems, particularly the selection of sensors, is based on the requirements for control and performance rather than on health assessment needs. Furthermore, the safety and reliability requirements are met through sensor suite augmentation in an ad hoc, heuristic manner, rather than through any systematic approach. A review of the current sensor selection practice within and outside of the aerospace community was conducted, and a sensor selection architecture is proposed that will provide a justifiable, defensible sensor suite to address system health assessment requirements.

  17. Deployable antenna phase A study

    NASA Technical Reports Server (NTRS)

    Schultz, J.; Bernstein, J.; Fischer, G.; Jacobson, G.; Kadar, I.; Marshall, R.; Pflugel, G.; Valentine, J.

    1979-01-01

    Applications for large deployable antennas were re-examined, flight demonstration objectives were defined, the flight article (antenna) was preliminarily designed, and the flight program and ground development program, including the support equipment, were defined for a proposed space transportation system flight experiment to demonstrate a large (50 to 200 meter) deployable antenna system. Tasks described include: (1) performance requirements analysis; (2) system design and definition; (3) orbital operations analysis; and (4) programmatic analysis.

  18. NASA's Information Power Grid: Large Scale Distributed Computing and Data Management

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)

    2001-01-01

    Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g., whole-system aircraft simulation and whole-system living cell simulation - require integrating applications and data that are developed by different teams of researchers, frequently in different locations. These research teams are the only ones with the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.

  19. An Optimization Code for Nonlinear Transient Problems of a Large Scale Multidisciplinary Mathematical Model

    NASA Astrophysics Data System (ADS)

    Takasaki, Koichi

    This paper presents a program for the multidisciplinary optimization and identification of nonlinear models of large aerospace vehicle structures. The program constructs the global matrix of the dynamic system in the time direction by the p-version finite element method (pFEM), and the basic matrix for each pFEM node in the time direction is described by a sparse matrix, similar to the static finite element problem. The algorithm used by the program does not require the Hessian matrix of the objective function and so has low memory requirements. It also has a relatively low computational cost and is suited to parallel computation. The program was integrated as a solver module of the multidisciplinary analysis system CUMuLOUS (Computational Utility for Multidisciplinary Large scale Optimization of Undense System), which is under development by the Aerospace Research and Development Directorate (ARD) of the Japan Aerospace Exploration Agency (JAXA).
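    The Hessian-free, low-memory character described above is shared by limited-memory quasi-Newton methods. As a generic illustration (not the paper's solver), SciPy's L-BFGS-B minimizes a nonlinear objective using only function values and gradients; memory grows linearly with the number of design variables because only a short history of gradient differences is stored.

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    """Classic nonconvex test objective."""
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def rosenbrock_grad(x):
    """Analytic gradient; no Hessian is ever formed."""
    return np.array([
        -400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0]**2),
    ])

# L-BFGS-B needs only f and its gradient, supplied via `jac`.
res = minimize(rosenbrock, np.array([-1.2, 1.0]),
               jac=rosenbrock_grad, method="L-BFGS-B")
```

The same gradient-only interface is what makes such solvers attractive for very large structural models, where forming or storing a Hessian would be prohibitive.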

  20. Thermal/structural design verification strategies for large space structures

    NASA Technical Reports Server (NTRS)

    Benton, David

    1988-01-01

    Requirements for space structures of increasing size, complexity, and precision have engendered a search for thermal design verification methods that do not impose unreasonable costs, that fit within the capabilities of existing facilities, and that still adequately reduce technical risk. Meeting these requirements calls for a combination of analytical and testing methods, pursued through two approaches. The first is to limit thermal testing to sub-elements of the total system, tested only in a compact configuration (i.e., not fully deployed). The second is to use a simplified environment to correlate analytical models with test results; these models can then be used to predict flight performance. In practice, a combination of these approaches is needed to verify the thermal/structural design of future very large space systems.

  1. Millimeter radiometer system technology

    NASA Technical Reports Server (NTRS)

    Wilson, W. J.; Swanson, P. N.

    1989-01-01

    JPL has had a large amount of experience with spaceborne microwave/millimeter-wave radiometers for remote sensing. All of the instruments use filled-aperture antenna systems, ranging from 5 cm diameter for the Microwave Sounder Units (MSU), through 16 m for the Microwave Limb Sounder (MLS), to 20 m for the Large Deployable Reflector (LDR). The advantages of filled-aperture antenna systems are presented. The requirements of the 10 m Geoplat antenna system, the 10 m multifeed antenna, and the MLS are briefly discussed.

  2. Millimeter radiometer system technology

    NASA Astrophysics Data System (ADS)

    Wilson, W. J.; Swanson, P. N.

    1989-07-01

    JPL has had a large amount of experience with spaceborne microwave/millimeter-wave radiometers for remote sensing. All of the instruments use filled-aperture antenna systems, ranging from 5 cm diameter for the Microwave Sounder Units (MSU), through 16 m for the Microwave Limb Sounder (MLS), to 20 m for the Large Deployable Reflector (LDR). The advantages of filled-aperture antenna systems are presented. The requirements of the 10 m Geoplat antenna system, the 10 m multifeed antenna, and the MLS are briefly discussed.

  3. Development INTERDATA 8/32 computer system

    NASA Technical Reports Server (NTRS)

    Sonett, C. P.

    1983-01-01

    The capabilities of the Interdata 8/32 minicomputer were examined regarding data and word processing, editing, retrieval, and budgeting as well as data management demands of the user groups in the network. Based on four projected needs: (1) a hands on (open shop) computer for data analysis with large core and disc capability; (2) the expected requirements of the NASA data networks; (3) the need for intermittent large core capacity for theoretical modeling; (4) the ability to access data rapidly either directly from tape or from core onto hard copy, the system proved useful and adequate for the planned requirements.

  4. Advanced UVOIR Mirror Technology Development for Very Large Space Telescopes

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip

    2011-01-01

    The objective of this work is to define and initiate a long-term program to mature six inter-linked critical technologies for future UVOIR space telescope mirrors to TRL 6 by 2018 so that a viable flight mission can be proposed to the 2020 Decadal Review. (1) Large-Aperture, Low Areal Density, High Stiffness Mirrors: 4 to 8 m monolithic and 8 to 16 m segmented primary mirrors require larger, thicker, stiffer substrates. (2) Support System: Large-aperture mirrors require large support systems to ensure that they survive launch and deploy on orbit in a stress-free and undistorted shape. (3) Mid/High Spatial Frequency Figure Error: A very smooth mirror is critical for producing a high-quality point spread function (PSF) for high-contrast imaging. (4) Segment Edges: Edges impact the PSF for high-contrast imaging applications, contribute to stray light noise, and affect the total collecting aperture. (5) Segment-to-Segment Gap Phasing: Segment phasing is critical for producing a high-quality, temporally stable PSF. (6) Integrated Model Validation: On-orbit performance is determined by mechanical and thermal stability; future systems require validated performance models. We are pursuing multiple design paths to give the science community the option to enable either a future monolithic or segmented space telescope.

  5. Wavefront control of large optical systems

    NASA Technical Reports Server (NTRS)

    Meinel, Aden B.; Meinel, Marjorie P.; Breckinridge, J. B.

    1990-01-01

    Several levels of wavefront control are necessary for the optimum performance of very large telescopes, especially segmented ones like the Large Deployable Reflector. In general, the major contributors to wavefront error are the segments of the large primary mirror. Wavefront control at the largest optical surface may not be the optimum choice because of the mass and inaccessibility of the elements of this surface that require upgrading. The concept of two-stage optics was developed to permit a poor wavefront from the large optics to be upgraded by means of a wavefront corrector at a small exit pupil of the system.

  6. Key ingredients needed when building large data processing systems for scientists

    NASA Technical Reports Server (NTRS)

    Miller, K. C.

    2002-01-01

    Why is building a large science software system so painful? Weren't teams of software engineers supposed to make life easier for scientists? Does it sometimes feel as if it would be easier to write the million lines of code in Fortran 77 yourself? The cause of this dissatisfaction is that many of the needs of the science customer remain hidden in discussions with software engineers until after a system has already been built. In fact, many of the hidden needs of the science customer conflict with stated needs and are therefore very difficult to meet unless they are addressed from the outset in a system's architectural requirements. What's missing is the consideration of a small set of key software properties in initial agreements about the requirements, the design and the cost of the system.

  7. Extended duration Orbiter life support definition

    NASA Technical Reports Server (NTRS)

    Kleiner, G. N.; Thompson, C. D.

    1978-01-01

    Extending the baseline seven-day Orbiter mission to 30 days or longer and operating with a solar power module as the primary source for electrical power requires changes to the existing environmental control and life support (ECLS) system. The existing ECLS system imposes penalties on longer missions which limit the Orbiter capabilities and changes are required to enhance overall mission objectives. Some of these penalties are: large quantities of expendables, the need to dump or store large quantities of waste material, the need to schedule fuel cell operation, and a high landing weight penalty. This paper presents the study ground rules and examines the limitations of the present ECLS system against Extended Duration Orbiter mission requirements. Alternate methods of accomplishing ECLS functions for the Extended Duration Orbiter are discussed. The overall impact of integrating these options into the Orbiter are evaluated and significant Orbiter weight and volume savings with the recommended approaches are described.

  8. Rucio, the next-generation Data Management system in ATLAS

    NASA Astrophysics Data System (ADS)

    Serfon, C.; Barisits, M.; Beermann, T.; Garonne, V.; Goossens, L.; Lassnig, M.; Nairz, A.; Vigne, R.; ATLAS Collaboration

    2016-04-01

    Rucio is the next-generation Distributed Data Management (DDM) system, benefiting from recent advances in cloud and "Big Data" computing to address HEP experiments' scaling requirements. Rucio is an evolution of the ATLAS DDM system Don Quixote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 160 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio addresses these issues by relying on new technologies to ensure system scalability, covering new user requirements, and employing a new automation framework to reduce operational overheads. This paper presents the key concepts of Rucio, details its design and the technology it employs, describes the tests that were conducted to validate it, and finally describes the migration steps taken to move from DQ2 to Rucio.

  9. Multi-Modal Traveler Information System - Gateway Functional Requirements

    DOT National Transportation Integrated Search

    1997-11-17

    The Multi-Modal Traveler Information System (MMTIS) project involves a large number of Intelligent Transportation System (ITS) related tasks. It involves research of all ITS initiatives in the Gary-Chicago-Milwaukee (GCM) Corridor which are currently...

  10. Multi-Modal Traveler Information System - Gateway Interface Control Requirements

    DOT National Transportation Integrated Search

    1997-10-30

    The Multi-Modal Traveler Information System (MMTIS) project involves a large number of Intelligent Transportation System (ITS) related tasks. It involves research of all ITS initiatives in the Gary-Chicago-Milwaukee (GCM) Corridor which are currently...

  11. An improved filter elution and cell culture assay procedure for evaluating public groundwater systems for culturable enteroviruses.

    PubMed

    Dahling, Daniel R

    2002-01-01

    Large-scale virus studies of groundwater systems require practical and sensitive procedures for both sample processing and viral assay. Filter adsorption-elution procedures have traditionally been used to process large-volume water samples for viruses. In this study, five filter elution procedures using cartridge filters were evaluated for their effectiveness in processing samples. Of the five procedures tested, the third method, which incorporated two separate beef extract elutions (one being an overnight filter immersion in beef extract), recovered 95% of seeded poliovirus compared with recoveries of 36 to 70% for the other methods. For viral enumeration, an expanded roller bottle quantal assay was evaluated using seeded poliovirus. This cytopathic-based method was considerably more sensitive than the standard plaque assay method. The roller bottle system was more economical than the plaque assay for the evaluation of comparable samples. Using roller bottles required less time and manipulation than the plaque procedure and greatly facilitated the examination of large numbers of samples. The combination of the improved filter elution procedure and the roller bottle assay for viral analysis makes large-scale virus studies of groundwater systems practical. This procedure was subsequently field tested during a groundwater study in which large-volume samples (exceeding 800 L) were processed through the filters.

  12. Large-screen display technology assessment for military applications

    NASA Astrophysics Data System (ADS)

    Blaha, Richard J.

    1990-08-01

    Full-color, large screen display systems can enhance military applications that require group presentation, coordinated decisions, or interaction between decision makers. The technology already plays an important role in operations centers, simulation facilities, conference rooms, and training centers. Some applications display situational, status, or briefing information, while others portray instructional material for procedural training or depict realistic panoramic scenes that are used in simulators. While each specific application requires unique values of luminance, resolution, response time, reliability, and the video interface, suitable performance can be achieved with available commercial large screen displays. Advances in the technology of large screen displays are driven by the commercial applications because the military applications do not provide the significant market share enjoyed by high definition television (HDTV), entertainment, advertisement, training, and industrial applications. This paper reviews the status of full-color, large screen display technologies and includes the performance and cost metrics of available systems. For this discussion, performance data is based upon either measurements made by our personnel or extractions from vendors' data sheets.

  13. Structural Similitude and Scaling Laws

    NASA Technical Reports Server (NTRS)

    Simitses, George J.

    1998-01-01

    Aircraft and spacecraft comprise the class of aerospace structures that require efficiency and wisdom in design, sophistication and accuracy in analysis, and numerous and careful experimental evaluations of components and prototypes, in order to achieve the necessary system reliability, performance, and safety. Preliminary and/or concept design entails the assemblage of system mission requirements, system expected performance, and identification of components and their connections as well as of manufacturing and system assembly techniques. This is accomplished through experience based on previous similar designs, and through the possible use of models to simulate the entire system characteristics. Detail design is heavily dependent on information and concepts derived from the previous steps. This information identifies critical design areas which need sophisticated analyses, and design and redesign procedures to achieve the expected component performance. This step may require several independent analysis models, which, in many instances, require component testing. The last step in the design process, before going to production, is the verification of the design. This step necessitates the production of large components and prototypes in order to test component and system analytical predictions and verify strength and performance requirements under the worst loading conditions that the system is expected to encounter in service. Clearly then, full-scale testing is in many cases necessary and always very expensive. In the aircraft industry, in addition to full-scale tests, certification and safety necessitate large component static and dynamic testing. Such tests are extremely difficult, time consuming, and absolutely necessary. Clearly, one should not expect that prototype testing will be totally eliminated in the aircraft industry. It is hoped, though, that we can reduce full-scale testing to a minimum.
Full-scale large component testing is necessary in other industries as well. Shipbuilding, automobile, and railway car construction all rely heavily on testing. Regardless of the application, a scaled-down (by a large factor) model (scale model) which closely represents the structural behavior of the full-scale system (prototype) can prove to be an extremely beneficial tool. This possible development must be based on the existence of certain structural parameters that control the behavior of a structural system when acted upon by static and/or dynamic loads. If such structural parameters exist, a scaled-down replica can be built, which will duplicate the response of the full-scale system. The two systems are then said to be structurally similar. The term, then, that best describes this similarity is structural similitude. Similarity of systems requires that the relevant system parameters be identical and that these systems be governed by a unique set of characteristic equations. Thus, if a relation or equation of variables is written for a system, it is valid for all systems which are similar to it. Each variable in a model is proportional to the corresponding variable of the prototype. This ratio, which plays an essential role in predicting the relationship between the model and its prototype, is called the scale factor.
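    The scale-factor idea can be made concrete with a textbook example (an invented illustration, not taken from this work): for a geometrically scaled replica of a cantilever beam in the same material, the Euler-Bernoulli first-mode frequency obeys f ∝ (1/L²)·√(EI/ρA), with I ∝ L⁴ and A ∝ L², so frequency scales inversely with size and the similitude prediction f_model = λ·f_prototype for a 1/λ-size model can be checked directly.

```python
import math

def cantilever_f1(E, rho, L, b, h):
    """First bending frequency (Hz) of a rectangular-section cantilever:
    f = (beta1^2 / 2*pi) * sqrt(E*I / (rho*A)) / L^2, with beta1 = 1.875."""
    I = b * h**3 / 12.0   # second moment of area
    A = b * h             # cross-sectional area
    return (1.875**2 / (2.0 * math.pi)) * math.sqrt(E * I / (rho * A)) / L**2

E, rho = 70e9, 2700.0  # aluminum properties, assumed for illustration
f_full = cantilever_f1(E, rho, 10.0, 0.5, 0.02)    # 10 m prototype
f_model = cantilever_f1(E, rho, 1.0, 0.05, 0.002)  # 1/10-scale replica
# similitude prediction: the scale model vibrates at 10x the prototype frequency
assert abs(f_model / f_full - 10.0) < 1e-6
```

Measuring the model's frequency and dividing by the scale factor therefore recovers the prototype's, which is exactly the kind of prediction structural similitude is meant to enable.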

  14. Development of distortion measurement system for large deployable antenna via photogrammetry in vacuum and cryogenic environment

    NASA Astrophysics Data System (ADS)

    Zhang, Pengsong; Jiang, Shanping; Yang, Linhua; Zhang, Bolun

    2018-01-01

    In order to meet the requirement of high-precision thermal distortion measurement for a Φ4.2 m deployable mesh antenna of a satellite in a vacuum and cryogenic environment, a large-scale antenna distortion measurement system for vacuum and cryogenic environments was developed, based on digital close-range photogrammetry and spacecraft space environment test technology. The Antenna Distortion Measurement System (ADMS) is the first domestically independently developed thermal distortion measurement system for large antennas, and it has successfully solved the problem of non-contact, high-precision distortion measurement of large spacecraft structures under vacuum and cryogenic conditions. The measurement accuracy of the ADMS is better than 50 μm over 5 m, reaching the international advanced level. The experimental results show that the measurement system has great advantages for structural measurement of large spacecraft and also has broad application prospects in space and other related fields.

  15. Solar simulator for concentrator photovoltaic systems.

    PubMed

    Domínguez, César; Antón, Ignacio; Sala, Gabriel

    2008-09-15

    A solar simulator for measuring performance of large area concentrator photovoltaic (CPV) modules is presented. Its illumination system is based on a Xenon flash light and a large area collimator mirror, which simulates natural sun light. Quality requirements imposed by the CPV systems have been characterized: irradiance level and uniformity at the receiver, light collimation and spectral distribution. The simulator allows indoor fast and cost-effective performance characterization and classification of CPV systems at the production line as well as module rating carried out by laboratories.

  16. Systems of Inhomogeneous Linear Equations

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Many problems in physics, and especially computational physics, involve systems of linear equations which arise, e.g., from linearization of a general nonlinear problem or from discretization of differential equations. If the dimension of the system is not too large, standard methods like Gaussian elimination or QR decomposition are sufficient. Systems with a tridiagonal matrix are important for cubic spline interpolation and numerical second derivatives; they can be solved very efficiently with a specialized Gaussian elimination method. Practical applications often involve very large dimensions and require iterative methods. Convergence of the Jacobi and Gauss-Seidel methods is slow and can be improved by relaxation or over-relaxation. An alternative for large systems is the method of conjugate gradients.
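    The specialized elimination for tridiagonal systems mentioned above is the classic Thomas algorithm, which solves in O(n) operations what dense Gaussian elimination would solve in O(n³). A minimal sketch:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal `a` (a[0] unused),
    diagonal `b`, super-diagonal `c` (c[-1] unused), and right-hand side
    `d`, via forward elimination and back substitution."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]          # eliminated pivot
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

This is exactly the structure produced by cubic spline interpolation or a second-derivative finite-difference stencil, so the O(n) cost matters even before iterative methods become necessary.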

  17. Space Fed Subarray Synthesis Using Displaced Feed Location

    NASA Astrophysics Data System (ADS)

    Mailloux, Robert J.

    2002-01-01

    Wideband space-fed subarray systems are often proposed for large airborne or spaceborne scanning array applications. These systems allow the introduction of time delay devices at the subarray input terminals while using phase shifters in the array face. This can sometimes reduce the number of time-delayed controls by an order of magnitude or more. The implementation of this technology has been slowed because the feed network, usually a Rotman Lens or Butler Matrix, is bulky, heavy, and often has significant RF loss. In addition, the large lens aperture is necessarily filled with phase shifters, and so it introduces further loss, weight, and perhaps unacceptable phase shifter control power. These systems are currently viewed with increased interest because the combination of low-loss, low-power MEMS phase shifters in the main aperture and solid state T/R modules in the feed might lead to large scanning arrays with much higher efficiency than previously realizable. Unfortunately, the conventional system design imposes an extremely large dynamic range requirement when used in the transmit mode, and requires very high output power from the T/R modules. This paper presents one possible solution to this problem using a modified feed geometry.

  18. Packed Bed Bioreactor for the Isolation and Expansion of Placental-Derived Mesenchymal Stromal Cells

    PubMed Central

    Osiecki, Michael J.; Michl, Thomas D.; Kul Babur, Betul; Kabiri, Mahboubeh; Atkinson, Kerry; Lott, William B.; Griesser, Hans J.; Doran, Michael R.

    2015-01-01

    Large numbers of mesenchymal stem/stromal cells (MSCs) are required to deliver clinically relevant doses for treating a number of diseases. To economically manufacture these MSCs, an automated bioreactor system will be required. Herein we describe the development of a scalable closed-system, packed bed bioreactor suitable for large-scale MSC expansion. The packed bed was formed from fused polystyrene pellets that were air plasma treated to endow them with a surface chemistry similar to traditional tissue culture plastic. The packed bed was encased within a gas permeable shell to decouple the medium nutrient supply and gas exchange. This enabled a significant reduction in medium flow rates, thus reducing shear and even facilitating single pass medium exchange. The system was optimised in a small-scale bioreactor format (160 cm2) with murine-derived green fluorescent protein-expressing MSCs, and then scaled-up to a 2800 cm2 format. We demonstrated that placental-derived MSCs could be isolated directly within the bioreactor and subsequently expanded. Our results demonstrate that the closed-system large-scale packed bed bioreactor is an effective and scalable tool for large-scale isolation and expansion of MSCs. PMID:26660475

  19. Packed Bed Bioreactor for the Isolation and Expansion of Placental-Derived Mesenchymal Stromal Cells.

    PubMed

    Osiecki, Michael J; Michl, Thomas D; Kul Babur, Betul; Kabiri, Mahboubeh; Atkinson, Kerry; Lott, William B; Griesser, Hans J; Doran, Michael R

    2015-01-01

    Large numbers of mesenchymal stem/stromal cells (MSCs) are required to deliver clinically relevant doses for treating a number of diseases. To economically manufacture these MSCs, an automated bioreactor system will be required. Herein we describe the development of a scalable closed-system, packed bed bioreactor suitable for large-scale MSC expansion. The packed bed was formed from fused polystyrene pellets that were air plasma treated to endow them with a surface chemistry similar to traditional tissue culture plastic. The packed bed was encased within a gas permeable shell to decouple the medium nutrient supply and gas exchange. This enabled a significant reduction in medium flow rates, thus reducing shear and even facilitating single pass medium exchange. The system was optimised in a small-scale bioreactor format (160 cm2) with murine-derived green fluorescent protein-expressing MSCs, and then scaled-up to a 2800 cm2 format. We demonstrated that placental-derived MSCs could be isolated directly within the bioreactor and subsequently expanded. Our results demonstrate that the closed-system large-scale packed bed bioreactor is an effective and scalable tool for large-scale isolation and expansion of MSCs.

  20. Space fabrication demonstration system: Executive summary [for large space structures]

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The results of analysis and tests conducted to define the basic 1-m beam configuration required, and the design, development, fabrication, and verification tests of the machine required to automatically produce these beams are presented.

  1. Multi-Modal Traveler Information System - GCM Corridor Architecture Interface Control Requirements

    DOT National Transportation Integrated Search

    1997-10-31

    The Multi-Modal Traveler Information System (MMTIS) project involves a large number of Intelligent Transportation System (ITS) related tasks. It involves research of all ITS initiatives in the Gary-Chicago-Milwaukee (GCM) Corridor which are currently...

  2. Multi-Modal Traveler Information System - GCM Corridor Architecture Functional Requirements

    DOT National Transportation Integrated Search

    1997-11-17

    The Multi-Modal Traveler Information System (MMTIS) project involves a large number of Intelligent Transportation System (ITS) related tasks. It involves research of all ITS initiatives in the Gary-Chicago-Milwaukee (GCM) Corridor which are currently...

  3. A paradoxical improvement of misreaching in optic ataxia: new evidence for two separate neural systems for visual localization.

    PubMed

    Milner, A D; Paulignan, Y; Dijkerman, H C; Michel, F; Jeannerod, M

    1999-11-07

    We tested a patient (A. T.) with bilateral brain damage to the parietal lobes, whose resulting 'optic ataxia' causes her to make large pointing errors when asked to locate single light emitting diodes presented in her visual field. We report here that, unlike normal individuals, A. T.'s pointing accuracy improved when she was required to wait for 5 s before responding. This counter-intuitive result is interpreted as reflecting the very brief time-scale on which visuomotor control systems in the superior parietal lobe operate. When an immediate response was required, A. T.'s damaged visuomotor system caused her to make large errors; but when a delay was required, a different, more flexible, visuospatial coding system--presumably relatively intact in her brain--came into play, resulting in much more accurate responses. The data are consistent with a dual processing theory whereby motor responses made directly to visual stimuli are guided by a dedicated system in the superior parietal and premotor cortices, while responses to remembered stimuli depend on perceptual processing and may thus crucially involve processing within the temporal neocortex.

  4. Antenna Electronics Concept for the Next-Generation Very Large Array

    NASA Astrophysics Data System (ADS)

    Beasley, Anthony J.; Jackson, Jim; Selina, Robert

    2017-01-01

    The National Radio Astronomy Observatory (NRAO), in collaboration with its international partners, completed two major projects over the past decade: the sensitivity upgrade for the Karl Jansky Very Large Array (VLA) and the construction of the Atacama Large Millimeter/Sub-Millimeter Array (ALMA). The NRAO is now considering the scientific potential and technical feasibility of a next-generation VLA (ngVLA) with an emphasis on thermal imaging at milli-arcsecond resolution. The preliminary goals for the ngVLA are to increase both the system sensitivity and angular resolution of the VLA tenfold and to cover a frequency range of 1.2-116 GHz. A number of key technical challenges have been identified for the project. These include cost-effective antenna manufacturing (in the hundreds), suitable wide-band feed and receiver designs, broad-band data transmission, and large-N correlators. Minimizing the overall operations cost is also a fundamental design requirement. The designs of the antenna electronics, reference distribution system, and data transmission system are anticipated to be major construction and operations cost drivers for the facility. The electronics must achieve a high level of performance, while maintaining low operation and maintenance costs and a high level of reliability. Additionally, due to the uncertainty in the feasibility of wideband receivers, advancements in digitizer technology, and budget constraints, the hardware system architecture should be scalable in the number of receiver bands and in the speed and resolution of available digitizers. Here, we present the projected performance requirements of the ngVLA, a proposed block diagram for the instrument’s electronics systems, parameter tradeoffs within the system specifications, and areas of risk where technical advances may be required for successful production and installation.

  5. The Requirements Generation System: A tool for managing mission requirements

    NASA Technical Reports Server (NTRS)

    Sheppard, Sylvia B.

    1994-01-01

    Historically, NASA's cost for developing mission requirements has been a significant part of a mission's budget. Large amounts of time have been allocated in mission schedules for the development and review of requirements by the many groups who are associated with a mission. Additionally, tracing requirements from a current document to a parent document has been time-consuming and costly. The Requirements Generation System (RGS) is a computer-supported cooperative-work tool that assists mission developers in the online creation, review, editing, tracing, and approval of mission requirements as well as in the production of requirements documents. This paper describes the RGS and discusses some lessons learned during its development.

  6. Implementation of a large-scale hospital information infrastructure for multi-unit health-care services.

    PubMed

    Yoo, Sun K; Kim, Dong Keun; Kim, Jung C; Park, Youn Jung; Chang, Byung Chul

    2008-01-01

    With the increase in demand for high quality medical services, the need for an innovative hospital information system has become essential. An improved system has been implemented in all hospital units of the Yonsei University Health System. Interoperability between multi-units required appropriate hardware infrastructure and software architecture. This large-scale hospital information system encompassed PACS (Picture Archiving and Communications Systems), EMR (Electronic Medical Records) and ERP (Enterprise Resource Planning). It involved two tertiary hospitals and 50 community hospitals. The monthly data production rate by the integrated hospital information system is about 1.8 TByte and the total quantity of data produced so far is about 60 TByte. Large scale information exchange and sharing will be particularly useful for telemedicine applications.

  7. Adaptive optical system for writing large holographic optical elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tyutchev, M.V.; Kalyashov, E.V.; Pavlov, A.P.

    1994-11-01

    This paper formulates the requirements imposed on systems for correcting the phase-difference distribution of recording waves over the field of a large-diameter photographic plate (≤1.5 m) when writing holographic optical elements (HOEs). A technique is proposed for writing large HOEs, based on the use of an adaptive phase-correction optical system of the first type, controlled by the self-diffraction signal from a latent image. The technique is implemented by writing HOEs on photographic plates with an effective diameter of 0.7 m on As₂S₃ layers. 13 refs., 4 figs.

  8. Micro-precision control/structure interaction technology for large optical space systems

    NASA Technical Reports Server (NTRS)

    Sirlin, Samuel W.; Laskin, Robert A.

    1993-01-01

    The CSI program at JPL is chartered to develop the structures and control technology needed for sub-micron level stabilization of future optical space systems. The extreme dimensional stability required for such systems derives from the need to maintain the alignment and figure of critical optical elements to a small fraction (typically 1/20th to 1/50th) of the wavelength of detected radiation. The wavelength is about 0.5 micron for visible light and 0.1 micron for ultra-violet light. This lambda/50 requirement is common to a broad class of optical systems including filled aperture telescopes (with monolithic or segmented primary mirrors), sparse aperture telescopes, and optical interferometers. The challenge for CSI arises when such systems become large, with spatially distributed optical elements mounted on a lightweight, flexible structure. In order to better understand the requirements for micro-precision CSI technology, a representative future optical system was identified and developed as an analytical testbed for CSI concepts and approaches. An optical interferometer was selected as a stressing example of the relevant mission class. The system that emerged was termed the Focus Mission Interferometer (FMI). This paper will describe the multi-layer control architecture used to address the FMI's nanometer level stabilization requirements. In addition the paper will discuss on-going and planned experimental work aimed at demonstrating that multi-layer CSI can work in practice in the relevant performance regime.
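    The lambda/20 to lambda/50 fractions and the two wavelengths quoted above imply nanometre-scale stability budgets. A quick check of the numbers (only the fractions and wavelengths come from the abstract; the script is illustrative):

```python
# Wavefront stability budget: lambda/20 to lambda/50 of the detected wavelength.
wavelengths_um = {"visible": 0.5, "ultraviolet": 0.1}

for band, lam_um in wavelengths_um.items():
    lam_nm = lam_um * 1000.0
    loose = lam_nm / 20.0   # loose end of the quoted range (lambda/20)
    tight = lam_nm / 50.0   # tight end of the quoted range (lambda/50)
    print(f"{band}: {tight:.0f}-{loose:.0f} nm stability required")
# visible: 10-25 nm; ultraviolet: 2-5 nm
```

    The ultraviolet case shows why the paper speaks of sub-micron, indeed few-nanometre, stabilization on a flexible structure.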

  9. Study of multi-functional precision optical measuring system for large scale equipment

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Lao, Dabao; Zhou, Weihu; Zhang, Wenying; Jiang, Xingjian; Wang, Yongxi

    2017-10-01

    The effective application of high-performance measurement technology can greatly improve large-scale equipment manufacturing capability. Measuring geometric parameters such as size, attitude, and position therefore requires a measurement system with high precision, multiple functions, and portability. However, existing instruments, such as the laser tracker, total station, and photogrammetry system, mostly serve a single function and must be relocated between stations. The laser tracker needs a cooperative target and can hardly meet measurement requirements in extreme environments. The total station is mainly intended for outdoor surveying and mapping, and it struggles to achieve the accuracy demanded in industrial measurement. The photogrammetry system can achieve wide-range multi-point measurement, but its measuring range is limited and the station must be moved repeatedly. This paper presents a non-contact opto-electronic measuring instrument that can both scan along a measurement path and track and measure a cooperative target. The system is based on several key technologies: absolute distance measurement, two-dimensional angle measurement, automatic target recognition and accurate aiming, precision control, assembly of a complex mechanical system, and multi-functional 3D visualization software. Among them, the absolute distance measurement module ensures high-accuracy measurement, and the two-dimensional angle measurement module provides precision angle measurement. The system is suitable for non-contact measurement of large-scale equipment; it can ensure the quality and performance of such equipment throughout the manufacturing process and improve the manufacturing capability of large-scale, high-end equipment.
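    The measurement principle described, one absolute distance plus two orthogonal angles per point, reduces to a spherical-to-Cartesian conversion. A minimal sketch (the angle conventions and function name are illustrative assumptions, not taken from the paper):

```python
import math

def polar_to_cartesian(distance_m, azimuth_rad, elevation_rad):
    """Convert one (range, horizontal angle, vertical angle) measurement
    into Cartesian coordinates in the instrument frame."""
    x = distance_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = distance_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = distance_m * math.sin(elevation_rad)
    return x, y, z

# Example: a target 10 m away, at 30 deg azimuth and 10 deg elevation.
x, y, z = polar_to_cartesian(10.0, math.radians(30), math.radians(10))
# The range is recovered: sqrt(x^2 + y^2 + z^2) equals 10 m.
```

    In practice the angle accuracy dominates the coordinate error at long range, which is why the abstract stresses the precision of the two-dimensional angle measurement module.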

  10. Advanced space system concepts and their orbital support needs (1980 - 2000). Volume 2: Final report

    NASA Technical Reports Server (NTRS)

    Bekey, I.; Mayer, H. L.; Wolfe, M. G.

    1976-01-01

    The results are presented of a study which identifies over 100 new and highly capable space systems for the 1980-2000 time period: civilian systems which could bring benefits to large numbers of average citizens in everyday life, greatly enhance the kinds and levels of public services, increase the economic motivation for industrial investment in space, and expand scientific horizons; and, in the military area, systems which could materially alter current concepts of tactical and strategic engagements. The requirements for space transportation, orbital support, and technology for these systems are derived, and those likely to be shared between NASA and the DoD in that time period are identified. The high-leverage technologies for the period are identified as very large microwave antennas and optics, high-energy power subsystems, high-precision and high-power lasers, microelectronic circuit complexes and data processors, mosaic solid-state sensing devices, and long-life cryogenic refrigerators.

  11. Large space telescope engineering scale model optical design

    NASA Technical Reports Server (NTRS)

    Facey, T. A.

    1973-01-01

    The objective is to develop the detailed design and tolerance data for the LST engineering scale model optical system. This will enable MSFC to move forward to the optical element procurement phase and also to evaluate tolerances, manufacturing requirements, assembly/checkout procedures, reliability, operational complexity, stability requirements of the structure and thermal system, and the flexibility to change and grow.

  12. Large Deployable Reflector (LDR) system concept and technology definition study. Analysis of space station requirements for LDR

    NASA Astrophysics Data System (ADS)

    Agnew, Donald L.; Vinkey, Victor F.; Runge, Fritz C.

    1989-04-01

    A study was conducted to determine how the Large Deployable Reflector (LDR) might benefit from the use of the space station for assembly, checkout, deployment, servicing, refurbishment, and technology development. Requirements that must be met by the space station to supply benefits for a selected scenario are summarized. Quantitative and qualitative data are supplied. Space station requirements for LDR which may be utilized by other missions are identified. A technology development mission for LDR is outlined and requirements summarized. A preliminary experiment plan is included. Space Station Data Base SAA 0020 and TDM 2411 are updated.

  13. Large Deployable Reflector (LDR) system concept and technology definition study. Analysis of space station requirements for LDR

    NASA Technical Reports Server (NTRS)

    Agnew, Donald L.; Vinkey, Victor F.; Runge, Fritz C.

    1989-01-01

    A study was conducted to determine how the Large Deployable Reflector (LDR) might benefit from the use of the space station for assembly, checkout, deployment, servicing, refurbishment, and technology development. Requirements that must be met by the space station to supply benefits for a selected scenario are summarized. Quantitative and qualitative data are supplied. Space station requirements for LDR which may be utilized by other missions are identified. A technology development mission for LDR is outlined and requirements summarized. A preliminary experiment plan is included. Space Station Data Base SAA 0020 and TDM 2411 are updated.

  14. Economic and energetic analysis of capturing CO2 from ambient air

    PubMed Central

    House, Kurt Zenz; Baclig, Antonio C.; Ranjan, Manya; van Nierop, Ernst A.; Wilcox, Jennifer; Herzog, Howard J.

    2011-01-01

    Capturing carbon dioxide from the atmosphere (“air capture”) in an industrial process has been proposed as an option for stabilizing global CO2 concentrations. Published analyses suggest these air capture systems may cost a few hundred dollars per tonne of CO2, making them cost-competitive with mainstream CO2 mitigation options like renewable energy, nuclear power, and carbon dioxide capture and storage from large CO2-emitting point sources. We investigate the thermodynamic efficiencies of commercial separation systems as well as trace gas removal systems to better understand and constrain the energy requirements and costs of these air capture systems. Our empirical analyses of operating commercial processes suggest that the energetic and financial costs of capturing CO2 from the air are likely to have been underestimated. Specifically, our analysis of existing gas separation systems suggests that, unless air capture significantly outperforms these systems, it is likely to require more than 400 kJ of work per mole of CO2, requiring it to be powered by CO2-neutral power sources in order to be CO2 negative. We estimate that total system costs of an air capture system will be on the order of $1,000 per tonne of CO2, based on experience with as-built large-scale trace gas removal systems. PMID:22143760
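    The 400 kJ-per-mole figure converts to a per-tonne energy requirement with nothing more than the molar mass of CO2 (about 44.01 g/mol). Only the 400 kJ/mol lower bound comes from the abstract; the unit conversion is standard:

```python
# Convert the quoted >400 kJ/mol separation work into per-tonne terms.
work_kj_per_mol = 400.0        # lower bound quoted in the abstract
molar_mass_g_per_mol = 44.01   # grams of CO2 per mole

gj_per_tonne = work_kj_per_mol / molar_mass_g_per_mol  # kJ/g equals GJ/tonne
kwh_per_tonne = gj_per_tonne * 1e9 / 3.6e6             # 1 kWh = 3.6e6 J
print(f"{gj_per_tonne:.1f} GJ/t  ({kwh_per_tonne:.0f} kWh/t)")
# roughly 9.1 GJ, i.e. about 2,500 kWh, per tonne of CO2
```

    That energy intensity is what drives the paper's conclusion that air capture must be powered by CO2-neutral sources to be net negative.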

  15. Laser photovoltaic power system synergy for SEI applications

    NASA Technical Reports Server (NTRS)

    Landis, Geoffrey A.; Hickman, J. M.

    1991-01-01

    Solar arrays can provide reliable space power, but do not operate when there is no solar energy. Photovoltaic arrays can also convert laser energy with high efficiency. One proposal to reduce the mass of energy storage required is to illuminate the photovoltaic arrays with a ground laser system. It is proposed to locate large lasers on cloud-free sites at one or more ground locations, and use large lenses or mirrors with adaptive optical correction to reduce the beam spread due to diffraction or atmospheric turbulence. During the eclipse periods or lunar night, the lasers illuminate the solar arrays to a level sufficient to provide operating power.

  16. Post-Attack Economic Stabilization Issues for Federal, State, and Local Governments

    DTIC Science & Technology

    1985-02-01

    workers being transferred from large urban areas to production facilities in areas of lower risk. In another case, rent control staff should be quickly ... food supermarkets, which do not universally accept bank cards. A requirement will still exist for a large number of credit cards. While there is some ... separate system is required for rationing. For example, the increasingly popular automatic teller machine (ATM) debit card routinely accesses both a

  17. Facilitating access to information in large documents with an intelligent hypertext system

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie

    1993-01-01

    Retrieving specific information from large amounts of documentation is not an easy task. It could be facilitated if information relevant in the current problem-solving context could be automatically supplied to the user. As a first step towards this goal, we have developed an intelligent hypertext system called CID (Computer Integrated Documentation) and tested it on the Space Station Freedom requirement documents. The CID system enables integration of various technical documents in a hypertext framework and includes an intelligent context-sensitive indexing and retrieval mechanism. This mechanism utilizes online user information requirements and relevance feedback either to reinforce current indexing in case of success or to generate new knowledge in case of failure. This allows the CID system to provide helpful responses, based on previous usage of the documentation, and to improve its performance over time.
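    The reinforcement scheme described, strengthening an index link when retrieval succeeds and weakening it when it fails, can be sketched with a simple weighted index. All names and the update rule below are illustrative assumptions, not the actual CID algorithm:

```python
from collections import defaultdict

class FeedbackIndex:
    """Toy context-sensitive index: maps (query term, context) pairs to
    candidate document sections with weights adjusted by user feedback."""

    def __init__(self, learning_rate=0.1):
        self.weights = defaultdict(dict)   # (term, context) -> {section: weight}
        self.lr = learning_rate

    def retrieve(self, term, context):
        """Return candidate sections, best-weighted first."""
        candidates = self.weights.get((term, context), {})
        return sorted(candidates, key=candidates.get, reverse=True)

    def feedback(self, term, context, section, relevant):
        """Reinforce the link on success; weaken it on failure."""
        w = self.weights[(term, context)].get(section, 0.5)
        w += self.lr * (1.0 - w) if relevant else -self.lr * w
        self.weights[(term, context)][section] = w

idx = FeedbackIndex()
idx.feedback("power", "assembly", "sec-4.2", relevant=True)
idx.feedback("power", "assembly", "sec-9.1", relevant=False)
assert idx.retrieve("power", "assembly")[0] == "sec-4.2"
```

    The key property, matching the abstract, is that ranking improves with usage rather than being fixed at indexing time.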

  18. Controls, health assessment, and conditional monitoring for large, reusable, liquid rocket engines

    NASA Technical Reports Server (NTRS)

    Cikanek, H. A., III

    1986-01-01

    Past and future progress in the performance of control systems for large liquid rocket engines, typified by the current state of the art, the Space Shuttle Main Engine (SSME), is discussed. Details of the first decade of efforts, which culminated in the F-1 and J-2 Saturn engine control systems, are traced, noting problem modes and the improvements that were implemented to realize the SSME. Future control system designs, to accommodate the requirements of operating engines for a heavy lift launch vehicle, an orbital transfer vehicle, and the aerospace plane, are summarized. Generic design upgrades needed include an expanded range of fault detection, maintenance as needed instead of as scheduled, reduced human involvement in engine operations, and increased control of internal engine states. Current NASA technology development programs aimed at meeting the future control system requirements are described.
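    One of the generic upgrades named above, an expanded range of fault detection, is commonly built on redundant sensing with mid-value selection in engine controllers of this class. A hedged sketch of that standard technique (the function names, tolerance, and readings are illustrative, not details from this paper):

```python
def mid_value_select(a, b, c):
    """Pick the middle of three redundant sensor readings so that a
    single failed channel cannot drive the control input."""
    return sorted((a, b, c))[1]

def channel_failed(reading, selected, tolerance):
    """Declare a channel failed if it strays too far from the selected value."""
    return abs(reading - selected) > tolerance

# Illustrative chamber-pressure channels (psi); one channel is stuck high.
readings = (3024.0, 3031.0, 4980.0)
selected = mid_value_select(*readings)
faults = [channel_failed(r, selected, tolerance=150.0) for r in readings]
# selected == 3031.0; only the third channel is flagged
```

    Mid-value selection tolerates one hard failure per measurement without interrupting closed-loop control, which is the kind of behavior "expanded fault detection" builds upon.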

  19. Climate change induced transformations of agricultural systems: insights from a global model

    NASA Astrophysics Data System (ADS)

    Leclère, D.; Havlík, P.; Fuss, S.; Schmid, E.; Mosnier, A.; Walsh, B.; Valin, H.; Herrero, M.; Khabarov, N.; Obersteiner, M.

    2014-12-01

    Climate change might impact crop yields considerably, and anticipated transformations of agricultural systems are needed in the coming decades to sustain affordable food provision. However, decision-making on transformational shifts in agricultural systems is plagued by uncertainties concerning the nature and geography of climate change, its impacts, and adequate responses. Locking agricultural systems into inadequate transformations that are costly to adjust is a significant risk, and this acts as an incentive to delay action. It is crucial to gain insight into how much transformation is required from agricultural systems, how robust such strategies are, and how we can defuse the associated challenge for decision-making. Implementing a definition based on large changes in resource use within a global impact assessment modelling framework, we find transformational adaptations to be required of agricultural systems in most regions by the 2050s in order to cope with climate change. However, these transformations differ widely across climate change scenarios: uncertainties in large-scale development of irrigation span all continents from the 2030s on and affect two-thirds of regions by the 2050s. Meanwhile, significant but uncertain reduction of major agricultural areas affects the Northern Hemisphere’s temperate latitudes, while increases to non-agricultural zones could be large but uncertain in one-third of regions. To help reduce the associated challenge for decision-making, we propose a methodology exploring which, when, where, and why transformations could be required and uncertain, by means of scenario analysis.

  20. Space construction data base

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Construction of large systems in space is a technology requiring the development of construction methods to deploy, assemble, and fabricate the elements comprising such systems. A construction method is comprised of all essential functions and operations and related support equipment necessary to accomplish a specific construction task in a particular way. The data base objective is to provide to the designers of large space systems a compendium of the various space construction methods which could have application to their projects.

  1. System Verification Through Reliability, Availability, Maintainability (RAM) Analysis & Technology Readiness Levels (TRLs)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emmanuel Ohene Opare, Jr.; Charles V. Park

    The Next Generation Nuclear Plant (NGNP) Project, managed by the Idaho National Laboratory (INL), was authorized by the Energy Policy Act of 2005 to research, develop, design, construct, and operate a prototype fourth-generation nuclear reactor to meet the needs of the 21st century. A section in this document proposes that the NGNP will provide heat for process heat applications. As with all large projects developing and deploying new technologies, the NGNP is expected to meet high performance and availability targets relative to current state-of-the-art systems and technology. One requirement for the NGNP is to provide heat for large-scale hydrogen generation, and this process heat application is required to be at least 90% available relative to other technologies currently on the market. To reach this goal, a RAM Roadmap was developed highlighting the actions to be taken to ensure that various milestones in system development and maturation concurrently meet the required availability targets. Integral to the RAM Roadmap was a RAM analytical/simulation tool, which was used to estimate the availability of the system when deployed, based on the current design configuration and the maturation level of the system.

  2. Key Performance Parameter Driven Technology Goals for Electric Machines and Power Systems

    NASA Technical Reports Server (NTRS)

    Bowman, Cheryl; Jansen, Ralph; Brown, Gerald; Duffy, Kirsten; Trudell, Jeffrey

    2015-01-01

    Transitioning aviation to low-carbon propulsion is one of the crucial strategic research thrusts and a driver in the search for alternative propulsion systems for advanced aircraft configurations. This work requires multidisciplinary skills coming from multiple entities. The feasibility of scaling up various electric drive system technologies to meet the requirements of a large commercial transport is discussed in terms of key parameters. Functional requirements are identified that impact the power system design. A breakeven analysis is presented to find the minimum allowable electric drive specific power and efficiency that can preserve the range, initial weight, operating empty weight, and payload weight of the base aircraft.
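    A breakeven analysis of the kind mentioned can be sketched as a mass comparison: the electric drive must fit within the mass budget freed up by the components it replaces. The simple model and the numbers below are illustrative assumptions, not the paper's data:

```python
def electric_drive_mass_kg(power_kw, specific_power_kw_per_kg):
    """Mass of an electric machine plus power electronics for a given
    shaft power and system-level specific power."""
    return power_kw / specific_power_kw_per_kg

def breakeven_specific_power(power_kw, mass_budget_kg):
    """Minimum specific power (kW/kg) that keeps the drive within the
    mass budget it must not exceed."""
    return power_kw / mass_budget_kg

# Illustrative: a 10 MW drive with a 5,000 kg mass budget must reach 2 kW/kg.
required = breakeven_specific_power(10_000.0, 5_000.0)
```

    In a full analysis the budget side would also credit fuel savings from higher drive efficiency, which is why the paper treats specific power and efficiency jointly.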

  3. System simulation of direct-current speed regulation based on Simulink

    NASA Astrophysics Data System (ADS)

    Yang, Meiying

    2018-06-01

    In modern industrial production, many machines require smooth speed adjustment over a certain range, together with good steady-state and dynamic performance. A direct-current speed regulation system offers a wide speed range, small relative speed variation, good stability, and large overload capacity; it can bear frequent impact loads and can realize stepless rapid starting, braking, and reversing. It can therefore meet the varied special operating requirements of automated production processes. For a long time, the direct-current power drive system was the dominant choice in high-performance drive technology.

  4. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, Roy H.; Beckman-Davies, C. S.; Benzinger, L.; Beshers, G.; Laliberte, D.; Render, H.; Sum, R.; Smith, W.; Terwilliger, R.

    1986-01-01

    Research into software development is required to reduce its production cost and to improve its quality. Modern software systems, such as the embedded software required for NASA's space station initiative, stretch current software engineering techniques. The requirements to build large, reliable, and maintainable software systems increase with time. Much theoretical and practical research is in progress to improve software engineering techniques. One such technique is to build a software system or environment which directly supports the software engineering process, i.e., the SAGA project, comprising the research necessary to design and build a software development environment which automates the software engineering process. Progress under SAGA is described.

  5. Software Reliability Issues Concerning Large and Safety Critical Software Systems

    NASA Technical Reports Server (NTRS)

    Kamel, Khaled; Brown, Barbara

    1996-01-01

    This research was undertaken to provide NASA with a survey of state-of-the-art techniques used in industry and academia to provide safe, reliable, and maintainable software to drive large systems. Such systems must match the complexity and strict safety requirements of NASA's shuttle system. In particular, the Launch Processing System (LPS) is being considered for replacement. The LPS is responsible for monitoring and commanding the shuttle during test, repair, and launch phases. NASA built this system in the 1970's using mostly hardware techniques to provide for increased reliability, but it often did so using custom-built equipment, which has not been able to keep up with current technologies. This report surveys the major techniques used in industry and academia to ensure reliability in large and critical computer systems.

  6. Energy Storage Requirements for Achieving 50% Penetration of Solar Photovoltaic Energy in California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denholm, Paul; Margolis, Robert

    2016-09-01

    We estimate the storage required to enable PV penetration up to 50% in California (with renewable penetration over 66%), and we quantify the complex relationships among storage, PV penetration, grid flexibility, and PV costs due to increased curtailment. We find that the storage needed depends strongly on the amount of other flexibility resources deployed. With very low-cost PV (three cents per kilowatt-hour) and a highly flexible electric power system, about 19 gigawatts of energy storage could enable 50% PV penetration with a marginal net PV levelized cost of energy (LCOE) comparable to the variable costs of future combined-cycle gas generators under carbon constraints. This system requires extensive use of flexible generation, transmission, demand response, and electrifying one quarter of the vehicle fleet in California with largely optimized charging. A less flexible system, or more expensive PV, would require significantly greater amounts of storage. The amount of storage needed to support very large amounts of PV might fit within a least-cost framework driven by declining storage costs and reduced storage-duration needs due to high PV penetration.

  7. Energy Storage Requirements for Achieving 50% Solar Photovoltaic Energy Penetration in California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denholm, Paul; Margolis, Robert

    2016-08-01

    We estimate the storage required to enable PV penetration up to 50% in California (with renewable penetration over 66%), and we quantify the complex relationships among storage, PV penetration, grid flexibility, and PV costs due to increased curtailment. We find that the storage needed depends strongly on the amount of other flexibility resources deployed. With very low-cost PV (three cents per kilowatt-hour) and a highly flexible electric power system, about 19 gigawatts of energy storage could enable 50% PV penetration with a marginal net PV levelized cost of energy (LCOE) comparable to the variable costs of future combined-cycle gas generators under carbon constraints. This system requires extensive use of flexible generation, transmission, demand response, and electrifying one quarter of the vehicle fleet in California with largely optimized charging. A less flexible system, or more expensive PV, would require significantly greater amounts of storage. The amount of storage needed to support very large amounts of PV might fit within a least-cost framework driven by declining storage costs and reduced storage-duration needs due to high PV penetration.
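    The "marginal net PV LCOE" idea, in which curtailment inflates the effective cost of the next increment of PV, can be sketched as a one-line adjustment. The formula and numbers below are an illustrative simplification, not the study's model:

```python
def marginal_lcoe(base_lcoe, curtailed_fraction):
    """Cost per *delivered* kWh once a fraction of the marginal PV
    unit's output is curtailed (0 <= curtailed_fraction < 1)."""
    return base_lcoe / (1.0 - curtailed_fraction)

# Illustrative: 3 cents/kWh PV whose marginal unit sees 40% curtailment.
cost = marginal_lcoe(0.03, 0.40)   # ~0.05 $/kWh delivered
```

    This is why storage and other flexibility resources matter: by absorbing otherwise-curtailed output, they keep the marginal delivered cost close to the base LCOE.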

  8. Enhanced job control language procedures for the SIMSYS2D two-dimensional water-quality simulation system

    USGS Publications Warehouse

    Karavitis, G.A.

    1984-01-01

    The SIMSYS2D two-dimensional water-quality simulation system is a large-scale digital modeling software system used to simulate flow and transport of solutes in freshwater and estuarine environments. Due to the size, processing requirements, and complexity of the system, there is a need to easily move the system and its associated files between computer sites when required. A series of job control language (JCL) procedures was written to allow transferability between IBM and IBM-compatible computers. (USGS)

  9. A large-stroke cryogenic imaging FTS system for SPICA-Safari

    NASA Astrophysics Data System (ADS)

    Jellema, Willem; van Loon, Dennis; Naylor, David; Roelfsema, Peter

    2014-08-01

    The scientific goals of the far-infrared astronomy mission SPICA challenge the design of a large-stroke imaging FTS for Safari, inviting the development of a new generation of cryogenic actuators with very low dissipation. In this paper we present the Fourier Transform Spectrometer (FTS) system concept, as foreseen for SPICA-Safari, and we discuss the technical developments required to satisfy the instrument performance.

  10. The Numerical Propulsion System Simulation: An Overview

    NASA Technical Reports Server (NTRS)

    Lytle, John K.

    2000-01-01

    Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of designing systems, providing the designer with critical information about the components early in the design process. This paper describes the development of the numerical propulsion system simulation (NPSS), a modular and extensible framework for the integration of multicomponent and multidisciplinary analysis tools using geographically distributed resources such as computing platforms, databases, and people. The analysis is currently focused on large-scale modeling of complete aircraft engines. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.

  11. The BaBar Data Reconstruction Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceseracciu, A

    2005-04-20

    The BaBar experiment is characterized by extremely high luminosity and very large volume of data produced and stored, with increasing computing requirements each year. To fulfill these requirements a Control System has been designed and developed for the offline distributed data reconstruction system. The control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full advantage of OO design. The infrastructure is well isolated from the processing layer; it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is distributed in a hierarchical way: the top-level system is organized in farms, farms in services, and services in subservices or code modules. It provides a powerful Finite State Machine framework to describe custom processing models in a simple regular language. This paper describes the design and evolution of this control system, currently in use at SLAC and Padova on ~450 CPUs organized in 9 farms.

  12. The BaBar Data Reconstruction Control System

    NASA Astrophysics Data System (ADS)

    Ceseracciu, A.; Piemontese, M.; Tehrani, F. S.; Pulliam, T. M.; Galeazzi, F.

    2005-08-01

    The BaBar experiment is characterized by extremely high luminosity and very large volume of data produced and stored, with increasing computing requirements each year. To fulfill these requirements a control system has been designed and developed for the offline distributed data reconstruction system. The control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full advantage of object-oriented (OO) design. The infrastructure is well isolated from the processing layer; it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is distributed in a hierarchical way: the top-level system is organized in farms, farms in services, and services in subservices or code modules. It provides a powerful finite state machine framework to describe custom processing models in a simple regular language. This paper describes the design and evolution of this control system, currently in use at SLAC and Padova on ~450 CPUs organized in nine farms.
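    The hierarchical, message-driven finite state machine organization described above (farms containing services, driven by passed messages) can be sketched minimally as follows. The class names, states, and transitions are illustrative assumptions, not BaBar's actual processing model.

```python
class Service:
    """One leaf of the hierarchy: a finite state machine driven by messages.
    The transition table is illustrative, not the BaBar framework's."""

    TRANSITIONS = {
        ("idle", "start"): "running",
        ("running", "pause"): "paused",
        ("paused", "start"): "running",
        ("running", "stop"): "idle",
        ("paused", "stop"): "idle",
    }

    def __init__(self, name):
        self.name = name
        self.state = "idle"

    def handle(self, message):
        # Unknown (state, message) pairs are ignored; the state is unchanged.
        self.state = self.TRANSITIONS.get((self.state, message), self.state)
        return self.state


class Farm:
    """One level up the hierarchy: a farm forwards control messages
    to all of its services."""

    def __init__(self, services):
        self.services = services

    def broadcast(self, message):
        return {s.name: s.handle(message) for s in self.services}


farm = Farm([Service("reco-1"), Service("reco-2")])
print(farm.broadcast("start"))  # both services transition to "running"
```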

  13. Space Vehicle Power System Comprised of Battery/Capacitor Combinations

    NASA Technical Reports Server (NTRS)

    Camarotte, C.; Lancaster, G. S.; Eichenberg, D.; Butler, S. M.; Miller, J. R.

    2002-01-01

    Recent improvements in energy densities of batteries open the possibility of using electric rather than hydraulic actuators in space vehicle systems. However, the systems usually require short-duration, high-power pulses. This power profile requires the battery system to be sized to meet the power requirements rather than stored energy requirements, often resulting in a large and inefficient energy storage system. Similar transient power applications have used a combination of two or more disparate energy storage technologies. For instance, placing a capacitor and a battery side-by-side combines the high energy density of a battery with the high power performance of a capacitor and thus can create a lighter and more compact system. A parametric study was performed to identify favorable scenarios for using capacitors. System designs were then carried out using equivalent circuit models developed for five commercial electrochemical capacitor products. Capacitors were sized to satisfy peak power levels and consequently "leveled" the power requirement of the battery, which can then be sized to meet system energy requirements. Simulation results clearly differentiate the performance offered by available capacitor products for the space vehicle applications.
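    The power-leveling idea in the abstract (capacitor sized for the pulse, battery sized for the average power and total energy) can be sketched with a simple calculation. The pulse figures below are illustrative assumptions, not the study's data.

```python
def size_hybrid(pulse_power_w, pulse_s, period_s):
    """Split a periodic pulsed load between a battery and a capacitor,
    per the power-leveling idea: the battery supplies the time-averaged
    power continuously, and the capacitor delivers the energy above the
    average during each pulse. A toy sizing sketch, not the study's
    equivalent-circuit models."""
    avg_power_w = pulse_power_w * pulse_s / period_s
    cap_energy_j = (pulse_power_w - avg_power_w) * pulse_s
    return avg_power_w, cap_energy_j

# 10 kW pulses lasting 1 s every 20 s: the battery sees only a 500 W
# average load, while the capacitor must deliver 9.5 kJ per pulse.
avg_w, cap_j = size_hybrid(10_000, 1.0, 20.0)
print(avg_w, cap_j)
```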

  14. Simulation and analysis of soil-water conditions in the Great Plains and adjacent areas, central United States, 1951-80

    USGS Publications Warehouse

    Dugan, Jack T.; Zelt, Ronald B.

    2000-01-01

    Ground-water recharge and consumptive-irrigation requirements in the Great Plains and adjacent areas largely depend upon an environment extrinsic to the ground-water system. This extrinsic environment, which includes climate, soils, and vegetation, determines the water demands of evapotranspiration, the availability of soil water to meet these demands, and the quantity of soil water remaining for potential ground-water recharge after these demands are met. The geographic extent of the Great Plains contributes to large regional differences among all elements composing the extrinsic environment, particularly the climatic factors. A soil-water simulation program, SWASP, which synthesizes selected climatic, soil, and vegetation factors, was used to simulate the regional soil-water conditions during 1951-80. The output from SWASP consists of several soil-water characteristics, including surface runoff, infiltration, consumptive water requirements, actual evapotranspiration, potential recharge or deep percolation under various conditions, consumptive irrigation requirements, and net fluxes from the ground-water system under irrigated conditions. Simulation results indicate that regional patterns of potential recharge, consumptive irrigation requirements, and net fluxes from the ground-water system under irrigated conditions are largely determined by evapotranspiration and precipitation. The local effects of soils and vegetation on potential recharge cause potential recharge to vary by more than 50 percent in some areas having similar climatic conditions.
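    The soil-water accounting that SWASP synthesizes (evapotranspiration demand met from precipitation and stored soil water, with the surplus becoming potential recharge) can be illustrated with a toy single-bucket balance. The step logic and numbers below are a simplification for illustration, not the SWASP program itself.

```python
def bucket_step(soil_mm, precip_mm, pet_mm, capacity_mm):
    """One time step of a simplified soil-water balance.

    Evapotranspiration is satisfied from precipitation plus stored soil
    water; any water beyond field capacity becomes potential recharge
    (deep percolation). Toy single-bucket model, not SWASP's algorithm."""
    water = soil_mm + precip_mm
    aet = min(pet_mm, water)                   # actual ET limited by available water
    water -= aet
    recharge = max(0.0, water - capacity_mm)   # surplus percolates below the root zone
    soil_mm = water - recharge
    return soil_mm, aet, recharge

# A wet month: 80 mm precipitation against 40 mm of ET demand fills the
# 75 mm bucket and yields 15 mm of potential recharge.
soil, aet, recharge = bucket_step(50.0, precip_mm=80.0, pet_mm=40.0, capacity_mm=75.0)
print(soil, aet, recharge)
```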

  15. Windvan laser study

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The goal of defining a CO2 laser transmitter approach suited to Shuttle Coherent Atmospheric Lidar Experiment (SCALE) requirements is discussed. The adaptation of the existing WINDVAN system to the shuttle environment is addressed. The size, weight, reliability, and efficiency of the existing WINDVAN system are largely compatible with SCALE requirements. Repackaging is needed for compatibility with vacuum and thermal environments. Changes are required to ensure survival through launch and landing mechanical, vibration, and acoustic loads. Existing WINDVAN thermal management approaches that depend on convection need to be upgraded for zero-gravity operations.

  16. Satellite Power Systems (SPS). LSST systems and integration task for SPS flight test article

    NASA Technical Reports Server (NTRS)

    Greenberg, H. S.

    1981-01-01

    This research activity emphasizes the systems definition and resulting structural requirements for the primary structure of two potential SPS large space structure test articles. These test articles represent potential steps in the SPS research and technology development.

  17. Thermal Environment for Classrooms. Central System Approach to Air Conditioning.

    ERIC Educational Resources Information Center

    Triechler, Walter W.

    This speech compares the air conditioning requirements of high-rise office buildings with those of large centralized school complexes. A description of one particular air conditioning system provides information about the system's arrangement, functions, performance efficiency, and cost effectiveness. (MLF)

  18. Solar electric propulsion and interorbital transportation

    NASA Technical Reports Server (NTRS)

    Austin, R. E.

    1978-01-01

    In-house MSFC and contracted systems studies have evaluated the requirements associated with candidate SEP missions and the results point to a standard system approach for both program flexibility and economy. The prospects for economical space transportation in the 1980s have already provided a stimulus for Space Industrialization (SI) planning. Two SI initiatives that are used as examples for interorbital transportation requirements are discussed: Public Service Platforms and the Satellite Power System. The interorbital requirements for SI range from support of manned geosynchronous missions to transfers of bulk cargo and large, delicate space structures from low earth orbit to geosynchronous orbit.

  19. Sidewall-box airlift pump provides large flows for aeration, CO2 stripping, and water rotation in large dual-drain circular tanks

    USDA-ARS?s Scientific Manuscript database

    Conventional gas transfer technologies for aquaculture systems occupy a large amount of space, require a considerable capital investment, and can contribute to high electricity demand. In addition, diffused aeration in a circular culture tank can interfere with the hydrodynamics of water rotation a...

  20. Large-Scale Multiobjective Static Test Generation for Web-Based Testing with Integer Programming

    ERIC Educational Resources Information Center

    Nguyen, M. L.; Hui, Siu Cheung; Fong, A. C. M.

    2013-01-01

    Web-based testing has become a ubiquitous self-assessment method for online learning. One useful feature that is missing from today's web-based testing systems is the reliable capability to fulfill different assessment requirements of students based on a large-scale question data set. A promising approach for supporting large-scale web-based…

  1. MIDAS prototype Multispectral Interactive Digital Analysis System for large area earth resources surveys. Volume 2: Charge coupled device investigation

    NASA Technical Reports Server (NTRS)

    Kriegler, F.; Marshall, R.; Sternberg, S.

    1976-01-01

    MIDAS is a third-generation, fast, low cost, multispectral recognition system able to keep pace with the large quantity and high rates of data acquisition from large regions with present and projected sensors. MIDAS, for example, can process a complete ERTS frame in forty seconds and provide a color map of sixteen constituent categories in a few minutes. A principal objective of the MIDAS Program is to provide a system well interfaced with the human operator and thus to obtain large overall reductions in turn-around time and significant gains in throughput. The need for advanced onboard spacecraft processing of remotely sensed data is stated and approaches to this problem are described which are feasible through the use of charge coupled devices. Tentative mechanizations for the required processing operations are given in large block form. These initial designs can serve as a guide to circuit/system designers.

  2. Developing closed life support systems for large space habitats

    NASA Technical Reports Server (NTRS)

    Phillips, J. M.; Harlan, A. D.; Krumhar, K. C.

    1978-01-01

    In anticipation of possible large-scale, long-duration space missions which may be conducted in the future, NASA has begun to investigate the research and technology development requirements to create life support systems for large space habitats. An analysis suggests the feasibility of a regeneration of food in missions which exceed four years duration. Regeneration of food in space may be justified for missions of shorter duration when large crews must be supported at remote sites such as lunar bases and space manufacturing facilities. It is thought that biological components consisting principally of traditional crop and livestock species will prove to be the most acceptable means of closing the food cycle. A description is presented of the preliminary results of a study of potential biological components for large space habitats. Attention is given to controlled ecosystems, Russian life support system research, controlled-environment agriculture, and the social aspects of the life-support system.

  3. Space construction system analysis. Part 2: Space construction experiments concepts

    NASA Technical Reports Server (NTRS)

    Boddy, J. A.; Wiley, L. F.; Gimlich, G. W.; Greenberg, H. S.; Hart, R. J.; Lefever, A. E.; Lillenas, A. N.; Totah, R. S.

    1980-01-01

    Technology areas in the orbital assembly of large space structures are addressed. The areas included structures, remotely operated assembly techniques, and control and stabilization. Various large space structure design concepts are reviewed and their construction procedures and requirements are identified.

  4. Reliability-Based Electronics Shielding Design Tools

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; O'Neill, P. J.; Zang, T. A.; Pandolf, J. E.; Tripathi, R. K.; Koontz, Steven L.; Boeder, P.; Reddell, B.; Pankop, C.

    2007-01-01

    Shielding design on large human-rated systems allows minimization of radiation impact on electronic systems. Shielding design tools require adequate methods for evaluation of design layouts, guiding qualification testing, and adequate follow-up on final design evaluation.

  5. Parallel-Processing Test Bed For Simulation Software

    NASA Technical Reports Server (NTRS)

    Blech, Richard; Cole, Gary; Townsend, Scott

    1996-01-01

    Second-generation Hypercluster computing system is multiprocessor test bed for research on parallel algorithms for simulation in fluid dynamics, electromagnetics, chemistry, and other fields with large computational requirements but relatively low input/output requirements. Built from standard, off-the-shelf hardware readily upgraded as improved technology becomes available. System used for experiments with such parallel-processing concepts as message-passing algorithms, debugging software tools, and computational steering. First-generation Hypercluster system described in "Hypercluster Parallel Processor" (LEW-15283).

  6. Achieving Continuous Improvement: Theories that Support a System Change.

    ERIC Educational Resources Information Center

    Armel, Donald

    Focusing on improvement is different than focusing on quality, quantity, customer satisfaction, and productivity. This paper discusses Open System Theory, and suggests ways to change large systems. Changing a system (meaning the way all the parts are connected) requires a considerable amount of data gathering and analysis. Choosing the proper…

  7. Market scenarios and alternative administrative frameworks for US educational satellite systems

    NASA Technical Reports Server (NTRS)

    Walkmeyer, J. E., Jr.; Morgan, R. P.; Singh, J. P.

    1975-01-01

    Costs and benefits of developing an operational educational satellite system in the U.S. are analyzed. Scenarios are developed for each educational submarket and satellite channel and ground terminal requirements for a large-scale educational telecommunications system are estimated. Alternative organizational frameworks for such a system are described.

  8. Testing of a controller for a hybrid capillary pumped loop thermal control system

    NASA Technical Reports Server (NTRS)

    Schweickart, Russell; Ottenstein, Laura; Cullimore, Brent; Egan, Curtis; Wolf, Dave

    1989-01-01

    A controller for a series hybrid capillary pumped loop (CPL) system that requires no moving parts and does not restrict fluid flow has been tested and has demonstrated improved performance characteristics over a plain CPL system and simple hybrid CPL systems. These include heat-load sharing, phase separation, and self-regulated flow control and distribution, all largely independent of system pressure drop. In addition, the controlled system demonstrated a greater heat transport capability than the simple CPL system but without the large fluid inventory requirement of the hybrid systems. A description of the testing is presented along with data that show the advantages of the system.

  9. Multi-agent based control of large-scale complex systems employing distributed dynamic inference engine

    NASA Astrophysics Data System (ADS)

    Zhang, Daili

    Increasing societal demand for automation has led to considerable efforts to control large-scale complex systems, especially in the area of autonomous intelligent control methods. The control system of a large-scale complex system needs to satisfy four system level requirements: robustness, flexibility, reusability, and scalability. Corresponding to the four system level requirements, there arise four major challenges. First, it is difficult to get accurate and complete information. Second, the system may be physically highly distributed. Third, the system evolves very quickly. Fourth, emergent global behaviors of the system can be caused by small disturbances at the component level. The Multi-Agent Based Control (MABC) method as an implementation of distributed intelligent control has been the focus of research since the 1970s, in an effort to solve the above-mentioned problems in controlling large-scale complex systems. However, to the author's best knowledge, all MABC systems for large-scale complex systems with significant uncertainties are problem-specific and thus difficult to extend to other domains or larger systems. This situation is partly due to the control architecture of multiple agents being determined by agent to agent coupling and interaction mechanisms. Therefore, the research objective of this dissertation is to develop a comprehensive, generalized framework for the control system design of general large-scale complex systems with significant uncertainties, with the focus on distributed control architecture design and distributed inference engine design. A Hybrid Multi-Agent Based Control (HyMABC) architecture is proposed by combining hierarchical control architecture and module control architecture with logical replication rings. 
First, it decomposes a complex system hierarchically; second, it combines the components in the same level as a module, and then designs common interfaces for all of the components in the same module; third, replications are made for critical agents and are organized into logical rings. This architecture maintains clear guidelines for complexity decomposition and also increases the robustness of the whole system. Multiple Sectioned Dynamic Bayesian Networks (MSDBNs) as a distributed dynamic probabilistic inference engine, can be embedded into the control architecture to handle uncertainties of general large-scale complex systems. MSDBNs decomposes a large knowledge-based system into many agents. Each agent holds its partial perspective of a large problem domain by representing its knowledge as a Dynamic Bayesian Network (DBN). Each agent accesses local evidence from its corresponding local sensors and communicates with other agents through finite message passing. If the distributed agents can be organized into a tree structure, satisfying the running intersection property and d-sep set requirements, globally consistent inferences are achievable in a distributed way. By using different frequencies for local DBN agent belief updating and global system belief updating, it balances the communication cost with the global consistency of inferences. In this dissertation, a fully factorized Boyen-Koller (BK) approximation algorithm is used for local DBN agent belief updating, and the static Junction Forest Linkage Tree (JFLT) algorithm is used for global system belief updating. MSDBNs assume a static structure and a stable communication network for the whole system. However, for a real system, sub-Bayesian networks as nodes could be lost, and the communication network could be shut down due to partial damage in the system. 
Therefore, on-line and automatic MSDBNs structure formation is necessary for making robust state estimations and increasing survivability of the whole system. A Distributed Spanning Tree Optimization (DSTO) algorithm, a Distributed D-Sep Set Satisfaction (DDSSS) algorithm, and a Distributed Running Intersection Satisfaction (DRIS) algorithm are proposed in this dissertation. Combining these three distributed algorithms and a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design leads to a process of control system design for a general large-scale complex system. As applications of the proposed methodology, the control system design of a simplified ship chilled water system and a notional ship chilled water system have been demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems with dynamic and uncertain environment, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling general large-scale complex systems.
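    The fully factorized Boyen-Koller (BK) approximation mentioned above can be sketched on a toy two-variable discrete DBN: the joint belief is approximated as a product of per-variable marginals, and each variable's next-step marginal is computed by weighting its transition CPT with the (independent) marginals of its parents. The network, CPTs, and function below are illustrative, not the dissertation's implementation.

```python
import itertools
import numpy as np

def bk_fully_factorized_step(marginals, cpts, parents):
    """One fully factorized BK belief update for a discrete DBN (toy sketch).

    marginals: {var: 1-D array of state probabilities at time t}
    cpts:      {var: array indexed by parent states, last axis = next state}
    parents:   {var: list of parent variable names at time t}
    """
    new = {}
    for var, cpt in cpts.items():
        belief = np.zeros(cpt.shape[-1])
        # Sum the CPT over every joint parent configuration, weighting by
        # the product of the parents' marginals (the factorized assumption).
        for cfg in itertools.product(*[range(len(marginals[p])) for p in parents[var]]):
            weight = 1.0
            for p, s in zip(parents[var], cfg):
                weight *= marginals[p][s]
            belief += weight * cpt[cfg]
        new[var] = belief
    return new

# Two binary variables: X persists over time, Y depends on X.
cpts = {
    "X": np.array([[0.9, 0.1],    # P(X' | X=0)
                   [0.2, 0.8]]),  # P(X' | X=1)
    "Y": np.array([[0.7, 0.3],    # P(Y' | X=0)
                   [0.4, 0.6]]),  # P(Y' | X=1)
}
parents = {"X": ["X"], "Y": ["X"]}
marginals = {"X": np.array([0.5, 0.5]), "Y": np.array([0.5, 0.5])}
marginals = bk_fully_factorized_step(marginals, cpts, parents)
print(marginals["X"], marginals["Y"])
```

    In the full MSDBN setting, each agent would run such a local update on its own DBN and exchange only finite messages with its neighbors, trading communication cost against global consistency as the abstract describes.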

  10. Flexibility of space structures makes design shaky

    NASA Technical Reports Server (NTRS)

    Hearth, D. P.; Boyer, W. J.

    1985-01-01

    An evaluation is made of the development status of high stiffness space structures suitable for orbital construction or deployment of large diameter reflector antennas, with attention to the control system capabilities required by prospective space structure system types. The very low structural frequencies typical of very large, radio frequency antenna structures would be especially difficult for a control system to counteract. Vibration control difficulties extend across the frequency spectrum, even to optical and IR reflector systems. Current research and development efforts are characterized with respect to goals and prospects for success.

  11. RICIS research

    NASA Technical Reports Server (NTRS)

    Mckay, Charles W.; Feagin, Terry; Bishop, Peter C.; Hallum, Cecil R.; Freedman, Glenn B.

    1987-01-01

    The principal focus of one of the RICIS (Research Institute for Computing and Information Systems) components is computer systems and software engineering in-the-large of the lifecycle of large, complex, distributed systems which: (1) evolve incrementally over a long time; (2) contain non-stop components; and (3) must simultaneously satisfy a prioritized balance of mission and safety critical requirements at run time. This focus is extremely important because of the contribution of the scaling direction problem to the current software crisis. The Computer Systems and Software Engineering (CSSE) component addresses the lifecycle issues of three environments: host, integration, and target.

  12. Design studies of large aperture, high-resolution Earth science microwave radiometers compatible with small launch vehicles

    NASA Technical Reports Server (NTRS)

    Schroeder, Lyle C.; Bailey, M. C.; Harrington, Richard F.; Kendall, Bruce M.; Campbell, Thomas G.

    1994-01-01

    High-spatial-resolution microwave radiometer sensing from space with reasonable swath widths and revisit times favors large aperture systems. However, with traditional precision antenna design, the size and weight requirements for such systems are in conflict with the need to emphasize small launch vehicles. This paper describes tradeoffs between the science requirements, basic operational parameters, and expected sensor performance for selected satellite radiometer concepts utilizing novel lightweight compactly packaged real apertures. Antenna, feed, and radiometer subsystem design and calibration are presented. Preliminary results show that novel lightweight real apertures coupled with state-of-the-art radiometer designs are compatible with small launch systems, and hold promise for high-resolution earth science measurements of sea ice, precipitation, soil moisture, sea surface temperature, and ocean wind speeds.
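    The reason high spatial resolution favors large apertures is diffraction: ground resolution scales roughly as wavelength times altitude divided by aperture diameter. The sketch below uses that standard approximation; the frequency, altitude, and aperture values are assumptions for illustration, not the paper's design points.

```python
def footprint_km(freq_ghz, aperture_m, altitude_km, k=1.22):
    """Approximate diffraction-limited ground footprint of a spaceborne
    radiometer: resolution ~ k * wavelength * altitude / aperture diameter.
    k ~ 1.22 corresponds to the first Airy null; beamwidth definitions
    vary, so treat this as an order-of-magnitude sketch."""
    wavelength_m = 0.3 / freq_ghz   # c / f, with c ~ 3e8 m/s and f in GHz
    return k * wavelength_m * altitude_km / aperture_m

# A 6 GHz channel from 700 km altitude needs a ~4 m aperture just to
# approach ~10 km resolution, hence the push for large apertures.
print(round(footprint_km(6.0, 4.0, 700.0), 1))
```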

  13. High voltage cabling for high power spacecraft

    NASA Technical Reports Server (NTRS)

    Dunbar, W. G.

    1981-01-01

    Studies by NASA have shown that many of the space missions proposed for the time period 1980 to 2000 will require large spacecraft structures to be assembled in orbit. Large antennas and power systems up to 2.5 MW size are predicted to supply the electrical/electronic subsystems, solar electric subsystems, solar electric propulsion, and space processing for the near-term programs. Platforms of 100 meters/length for stable foundations, utility stations, and supports for these multi-antenna and electronic powered mechanisms are also being considered. This paper includes the findings of an analytic and conceptual design study for large spacecraft power distribution, and electrical loads and their influence on the cable and connector requirements for these proposed large spacecraft.

  14. LDR cryogenics

    NASA Technical Reports Server (NTRS)

    Nast, T.

    1988-01-01

    A brief summary from the 1985 Large Deployable Reflector (LDR) Asilomar 2 workshop of the requirements for LDR cryogenic cooling is presented. The heat rates are simply the sum of the individual heat rates from the instruments. Consideration of duty cycle will have a dramatic effect on cooling requirements. There are many possible combinations of cooling techniques for each of the three temperature zones. It is clear that much further system study is needed to determine what type of cooling system is required (He-2, hybrid or mechanical) and what size and power are required. As the instruments, along with their duty cycles and heat rates, become better defined it will be possible to better determine the optimum cooling systems.

  15. Instrument control software requirement specification for Extremely Large Telescopes

    NASA Astrophysics Data System (ADS)

    Young, Peter J.; Kiekebusch, Mario J.; Chiozzi, Gianluca

    2010-07-01

    Engineers in several observatories are now designing the next generation of optical telescopes, the Extremely Large Telescopes (ELT). These are very complex machines that will host sophisticated astronomical instruments to be used for a wide range of scientific studies. In order to carry out scientific observations, a software infrastructure is required to orchestrate the control of the multiple subsystems and functions. This paper will focus on describing the considerations, strategies and main issues related to the definition and analysis of the software requirements for the ELT's Instrument Control System using modern development processes and modelling tools like SysML.

  16. Space Shuttle Solid Rocket Booster decelerator subsystem - Air drop test vehicle/B-52 design

    NASA Technical Reports Server (NTRS)

    Runkle, R. E.; Drobnik, R. F.

    1979-01-01

    The air drop development test program for the Space Shuttle Solid Rocket Booster Recovery System required the design of a large drop test vehicle that would meet all the stringent requirements placed on it by structural loads, safety considerations, flight recovery system interfaces, and sequence. The drop test vehicle had to have the capability to test the drogue and the three main parachutes both separately and in the total flight deployment sequence, and still be inexpensive enough to fit within a low-budget development program. The design to test large ribbon parachutes to loads of 300,000 pounds required the detailed investigation and integration of several parameters such as carrier aircraft mechanical interface, drop test vehicle ground transportability, impact point ground penetration, salvageability, drop test vehicle intelligence, flight design hardware interfaces, and packaging fidelity.

  17. Modelling Pasture-based Automatic Milking System Herds: System Fitness of Grazeable Home-grown Forages, Land Areas and Walking Distances

    PubMed Central

    Islam, M. R.; Garcia, S. C.; Clark, C. E. F.; Kerrisk, K. L.

    2015-01-01

    To maintain a predominantly pasture-based system, the large herd milked by an automatic milking rotary would be required to walk significant distances. Walking distances of greater than 1-km are associated with an increased incidence of undesirably long milking intervals and reduced milk yield. Complementary forages can be incorporated into pasture-based systems to lift total home grown feed in a given area, thus potentially ‘concentrating’ feed closer to the dairy. The aim of this modelling study was to investigate the total land area required and associated walking distance for large automatic milking system (AMS) herds when incorporating complementary forage rotations (CFR) into the system. Thirty-six scenarios consisting of 3 AMS herds (400, 600, 800 cows), 2 levels of pasture utilisation (current AMS utilisation of 15.0 t dry matter [DM]/ha, termed as moderate; optimum pasture utilisation of 19.7 t DM/ha, termed as high) and 6 rates of replacement of each of these pastures by grazeable CFR (0%, 10%, 20%, 30%, 40%, 50%) were investigated. Results showed that AMS cows were required to walk greater than 1-km when the farm area was greater than 86 ha. Insufficient pasture could be produced within a 1 km distance (i.e. 86 ha land) with home-grown feed (HGF) providing 43%, 29%, and 22% of the metabolisable energy (ME) required by 400, 600, and 800 cows, respectively from pastures. Introduction of pasture (moderate): CFR in AMS at a ratio of 80:20 can feed a 400 cow AMS herd, and can supply 42% and 31% of the ME requirements for 600 and 800 cows, respectively with pasture (moderate): CFR at 50:50 levels. In contrast to moderate pasture, 400 cows can be managed on high pasture utilisation (provided 57% of the total ME requirements). However, similar to the scenarios conducted with moderate pasture, there was insufficient feed produced within 1-km distance of the dairy for 600 or 800 cows.
An 800 cow herd required 140 and 130 ha on moderate and high pasture-based AMS system, respectively with the introduction of pasture: CFR at a ratio of 50:50. Given the impact of increasing land area past 86 ha on walking distance, cow numbers could be increased by purchasing feed from off the milking platform and/or using the land outside 1-km distance for conserved feed. However, this warrants further investigations into risk analyses of different management options including development of an innovative system to manage large herds in an AMS farming system. PMID:25925068
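    The feed-supply arithmetic behind these scenarios can be sketched as a simple metabolisable-energy balance: annual ME grown on the walking-distance-limited area divided by annual herd ME demand. All input figures below (ME density of pasture, per-cow daily demand) are illustrative assumptions, not the study's modelled values.

```python
def pasture_me_share(area_ha, utilisation_t_dm_ha, me_mj_per_kg_dm,
                     herd_size, cow_me_mj_per_day):
    """Fraction of a herd's annual ME requirement supplied by pasture
    grown on a walking-distance-limited area. A toy balance with
    assumed feed-quality and demand figures, not the study's model."""
    supply_mj = area_ha * utilisation_t_dm_ha * 1000 * me_mj_per_kg_dm  # per year
    demand_mj = herd_size * cow_me_mj_per_day * 365
    return supply_mj / demand_mj

# 86 ha at moderate utilisation (15 t DM/ha) for a 600-cow herd, assuming
# ~10.5 MJ ME/kg DM and ~210 MJ ME/cow/day: pasture meets under a third
# of demand, so the balance must come from CFR or purchased feed.
print(round(pasture_me_share(86, 15.0, 10.5, 600, 210.0), 2))
```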

  18. Spacecraft systems engineering: An introduction to the process at GSFC

    NASA Technical Reports Server (NTRS)

    Fragomeni, Tony; Ryschkewitsch, Michael G.

    1993-01-01

    The main objective in systems engineering is to devise a coherent total system design capable of achieving the stated requirements. Requirements should be rigid. However, they should be continuously challenged, rechallenged and/or validated. The systems engineer must specify every requirement in order to design, document, implement and conduct the mission. Each and every requirement must be logically considered, traceable and evaluated through various analysis and trade studies in a total systems design. Margins must be determined to be realistic as well as adequate. The systems engineer must also continuously close the loop and verify system performance against the requirements. The fundamental role of the systems engineer, however, is to engineer, not manage. Yet, in large, complex missions, where more than one systems engineer is required, someone needs to manage the systems engineers, and we call them 'systems managers.' Systems engineering management is an overview function which plans, guides, monitors and controls the technical execution of a project as implemented by the systems engineers. As the project moves on through Phases A and B into Phase C/D, the systems engineering tasks become a small portion of the total effort. The systems management role increases since discipline subsystem engineers are conducting analyses and reviewing test data for final review and acceptance by the systems managers.

  19. Stress Free Temperature Testing and Residual Stress Calculations on Out-of-Autoclave Composites

    NASA Technical Reports Server (NTRS)

    Cox, Sarah; Tate, LaNetra C.; Danley, Susan; Sampson, Jeff; Taylor, Brian; Miller, Sandi

    2012-01-01

Future launch vehicles will require the incorporation of large composite parts that will make up primary and secondary components of the vehicle. NASA has explored the feasibility of manufacturing these large components using Out-of-Autoclave impregnated carbon fiber composite systems through many composites development projects. Most recently, the Composites for Exploration Project has been looking at the development of a 10 meter diameter fairing structure, similar in size to what will be required for a heavy launch vehicle. The development of new material systems requires the investigation of the material properties and the stress in the parts. Residual stress is an important factor to incorporate when modeling the stresses that a part is undergoing. Testing was performed to verify the stress free temperature with two-ply asymmetric panels. A comparison was done between three newly developed out-of-autoclave IM7/Bismaleimide (BMI) systems. This paper presents the testing results and the analysis performed to determine the residual stress of the materials.

  20. Stress Free Temperature Testing and Calculations on Out-of-Autoclave Composites

    NASA Technical Reports Server (NTRS)

    Cox, Sarah B.; Tate, LeNetra C.; Danley, Susan E.; Sampson, Jeffrey W.; Taylor, Brian J.; Sutter, James K.; Miller, Sandi G.

    2013-01-01

Future launch vehicles will require the incorporation of large composite parts that will make up primary and secondary components of the vehicle. NASA has explored the feasibility of manufacturing these large components using Out-of-Autoclave impregnated carbon fiber composite systems through many composites development projects. Most recently, the Composites for Exploration Project has been looking at the development of a 10 meter diameter fairing structure, similar in size to what will be required for a heavy launch vehicle. The development of new material systems requires the investigation of the material properties and the stress in the parts. Residual stress is an important factor to incorporate when modeling the stresses that a part is undergoing. Testing was performed to verify the stress free temperature with two-ply asymmetric panels. A comparison was done between three newly developed out-of-autoclave IM7/Bismaleimide (BMI) systems. This paper presents the testing results and the analysis performed to determine the stress free temperature of the materials.
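The stress-free temperature measurement described above can be illustrated with a toy calculation: to first order, the curvature of a two-ply asymmetric panel is proportional to the difference between the current temperature and the stress-free (cure) temperature, so a least-squares line through curvature-versus-temperature data crosses zero at the stress-free temperature. The sketch below assumes that linear model and uses invented data; it is not the paper's analysis.

```python
# Illustrative estimate of stress-free temperature from panel curvature data.
# Assumption (not from the paper): curvature varies linearly with temperature
# and vanishes at the stress-free temperature.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

def stress_free_temperature(temps_c, curvatures):
    """Temperature at which the fitted curvature line crosses zero."""
    a, b = fit_line(temps_c, curvatures)
    return -b / a

# Hypothetical measurements: curvature (1/m) of a two-ply [0/90] panel.
temps = [25.0, 75.0, 125.0, 175.0]
kappa = [1.50, 1.00, 0.50, 0.00]
print(round(stress_free_temperature(temps, kappa), 3))  # → 175.0
```

In practice the curvature data come from scanning the heated panels, and the fit also yields the sensitivity needed for the residual stress calculation.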

  1. AMTD - Advanced Mirror Technology Development in Mechanical Stability

    NASA Technical Reports Server (NTRS)

    Knight, J. Brent

    2015-01-01

Analytical tools and processes are being developed at NASA Marshall Space Flight Center in support of the Advanced Mirror Technology Development (AMTD) project. One facet of optical performance is mechanical stability with respect to structural dynamics. Pertinent parameters are: (1) the spacecraft structural design, (2) the mechanical disturbances on board the spacecraft (sources of vibratory/transient motion such as reaction wheels), (3) the vibration isolation systems (invariably required to meet future science needs), and (4) the dynamic characteristics of the optical system itself. With stability requirements of future large aperture space telescopes being in the low picometer regime, it is paramount that all sources of mechanical excitation be considered in both feasibility studies and detailed analyses. The primary objective of this paper is to lay out a path for performing feasibility studies of future large aperture space telescope projects which require extreme stability. To that end, a high level overview of a structural dynamic analysis process to assess an integrated spacecraft and optical system is included.

  2. A SEASAT-A synthetic aperture imaging radar system

    NASA Technical Reports Server (NTRS)

    Jordan, R. L.; Rodgers, D. H.

    1975-01-01

The SEASAT-A synthetic aperture imaging radar system is the first radar system of its kind designed for the study of ocean wave patterns from orbit. The basic requirement of this system is to generate continuous radar imagery with a 100 km swath at 25 m resolution from an orbital altitude of 800 km. These requirements impose unique system design problems. The end-to-end data system is described, including interactions of the spacecraft, antenna, sensor, telemetry link, and data processor. The synthetic aperture radar system generates a large quantity of data, requiring the use of an analog link with stable local oscillator encoding. The problems associated with telemetering the radar information with sufficient fidelity to synthesize an image on the ground are described, as well as the selected solutions to those problems.

  3. The Earth Phenomena Observing System: Intelligent Autonomy for Satellite Operations

    NASA Technical Reports Server (NTRS)

    Ricard, Michael; Abramson, Mark; Carter, David; Kolitz, Stephan

    2003-01-01

Earth monitoring systems of the future may include large numbers of inexpensive small satellites, tasked in a coordinated fashion to observe both long term and transient targets. For best performance, a tool which helps operators optimally assign targets to satellites will be required. We present the design of algorithms developed for real-time optimized autonomous planning of large numbers of small single-sensor Earth observation satellites. The algorithms will reduce requirements on the human operators of such a system of satellites, ensure good utilization of system resources, and provide the capability to dynamically respond to temporal terrestrial phenomena. Our initial real-time system model consists of approximately 100 satellites and a large number of points of interest on Earth (e.g., hurricanes, volcanoes, and forest fires), with the objective of maximizing the total science value of observations over time. Options for calculating the science value of observations include: 1) total observation time, 2) number of observations, and 3) quality (a function of, e.g., sensor type, range, and slant angle) of the observations. An integrated approach using integer programming, optimization, and astrodynamics is used to calculate optimized observation and sensor tasking plans.
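As a minimal illustration of the assignment optimization described (not the actual EPOS algorithms, which handle orbital dynamics, visibility windows, and integer-programming formulations), the sketch below exhaustively searches for the satellite-to-target assignment that maximizes total science value on a toy instance with invented values:

```python
# Toy observation-assignment problem: each satellite picks one target so that
# the total science value is maximized. Values are invented placeholders.
from itertools import product

def best_assignment(value):
    """value[s][t] = science value if satellite s observes target t.
    Exhaustive search over all plans; targets may be observed repeatedly."""
    n_sats = len(value)
    n_targets = len(value[0])
    best, best_plan = float("-inf"), None
    for plan in product(range(n_targets), repeat=n_sats):
        total = sum(value[s][t] for s, t in enumerate(plan))
        if total > best:
            best, best_plan = total, plan
    return best, best_plan

value = [[3.0, 1.0, 2.0],   # satellite 0's value for targets 0..2
         [1.0, 4.0, 1.5]]   # satellite 1's value for targets 0..2
print(best_assignment(value))  # → (7.0, (0, 1))
```

Exhaustive search is only viable for toy sizes; for ~100 satellites the abstract's integer-programming approach is the practical route.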

  4. Simulation of a Moving Elastic Beam Using Hamilton’s Weak Principle

    DTIC Science & Technology

    2006-03-01

versions were limited to two-dimensional systems with open tree configurations (where a cut in any component separates the system in half) [48]. This...whose components experienced large angular rotations (turbomachinery, camshafts, flywheels, etc.). More complex systems required the simultaneous

  5. Medical Information Management System (MIMS): A generalized interactive information system

    NASA Technical Reports Server (NTRS)

    Alterescu, S.; Friedman, C. A.; Hipkins, K. R.

    1975-01-01

    An interactive information system is described. It is a general purpose, free format system which offers immediate assistance where manipulation of large data bases is required. The medical area is a prime area of application. Examples of the system's operation, commentary on the examples, and a complete listing of the system program are included.

  6. Square Kilometre Array Science Data Processing

    NASA Astrophysics Data System (ADS)

    Nikolic, Bojan; SDP Consortium, SKA

    2014-04-01

The Square Kilometre Array (SKA) is planned to be, by a large factor, the largest and most sensitive radio telescope ever constructed. The first phase of the telescope (SKA1), now in the design phase, will in itself represent a major leap in capabilities compared to current facilities. These advances are to a large extent made possible by advances in available computer processing power, so that larger numbers of smaller, simpler and cheaper receptors can be used. As a result of greater reliance and demands on computing, ICT is becoming an ever more integral part of the telescope. The Science Data Processor (SDP) is the part of the SKA system responsible for imaging, calibration, pulsar timing, confirmation of pulsar candidates, derivation of further data products, archiving, and providing the data to the users. It will accept visibilities at data rates of several TB/s and require processing power for imaging in the range of 100 petaFLOPS to ~1 exaFLOPS, putting SKA1 into the regime of exascale radio astronomy. In my talk I will present the overall SKA system requirements and how they drive these high data throughput and processing requirements. Some of the key challenges for the design of the SDP are: - Identifying sufficient parallelism to utilise the very large numbers of separate compute cores that will be required to provide exascale computing throughput - Managing efficiently the high internal data flow rates - A conceptual architecture and software engineering approach that will allow adaptation of the algorithms as we learn about the telescope and the atmosphere during the commissioning and operational phases - System management that will deal gracefully with (inevitably frequent) failures of individual units of the processing system I will also present possible initial architectures for the SDP system that attempt to address these and other challenges.
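A back-of-envelope check of the quoted ingest rate, assuming 5 TB/s (the abstract says only "several TB/s", so this number is an assumption for illustration):

```python
# Back-of-envelope scale check for the SDP ingest rate quoted in the abstract.
tb_per_s = 5                       # assumed; abstract says "several TB/s"
seconds_per_day = 86_400
pb_per_day = tb_per_s * seconds_per_day / 1_000   # 1 PB = 1000 TB
print(pb_per_day)  # → 432.0
```

Hundreds of petabytes per day of raw visibilities is why the SDP must reduce data to calibrated images and catalogs on the fly rather than archive the input stream.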

  7. Memo Addressing Lead and Copper Rule Requirements for Optimal Corrosion Control Treatment

    EPA Pesticide Factsheets

    EPA has recently published a memo to address the requirements pertaining to maintenance of optimal corrosion control treatment, in situations in which a large water system ceases to purchase treated water and switches to a new drinking water source.

  8. Electrical System Technology Working Group (WG) Report

    NASA Technical Reports Server (NTRS)

    Silverman, S.; Ford, F. E.

    1984-01-01

The technology needs for space power systems (military, public, commercial) were assessed for the period 1995 to 2005 in the areas of power management and distribution, components, circuits, subsystems, controls and autonomy, and modeling and simulation. There was general agreement that the military requirements for pulse power would be the dominant factor in the growth of power systems. However, the growth of conventional power to the 100 to 250 kW range would be in the public sector, with low Earth orbit needs being the driver toward large 100 kW systems. An overall philosophy for large power system development is also described.

  9. An overview of expert systems. [artificial intelligence

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1982-01-01

    An expert system is defined and its basic structure is discussed. The knowledge base, the inference engine, and uses of expert systems are discussed. Architecture is considered, including choice of solution direction, reasoning in the presence of uncertainty, searching small and large search spaces, handling large search spaces by transforming them and by developing alternative or additional spaces, and dealing with time. Existing expert systems are reviewed. Tools for building such systems, construction, and knowledge acquisition and learning are discussed. Centers of research and funding sources are listed. The state-of-the-art, current problems, required research, and future trends are summarized.

  10. Calibration method for a large-scale structured light measurement system.

    PubMed

    Wang, Peng; Wang, Jianmei; Xu, Jing; Guan, Yong; Zhang, Guanglie; Chen, Ken

    2017-05-10

    The structured light method is an effective non-contact measurement approach. The calibration greatly affects the measurement precision of structured light systems. To construct a large-scale structured light system with high accuracy, a large-scale and precise calibration gauge is always required, which leads to an increased cost. To this end, in this paper, a calibration method with a planar mirror is proposed to reduce the calibration gauge size and cost. An out-of-focus camera calibration method is also proposed to overcome the defocusing problem caused by the shortened distance during the calibration procedure. The experimental results verify the accuracy of the proposed calibration method.

  11. Microworld Simulations: A New Dimension in Training Army Logistics Management Skills

    DTIC Science & Technology

    2004-01-01

Providing effective training to Army personnel is always challenging, but the Army faces some new challenges in training its logistics staff managers in...soldiers are stationed and where materiel and services are readily available. The design and management of the Army's Combat Service Support (CSS) large-scale logistics systems are increasingly important. The skills that are required to manage these systems are difficult to train. Large deployments

  12. Hybrid Propulsion Technology Program

    NASA Technical Reports Server (NTRS)

    Jensen, G. E.; Holzman, A. L.

    1990-01-01

Future launch systems of the United States will require improvements in booster safety, reliability, and cost. In order to increase payload capabilities, performance improvements are also desirable. The hybrid rocket motor (HRM) offers the potential for improvements in all of these areas. Designs are presented for two sizes of hybrid boosters: a large 4.57 m (180 in.) diameter booster duplicating the Advanced Solid Rocket Motor (ASRM) vacuum thrust-time profile, and a smaller 2.44 m (96 in.) booster at one-quarter the thrust level. The large booster would be used in tandem, while eight small boosters would be used to achieve the same total thrust. These preliminary designs were generated as part of the NASA Hybrid Propulsion Technology Program. This program is the first phase of an eventual three-phase program culminating in the demonstration of a large subscale engine. The initial trade and sizing studies resulted in preferred motor diameters, operating pressures, nozzle geometry, and fuel grain systems for both the large and small boosters. The data were then used for specific performance predictions in terms of payload and for the definition and selection of the requirements for the major components: the oxidizer feed system, nozzle, and thrust vector system. All of the parametric studies were performed using realistic fuel regression models based upon specific experimental data.

  13. Bluetooth-based travel time/speed measuring systems development.

    DOT National Transportation Integrated Search

    2010-06-01

    Agencies in the Houston region have traditionally used toll tag readers to provide travel times on : freeways and High Occupancy Vehicle (HOV) lanes, but these systems require large amounts of costly and : physically invasive infrastructure. Bluetoot...

  14. Weight optimization of ultra large space structures

    NASA Technical Reports Server (NTRS)

    Reinert, R. P.

    1979-01-01

The paper describes the optimization of a solar power satellite structure for minimum mass and system cost. The solar power satellite is an ultra-large, low-frequency, lightly damped space structure; derivation of its structural design requirements required accommodating gravity gradient torques, which impose the primary loads; a life of up to 100 years in the rigorous geosynchronous orbit radiation environment; and prevention of continuous wave motion in a solar array blanket suspended from a huge, lightly damped structure subject to periodic excitations. The satellite structural design required a parametric study of structural configurations and consideration of fabrication and assembly techniques, which resulted in a final structure that met all requirements at a structural mass fraction of 10%.

  15. Inversion of very large matrices encountered in large scale problems of photogrammetry and photographic astrometry

    NASA Technical Reports Server (NTRS)

    Brown, D. C.

    1971-01-01

The simultaneous adjustment of very large nets of overlapping plates covering the celestial sphere becomes computationally feasible by virtue of a twofold process that generates a system of normal equations having a bordered-banded coefficient matrix, and then solves such a system in a highly efficient manner. Numerical results suggest that when a well constructed spherical net is subjected to a rigorous, simultaneous adjustment, the use of independently established control points is required neither for determinacy nor for the production of accurate results.
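The efficiency comes from exploiting the band structure of the normal equations instead of treating them as dense. As a minimal illustration of the principle (not the paper's bordered-banded algorithm), the Thomas algorithm below solves a tridiagonal system, the simplest banded case, in O(n) operations rather than O(n³):

```python
# Thomas algorithm: O(n) solve of a tridiagonal linear system, illustrating
# how a banded coefficient matrix avoids dense elimination.

def solve_tridiagonal(a, b, c, d):
    """Solve M x = d, where M has sub-diagonal a (a[0] unused),
    main diagonal b, and super-diagonal c (c[-1] unused)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 3x3 check: [[2,1,0],[1,2,1],[0,1,2]] x = [4,8,8]  has solution x = [1,2,3].
print(solve_tridiagonal([0, 1, 1], [2, 2, 2], [1, 1, 0], [4, 8, 8]))
```

A bordered-banded matrix adds a dense border to such a band; block elimination handles the band cheaply and leaves only a small dense system for the border unknowns.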

  16. Ground test experiment for large space structures

    NASA Technical Reports Server (NTRS)

    Tollison, D. K.; Waites, H. B.

    1985-01-01

    In recent years a new body of control theory has been developed for the design of control systems for Large Space Structures (LSS). The problems of testing this theory on LSS hardware are aggravated by the expense and risk of actual in orbit tests. Ground tests on large space structures can provide a proving ground for candidate control systems, but such tests require a unique facility for their execution. The current development of such a facility at the NASA Marshall Space Flight Center (MSFC) is the subject of this report.

  17. The Large Synoptic Survey Telescope project management control system

    NASA Astrophysics Data System (ADS)

    Kantor, Jeffrey P.

    2012-09-01

The Large Synoptic Survey Telescope (LSST) program is jointly funded by the NSF, the DOE, and private institutions and donors. From an NSF funding standpoint, the LSST is a Major Research Equipment and Facilities Construction (MREFC) project. The NSF funding process requires proposals and D&D reviews to include activity-based budgets and schedules; documented bases of estimates; risk-based contingency analysis; and cost escalation and categorization. Out of the box, the commercial tool Primavera P6 contains approximately 90% of the planning and estimating capability needed to satisfy R&D phase requirements, and it is customizable/configurable for the remainder with relatively little effort. We describe the customization/configuration and use of Primavera for the LSST Project Management Control System (PMCS), assess our experience to date, and describe future directions. Examples in this paper are drawn from the LSST Data Management System (DMS), which is one of three main subsystems of the LSST and is funded by the NSF. By astronomy standards the LSST DMS is a large data management project, processing and archiving over 70 petabytes of image data, producing over 20 petabytes of catalogs annually, and generating 2 million transient alerts per night. Over the 6-year construction and commissioning phase, the DM project is estimated to require 600,000 hours of engineering effort. In total, the DMS cost is approximately 60% hardware/system software and 40% labor.

  18. Advanced UVOIR Mirror Technology Development (AMTD) for Very Large Space Telescopes

    NASA Technical Reports Server (NTRS)

Postman, Marc; Soummer, Remi; Sivaramakrishnan, Anand; Macintosh, Bruce; Guyon, Olivier; Krist, John; Stahl, H. Philip; Smith, W. Scott; Mosier, Gary; Kirk, Charles; et al.

    2013-01-01

The ASTRO2010 Decadal Survey stated that an advanced large-aperture ultraviolet, optical, near-infrared (UVOIR) telescope is required to enable the next generation of compelling astrophysics and exoplanet science, and that present technology is not mature enough to affordably build and launch any potential UVOIR mission concept. AMTD is the start of a multiyear effort to develop, demonstrate and mature critical technologies to TRL-6 by 2018 so that a viable flight mission can be proposed to the 2020 Decadal Review. AMTD builds on the state of the art (SOA) defined by over 30 years of monolithic and segmented ground- and space-telescope mirror technology to mature six key technologies: (1) Large-Aperture, Low Areal Density, High Stiffness Mirror Substrates: Both (4 to 8 m) monolithic and (8 to 16 m) segmented primary mirrors require larger, thicker, and stiffer substrates. (2) Support System: Large-aperture mirrors require large support systems to ensure that they survive launch and deploy on orbit in a stress-free and undistorted shape. (3) Mid/High Spatial Frequency Figure Error: A very smooth mirror surface is critical for producing a high-quality point spread function (PSF) for high contrast imaging. (4) Segment Edges: The quality of segment edges impacts the PSF for high-contrast imaging applications, contributes to stray light noise, and affects total collecting aperture. (5) Segment to Segment Gap Phasing: Segment phasing is critical for producing a high-quality, temporally stable PSF. (6) Integrated Model Validation: On-orbit performance is driven by mechanical and thermal stability. Compliance cannot be 100% tested, but relies on modeling. AMTD is pursuing multiple design paths to provide the science community with options to enable either large-aperture monolithic or segmented mirrors, with clear engineering metrics traceable to science requirements.

  19. Requirements and principles for the implementation and construction of large-scale geographic information systems

    NASA Technical Reports Server (NTRS)

    Smith, Terence R.; Menon, Sudhakar; Star, Jeffrey L.; Estes, John E.

    1987-01-01

    This paper provides a brief survey of the history, structure and functions of 'traditional' geographic information systems (GIS), and then suggests a set of requirements that large-scale GIS should satisfy, together with a set of principles for their satisfaction. These principles, which include the systematic application of techniques from several subfields of computer science to the design and implementation of GIS and the integration of techniques from computer vision and image processing into standard GIS technology, are discussed in some detail. In particular, the paper provides a detailed discussion of questions relating to appropriate data models, data structures and computational procedures for the efficient storage, retrieval and analysis of spatially-indexed data.

  20. A large high vacuum, high pumping speed space simulation chamber for electric propulsion

    NASA Technical Reports Server (NTRS)

    Grisnik, Stanley P.; Parkes, James E.

    1994-01-01

    Testing high power electric propulsion devices poses unique requirements on space simulation facilities. Very high pumping speeds are required to maintain high vacuum levels while handling large volumes of exhaust products. These pumping speeds are significantly higher than those available in most existing vacuum facilities. There is also a requirement for relatively large vacuum chamber dimensions to minimize facility wall/thruster plume interactions and to accommodate far field plume diagnostic measurements. A 4.57 m (15 ft) diameter by 19.2 m (63 ft) long vacuum chamber at NASA Lewis Research Center is described. The chamber utilizes oil diffusion pumps in combination with cryopanels to achieve high vacuum pumping speeds at high vacuum levels. The facility is computer controlled for all phases of operation from start-up, through testing, to shutdown. The computer control system increases the utilization of the facility and reduces the manpower requirements needed for facility operations.

  1. Underwater hydraulic shock shovel control system

    NASA Astrophysics Data System (ADS)

    Liu, He-Ping; Luo, A.-Ni; Xiao, Hai-Yan

    2008-06-01

The control system determines the effectiveness of an underwater hydraulic shock shovel. This paper begins by analyzing the working principles of these shovels and explains the importance of their control systems. A mathematical model of a new type of control system was built and analyzed according to those principles. Since the initial control system's response time could not fulfill the design requirements, a PID controller was added to the control system. System response time was still slower than required, so a neural network was added to nonlinearly adjust the proportional, integral, and derivative coefficients of the PID controller. After these improvements, the system parameters fulfilled the design requirements. The working performance of electrically controlled parts, such as the rapidly moving high-speed switch valve, is largely determined by the control system. Conventional control methods alone generally cannot satisfy a shovel's requirements, so advanced and conventional control methods were combined to improve the control system, with good results.
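A minimal discrete PID loop on a hypothetical first-order plant illustrates the controller structure described; the gains and plant model below are invented, and the paper's neural-network gain adjustment is omitted:

```python
# Discrete PID control of an assumed first-order plant, tau*dy/dt + y = u.
# Gains and plant time constant are illustrative, not the paper's values.

def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=2000):
    """Run the closed loop for steps*dt seconds; return the final output y."""
    y, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # PID control law
        prev_err = err
        y += dt * (u - y) / 0.5                     # plant update, tau = 0.5 s
    return y

final = simulate_pid(kp=2.0, ki=1.0, kd=0.05)
print(abs(final - 1.0) < 0.01)  # output settles near the setpoint
```

In the paper's scheme, a neural network would update kp, ki and kd online as operating conditions change, instead of keeping them fixed as here.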

  2. A design study of a reaction control system for a V/STOL fighter/attack aircraft

    NASA Technical Reports Server (NTRS)

    Beard, B. B.; Foley, W. H.

    1983-01-01

Attention is given to a short takeoff vertical landing (STOVL) aircraft reaction control system (RCS) design study. The STOVL fighter/attack aircraft employs an existing turbofan engine, and its hover requirement places a premium on weight reduction, which eliminates prospective nonairbreathing RCSs. A simple engine compressor bleed RCS degrades overall performance to an unacceptable degree, and the supersonic requirement precludes the large-volume alternatives of thermal or ejector thrust augmentation systems, as well as the ducting of engine exhaust gases and the use of a dedicated turbojet. The only system which addressed the performance criteria without requiring major engine modifications was a dedicated load compressor driven by an auxiliary power unit.

  3. Proximity operations concept design study, task 6

    NASA Technical Reports Server (NTRS)

    Williams, A. N.

    1990-01-01

    The feasibility of using optical technology to perform the mission of the proximity operations communications subsystem on Space Station Freedom was determined. Proximity operations mission requirements are determined and the relationship to the overall operational environment of the space station is defined. From this information, the design requirements of the communication subsystem are derived. Based on these requirements, a preliminary design is developed and the feasibility of implementation determined. To support the Orbital Maneuvering Vehicle and National Space Transportation System, the optical system development is straightforward. The requirements on extra-vehicular activity are such as to allow large fields of uncertainty, thus exacerbating the acquisition problem; however, an approach is given that could mitigate this problem. In general, it is found that such a system could indeed perform the proximity operations mission requirement, with some development required to support extra-vehicular activity.

  4. Thermographic Imaging of Defects in Anisotropic Composites

    NASA Technical Reports Server (NTRS)

    Plotnikov, Y. A.; Winfree, W. P.

    2000-01-01

Composite materials are of increasing interest to the aerospace industry as a result of their weight versus performance characteristics. One of the disadvantages of composites is the high cost of fabrication and post-fabrication inspection with conventional ultrasonic scanning systems. The high cost of inspection is driven by the need for scanning systems which can follow large curved surfaces. Additionally, either large water tanks or water squirters are required to couple the ultrasound into the part. Thermographic techniques offer significant advantages over conventional ultrasonics by not requiring physical coupling between the part and the sensor. A thermographic system can easily inspect a large curved surface without requiring a surface-following scanner. However, implementation of Thermal Nondestructive Evaluation (TNDE) for flaw detection in composite materials and structures requires determining its limits. Advanced algorithms have been developed to enable locating and sizing defects in carbon fiber reinforced plastic (CFRP). Thermal tomography is a very promising method for visualizing the size and location of defects in materials such as CFRP. However, further investigations are required to determine its capabilities for inspection of thick composites. In the present work we have studied the influence of anisotropy on the reconstructed image of a defect generated by an inversion technique. The composite material is considered homogeneous with macroscopic properties: thermal conductivity K, specific heat c, and density rho. The simulation process involves two sequential steps: solving the three-dimensional transient heat diffusion equation for a sample with a defect, then estimating the defect location and size from the surface spatial and temporal thermal distributions (the inverse problem) calculated from the simulations.
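The forward step of such a simulation, marching the transient heat diffusion equation in time, can be sketched in one dimension with an explicit finite-difference scheme (the paper's problem is three-dimensional and anisotropic; the material values below are placeholders, not CFRP properties):

```python
# 1-D explicit finite-difference march of T_t = alpha * T_xx with fixed ends.
# alpha, dx, dt and the initial profile are illustrative placeholders.

def diffuse(temps, alpha, dx, dt, steps):
    """Advance the temperature profile; end temperatures are held fixed."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme is unstable for r > 0.5"
    T = list(temps)
    for _ in range(steps):
        T = ([T[0]]
             + [T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
                for i in range(1, len(T) - 1)]
             + [T[-1]])
    return T

# Hot spot in the middle of a cold bar; both ends held at 0 degrees.
T0 = [0.0, 0.0, 100.0, 0.0, 0.0]
T = diffuse(T0, alpha=1e-6, dx=0.01, dt=25.0, steps=50)
print(T[2] > T[1] > T[0])  # heat has spread outward; the peak stays central
```

The inverse problem then works backward from such surface temperature histories to estimate where and how large the buried defect is.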

  5. Systems Thinking for Transformational Change in Health

    ERIC Educational Resources Information Center

    Willis, Cameron D.; Best, Allan; Riley, Barbara; Herbert, Carol P.; Millar, John; Howland, David

    2014-01-01

    Incremental approaches to introducing change in Canada's health systems have not sufficiently improved the quality of services and outcomes. Further progress requires 'large system transformation', considered to be the systematic effort to generate coordinated change across organisations sharing a common vision and goal. This essay draws on…

  6. The Systems Revolution

    ERIC Educational Resources Information Center

    Ackoff, Russell L.

    1974-01-01

    The major organizational and social problems of our time do not lend themselves to the reductionism of traditional analytical and disciplinary approaches. They must be attacked holistically, with a comprehensive systems approach. The effective study of large-scale social systems requires the synthesis of science with the professions that use it.…

  7. Strategic Planning Tools for Large-Scale Technology-Based Assessments

    ERIC Educational Resources Information Center

    Koomen, Marten; Zoanetti, Nathan

    2018-01-01

    Education systems are increasingly being called upon to implement new technology-based assessment systems that generate efficiencies, better meet changing stakeholder expectations, or fulfil new assessment purposes. These assessment systems require coordinated organisational effort to implement and can be expensive in time, skill and other…

  8. Residential solar-heating system

    NASA Technical Reports Server (NTRS)

    1978-01-01

A complete residential solar-heating and hot-water system, when installed in a highly insulated energy-saver home, can supply a large percentage of the total energy demand for space heating and domestic hot water. The system, which uses water-heating energy storage, can be scaled to meet the requirements of the building in which it is installed.

  9. A flexible and cost-effective compensation method for leveling using large-scale coordinate measuring machines and its application in aircraft digital assembly

    NASA Astrophysics Data System (ADS)

    Deng, Zhengping; Li, Shuanggao; Huang, Xiang

    2018-06-01

In the assembly process of large-size aerospace products, the leveling and horizontal alignment of large components are essential prior to the installation of an inertial navigation system (INS) and the final quality inspection. In general, the inherent coordinate systems of large-scale coordinate measuring devices are not coincident with the geodetic horizontal system, and a dual-axis compensation system is commonly required for the measurement of differences in height. These compensation systems are at present expensive, dedicated designs for different devices. Considering that a large-size assembly site usually needs more than one measuring device, a compensation approach which is versatile across devices would be a more convenient and economical choice for manufacturers. In this paper, a flexible and cost-effective compensation method is proposed. Firstly, an auxiliary measuring device called a versatile compensation fixture (VCF) is designed, which mainly comprises reference points for coordinate transformation and a dual-axis inclinometer, and network tightening points (NTPs) are introduced and temporarily deployed in the large measuring space to further reduce the transformation error. Secondly, the measuring principle of the height difference is studied based on coordinate transformation theory and trigonometry, while considering the effects of earth curvature, and the coordinate transformation parameters are derived by least squares adjustment. Thirdly, the analytical solution for the leveling uncertainty is derived, based on which the key parameters of the VCF and the proper deployment of NTPs are determined according to the leveling accuracy requirement. Furthermore, the proposed method is applied in practice to the assembly of a large helicopter by developing an automatic leveling and alignment system. By measuring four NTPs, the leveling uncertainty (2σ) is reduced by 29.4% to about 0.12 mm, compared with that without NTPs.
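The least-squares derivation of coordinate transformation parameters from matched reference points can be sketched in two dimensions (the paper works in three dimensions and additionally fuses the inclinometer reading; the function name and data below are illustrative, not from the paper):

```python
# Least-squares 2-D rigid transform (rotation + translation) between two
# frames, estimated from matched reference points. Closed form via centroids.
import math

def fit_rigid_2d(src, dst):
    """Return (theta, tx, ty) mapping src points onto dst in least squares."""
    n = len(src)
    cxs = sum(p[0] for p in src) / n
    cys = sum(p[1] for p in src) / n
    cxd = sum(p[0] for p in dst) / n
    cyd = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs, ys, xd, yd = xs - cxs, ys - cys, xd - cxd, yd - cyd
        num += xs * yd - ys * xd   # cross terms -> sin component
        den += xs * xd + ys * yd   # dot terms   -> cos component
    theta = math.atan2(num, den)
    tx = cxd - (cxs * math.cos(theta) - cys * math.sin(theta))
    ty = cyd - (cxs * math.sin(theta) + cys * math.cos(theta))
    return theta, tx, ty

# Synthetic check: the same points rotated 30 degrees and shifted by (1, 2).
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
a = math.radians(30.0)
dst = [(x * math.cos(a) - y * math.sin(a) + 1.0,
        x * math.sin(a) + y * math.cos(a) + 2.0) for x, y in src]
theta, tx, ty = fit_rigid_2d(src, dst)
print(round(math.degrees(theta), 3), round(tx, 3), round(ty, 3))  # → 30.0 1.0 2.0
```

With noisy measured points the same formulas give the best-fit transform, and the residuals feed the kind of uncertainty analysis the paper performs.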

  10. Systems Engineering Applied to the Development of a Wave Energy Farm.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, Jesse D.; Bull, Diana L.; Costello, Ronan Patrick

A motivation for undertaking this stakeholder requirements analysis and Systems Engineering exercise is to document the requirements for successful wave energy farms in order to facilitate better design and better design assessments. A difficulty in wave energy technology development is the absence to date of a verifiable minimum viable product against which the merits of new products might be measured. A consequence of this absence is that technology development progress, technology value, and technology funding have largely been measured by, associated with, and driven by technology readiness, expressed in technology readiness levels (TRLs). Originating primarily from the space and defense industries, TRLs focus on procedural implementation of technology developments in large and complex engineering projects, where cost is neither mission critical nor a key design driver. The key deficiency of the TRL approach in the context of wave energy conversion is that WEC technology development has been too focused on technical readiness and not enough on stakeholder requirements, particularly the economic viability required for market entry.

  11. A Weight Comparison of Several Attitude Controls for Satellites

    NASA Technical Reports Server (NTRS)

    Adams, James J.; Chilton, Robert G.

    1959-01-01

A brief theoretical study has been made for the purpose of estimating and comparing the weight of three different types of controls that can be used to change the attitude of a satellite. The three types of controls are jet reaction, inertia wheel, and a magnetic bar which interacts with the magnetic field of the earth. An idealized task which imposed severe requirements on the angular motion of the satellite was used as the basis for comparison. The results showed that a control for one axis can be devised which will weigh less than 1 percent of the total weight of the satellite. The inertia-wheel system offers weight-saving possibilities if a large number of cycles of operation are required, whereas the jet system would be preferred if a limited number of cycles are required. The magnetic-bar control requires such a large magnet that it is impractical for the example application but might be of value for supplying small trimming moments about certain axes.

  12. An Approach to Building a Traceability Tool for Software Development

    NASA Technical Reports Server (NTRS)

    Delgado, Nelly; Watson, Tom

    1997-01-01

It is difficult in a large, complex computer program to ensure that it meets the specified requirements. As the program evolves over time, all program constraints originally elicited during the requirements phase must be maintained. In addition, during the life cycle of the program, requirements typically change, and the program must consistently reflect those changes. Imagine the following scenario. Company X wants to develop a system to automate its assembly line. With such a large system, there are many different stakeholders, e.g., managers, experts such as industrial and mechanical engineers, and end-users. Requirements would be elicited from all of the stakeholders involved in the system, with each stakeholder contributing their point of view to the requirements. For example, some of the requirements provided by an industrial engineer may concern the movement of parts through the assembly line. A point of view provided by the electrical engineer may be reflected in constraints concerning maximum power usage. End-users may be concerned with comfort and safety issues, whereas managers are concerned with the efficiency of the operation. With so many points of view affecting the requirements, it is difficult to manage them and communicate information to relevant stakeholders, and it is likely that conflicts in the requirements will arise. In the coding process, the implementors will make additional assumptions and interpretations on the design and the requirements of the system. During any stage of development, stakeholders may request that a requirement be added or changed. In such a dynamic environment, it is difficult to guarantee that the system will preserve the current set of requirements. Tracing, the mapping between objects in the artifacts of the system being developed, addresses this issue.
Artifacts encompass documents such as the system definition, interview transcripts, memoranda, the software requirements specification, user's manuals, the functional specifications, design reports, and system code. Tracing helps 1) validate system features against the requirements specification, 2) identify error sources and, most importantly, 3) manage change. With so many people involved in the development of the system, it becomes necessary to identify the reasons behind the design requirements or the implementation decisions. This paper is concerned with an approach that maps documents to constraints that capture properties of, and relationships between, the objects being modeled by the program. Section 2 provides the reader with a background on traceability tools. Section 3 gives a brief description of the context monitoring system on which the approach suggested in this paper is based. Section 4 presents an overview of our approach to providing traceability. The last section presents our future direction of research.
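The tracing idea the abstract describes, mapping requirements to the artifacts and code objects that realize them, can be sketched as a minimal data structure. All IDs, file names, and object names here are hypothetical, chosen only to mirror the assembly-line scenario above.

```python
# Hypothetical requirement-to-artifact trace table: each requirement maps to
# the documents and code objects that realize it, so a change request can be
# propagated to everything that must be re-checked.
trace = {
    "REQ-001": {"artifacts": ["design_report.md", "functional_spec.md"],
                "code": ["conveyor.move_part"]},
    "REQ-002": {"artifacts": ["functional_spec.md"],
                "code": ["power.budget_check"]},
}

def impacted_by(req_id):
    """Return every artifact and code object traced to a changed requirement."""
    entry = trace[req_id]
    return sorted(entry["artifacts"] + entry["code"])

print(impacted_by("REQ-001"))
```

Real traceability tools add reverse links (code back to requirements) and conflict detection, but the change-impact query shown is the core operation.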

  13. Metis Hub: The Development of an Intuitive Project Planning System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McConnell, Rachael M.; Lawrence Livermore National Lab.

    2015-08-26

The goal is to develop an intuitive, dynamic, and consistent interface for the Metis Planning System by combining user requirements and human engineering concepts. The system is largely based upon existing systems, so some tools already have working models that we can follow. However, the web-based interface is completely new.

  14. Nonlinear finite element formulation for the large displacement analysis in multibody system dynamics

    NASA Technical Reports Server (NTRS)

    Rismantab-Sany, J.; Chang, B.; Shabana, A. A.

    1989-01-01

A total Lagrangian finite element formulation for the deformable bodies in multibody mechanical systems that undergo finite relative rotations is developed. The deformable bodies are discretized using finite element methods. The shape functions used to describe the displacement field are required to include the rigid body modes that describe only large translational displacements. This does not impose any limitation on the technique because most commonly used shape functions satisfy this requirement. The configuration of an element is defined using four coordinate systems: body, element, intermediate element, and global. The body coordinate system serves as a unique standard for the assembly of the elements forming the deformable body. The element coordinate system is rigidly attached to the element and therefore translates and rotates with the element. The intermediate element coordinate system, whose axes are initially parallel to the element axes, has an origin rigidly attached to the origin of the body coordinate system and is used to conveniently describe the configuration of the element in the undeformed state with respect to the body coordinate system.

  15. NASA's Advanced Life Support Systems Human-Rated Test Facility

    NASA Technical Reports Server (NTRS)

    Henninger, D. L.; Tri, T. O.; Packham, N. J.

    1996-01-01

Future NASA missions to explore the solar system will be long-duration missions, requiring human life support systems that must operate with very high reliability over long periods of time. Such systems must be highly regenerative, requiring minimum resupply, to enable the crews to be largely self-sufficient. These regenerative life support systems will use a combination of higher plants, microorganisms, and physicochemical processes to recycle air and water, produce food, and process wastes. A key step in the development of these systems is the establishment of a human-rated test facility specifically tailored to evaluation of closed, regenerative life support systems, one in which long-duration, large-scale testing involving human test crews can be performed. Construction of such a facility, the Advanced Life Support Program's (ALS) Human-Rated Test Facility (HRTF), has begun at NASA's Johnson Space Center, and definition of systems and development of initial outfitting concepts for the facility are underway. This paper will provide an overview of the HRTF project plan, an explanation of baseline configurations, and descriptive illustrations of facility outfitting concepts.

  16. XM1 Gunnery Training and Aptitude Requirements Analyses

    DTIC Science & Technology

    1981-02-01

of the XM1 tank weapons system. Army materiel systems such as the XM1 tank are initiated, developed, deployed, supported, modified and disposed in...Analysis (TASA) to satisfy the FEA requirement. Users of the TASA at the Armor School were uniformly critical of the work. Generally described as...inaccurate, incomplete and, to a large extent, obsolete, the TASA failed to provide the information necessary for addressing the concerns of future operators

  17. Magnetic Sensors with Picotesla Magnetic Field Sensitivity at Room Temperature

    DTIC Science & Technology

    2008-06-01

such small fields require cryogenic cooling such as SQUID sensors, require sophisticated detection systems such as atomic magnetometers and fluxgate magnetometers, or have large size and poor low-frequency performance such as coil systems. [3-7] The minimum detectable field (the field noise times...Kingdon, "Development of a Combined EMI/Magnetometer Sensor for UXO Detection," Proc. Symposium on the Applications of Geophysics to Environmental and

  18. Design of an airborne lidar for stratospheric aerosol measurements

    NASA Technical Reports Server (NTRS)

    Evans, W. E.

    1977-01-01

    A modular, multiple-telescope receiving concept is developed to gain a relatively large receiver collection aperture without requiring extensive modifications to the aircraft. This concept, together with the choice of a specific photodetector, signal processing, and data recording system capable of maintaining approximately 1% precision over the required large signal amplitude range, is found to be common to all of the options. It is recommended that development of the lidar begin by more detailed definition of solutions to these important common signal detection and recording problems.

  19. Technology Challenges and Opportunities for Very Large In-Space Structural Systems

    NASA Technical Reports Server (NTRS)

    Belvin, W. Keith; Dorsey, John T.; Watson, Judith J.

    2009-01-01

    Space solar power satellites and other large space systems will require creative and innovative concepts in order to achieve economically viable designs. The mass and volume constraints of current and planned launch vehicles necessitate highly efficient structural systems be developed. In addition, modularity and in-space deployment/construction will be enabling design attributes. While current space systems allocate nearly 20 percent of the mass to the primary structure, the very large space systems of the future must overcome subsystem mass allocations by achieving a level of functional integration not yet realized. A proposed building block approach with two phases is presented to achieve near-term solar power satellite risk reduction with accompanying long-term technology advances. This paper reviews the current challenges of launching and building very large space systems from a structures and materials perspective utilizing recent experience. Promising technology advances anticipated in the coming decades in modularity, material systems, structural concepts, and in-space operations are presented. It is shown that, together, the current challenges and future advances in very large in-space structural systems may provide the technology pull/push necessary to make solar power satellite systems more technically and economically feasible.

  20. Simultaneous analysis of large INTEGRAL/SPI1 datasets: Optimizing the computation of the solution and its variance using sparse matrix algorithms

    NASA Astrophysics Data System (ADS)

    Bouchet, L.; Amestoy, P.; Buttari, A.; Rouet, F.-H.; Chauvin, M.

    2013-02-01

    Nowadays, analyzing and reducing the ever larger astronomical datasets is becoming a crucial challenge, especially for long cumulated observation times. The INTEGRAL/SPI X/γ-ray spectrometer is an instrument for which it is essential to process many exposures at the same time in order to increase the low signal-to-noise ratio of the weakest sources. In this context, the conventional methods for data reduction are inefficient and sometimes not feasible at all. Processing several years of data simultaneously requires computing not only the solution of a large system of equations, but also the associated uncertainties. We aim at reducing the computation time and the memory usage. Since the SPI transfer function is sparse, we have used some popular methods for the solution of large sparse linear systems; we briefly review these methods. We use the Multifrontal Massively Parallel Solver (MUMPS) to compute the solution of the system of equations. We also need to compute the variance of the solution, which amounts to computing selected entries of the inverse of the sparse matrix corresponding to our linear system. This can be achieved through one of the latest features of the MUMPS software that has been partly motivated by this work. In this paper we provide a brief presentation of this feature and evaluate its effectiveness on astrophysical problems requiring the processing of large datasets simultaneously, such as the study of the entire emission of the Galaxy. We used these algorithms to solve the large sparse systems arising from SPI data processing and to obtain both their solutions and the associated variances. In conclusion, thanks to these newly developed tools, processing large datasets arising from SPI is now feasible with both a reasonable execution time and a low memory usage.
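The two computations the abstract pairs, solving a large sparse system and extracting the variances from the diagonal of its inverse, can be sketched with SciPy's SuperLU interface. SciPy does not expose the MUMPS selected-inverse feature the paper uses, so the diagonal is obtained here by the naive stand-in of solving against unit vectors; the matrix is a small illustrative surrogate for the SPI normal equations, not real instrument data.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Small sparse SPD system standing in for the SPI transfer-function equations
n = 50
A = sp.diags([np.full(n - 1, -1.0), np.full(n, 4.0), np.full(n - 1, -1.0)],
             offsets=[-1, 0, 1], format="csc")
b = np.ones(n)

lu = splu(A)              # one sparse factorization, reused for every solve
x = lu.solve(b)           # the solution of the linear system

# Variance of each solution component = diagonal of A^{-1}; a naive stand-in
# for the MUMPS selected-inverse feature is to solve against unit vectors.
eye = np.eye(n)
var = np.array([lu.solve(eye[i])[i] for i in range(n)])
print(x[:3], var[:3])
```

The point of the MUMPS feature is precisely to avoid the n extra solves done here: selected entries of the inverse are computed directly from the factors, which is what makes variance computation tractable at SPI scales.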

  1. Big questions, big science: meeting the challenges of global ecology.

    PubMed

    Schimel, David; Keller, Michael

    2015-04-01

Ecologists are increasingly tackling questions that require significant infrastructure, large experiments, networks of observations, and complex data and computation. Key hypotheses in ecology increasingly require more investment and larger data sets than can be collected by a single investigator's or a group of investigators' labs, sustained for longer than a typical grant. Large-scale projects are expensive, so their scientific return on the investment has to justify the opportunity cost: the science foregone because resources were expended on a large project rather than supporting a number of individual projects. In addition, their management must be accountable and efficient in the use of significant resources, requiring the use of formal systems engineering and project management to mitigate the risk of failure. Mapping the scientific method into formal project management requires both scientists able to work in that context and a project implementation team sensitive to the unique requirements of ecology. Sponsoring agencies, under external and internal forces, experience many pressures that push them towards counterproductive project management, but a scientific community aware of and experienced in large-project science can mitigate these tendencies. For big ecology to result in great science, ecologists must become informed, aware, and engaged in the advocacy and governance of large ecological projects.

  2. An advanced actuator for high-performance slewing

    NASA Technical Reports Server (NTRS)

    Downer, James; Eisenhaure, David; Hockney, Richard

    1988-01-01

A conceptual design for an advanced momentum exchange actuator for application to spacecraft slewing is described. The particular concept is a magnetically suspended, magnetically gimballed control moment gyro (CMG). A scissored pair of these devices is sized to provide the torque and angular momentum capacity required to reorient a large spacecraft through large-angle maneuvers. The concept utilizes a composite-material rotor to achieve the high momentum and energy densities needed to minimize system mass, and an advanced superconducting magnetic suspension to minimize system weight and power consumption. The magnetic suspension system is also capable of allowing large-angle gimballing of the rotor, thus eliminating the mass and reliability penalties attendant to conventional gimbals. Descriptions of the various subelement designs are included, along with the necessary system sizing formulations and material considerations.

  3. Comparison of an algebraic multigrid algorithm to two iterative solvers used for modeling ground water flow and transport

    USGS Publications Warehouse

    Detwiler, R.L.; Mehl, S.; Rajaram, H.; Cheung, W.W.

    2002-01-01

    Numerical solution of large-scale ground water flow and transport problems is often constrained by the convergence behavior of the iterative solvers used to solve the resulting systems of equations. We demonstrate the ability of an algebraic multigrid algorithm (AMG) to efficiently solve the large, sparse systems of equations that result from computational models of ground water flow and transport in large and complex domains. Unlike geometric multigrid methods, this algorithm is applicable to problems in complex flow geometries, such as those encountered in pore-scale modeling of two-phase flow and transport. We integrated AMG into MODFLOW 2000 to compare two- and three-dimensional flow simulations using AMG to simulations using PCG2, a preconditioned conjugate gradient solver that uses the modified incomplete Cholesky preconditioner and is included with MODFLOW 2000. CPU times required for convergence with AMG were up to 140 times faster than those for PCG2. The cost of this increased speed was up to a nine-fold increase in required random access memory (RAM) for the three-dimensional problems and up to a four-fold increase in required RAM for the two-dimensional problems. We also compared two-dimensional numerical simulations of steady-state transport using AMG and the generalized minimum residual method with an incomplete LU-decomposition preconditioner. For these transport simulations, AMG yielded increased speeds of up to 17 times with only a 20% increase in required RAM. The ability of AMG to solve flow and transport problems in large, complex flow systems and its ready availability make it an ideal solver for use in both field-scale and pore-scale modeling.
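The abstract's solver comparison can be illustrated in miniature. SciPy ships no algebraic multigrid solver (the pyamg package provides one), so the sketch below shows the same kind of preconditioned-versus-plain comparison using incomplete-LU preconditioned conjugate gradients on a small 2D Poisson system; the grid size, drop tolerance, and tolerances are all invented for illustration.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, spilu, LinearOperator

# 2D Poisson (5-point Laplacian) standing in for a ground-water flow system
n = 30
I = sp.identity(n)
T = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(sp.diags([-1.0, -1.0], [-1, 1], shape=(n, n)), I)).tocsc()
b = np.ones(n * n)

def solve(A, b, M=None):
    """Run CG, counting iterations via the callback."""
    iters = [0]
    def cb(xk):
        iters[0] += 1
    x, info = cg(A, b, M=M, atol=1e-8, maxiter=5000, callback=cb)
    assert info == 0, "CG failed to converge"
    return x, iters[0]

x_plain, it_plain = solve(A, b)
ilu = spilu(A, drop_tol=1e-4)                    # incomplete-LU preconditioner
M = LinearOperator(A.shape, matvec=ilu.solve)
x_prec, it_prec = solve(A, b, M=M)
print(f"plain CG: {it_plain} iters, preconditioned CG: {it_prec} iters")
```

As in the paper's AMG-versus-PCG2 comparison, the preconditioned solver trades extra setup work and memory (here, the ILU factors) for far fewer iterations to reach the same tolerance.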

  4. Development and Validation of High Precision Thermal, Mechanical, and Optical Models for the Space Interferometry Mission

    NASA Technical Reports Server (NTRS)

    Lindensmith, Chris A.; Briggs, H. Clark; Beregovski, Yuri; Feria, V. Alfonso; Goullioud, Renaud; Gursel, Yekta; Hahn, Inseob; Kinsella, Gary; Orzewalla, Matthew; Phillips, Charles

    2006-01-01

SIM PlanetQuest (SIM) is a large optical interferometer for making microarcsecond measurements of the positions of stars and for detecting Earth-sized planets around nearby stars. To achieve this precision, SIM requires stability of optical components to tens of picometers per hour. The combination of SIM's large size (9 meter baseline) and the high stability requirement makes it difficult and costly to measure all aspects of system performance on the ground. To reduce risks and costs, and to allow for a design with fewer intermediate testing stages, the SIM project is developing an integrated thermal, mechanical, and optical modeling process that will allow predictions of the system performance to be made at the required high precision. This modeling process uses commercial, off-the-shelf tools and has been validated against experimental results at the precision of the SIM performance requirements. This paper presents a description of the model development, some of the models, and their validation in the Thermo-Opto-Mechanical (TOM3) testbed, which includes full-scale brassboard optical components and the metrology to test them at the SIM performance requirement levels.

  5. Investigation of Propulsion System Requirements for Spartan Lite

    NASA Technical Reports Server (NTRS)

    Urban, Mike; Gruner, Timothy; Morrissey, James; Sneiderman, Gary

    1998-01-01

This paper discusses the (chemical or electric) propulsion system requirements necessary to increase the Spartan Lite science mission lifetime to over a year. Spartan Lite is an extremely low-cost (less than $10M) spacecraft bus being developed at the NASA Goddard Space Flight Center to accommodate sounding rocket class (40 W, 45 kg, 35 cm dia by 1 m length) payloads. While Spartan Lite is compatible with expendable launch vehicles, most missions are expected to be tertiary payloads deployed by the Space Shuttle. To achieve a one-year or longer mission life from typical Shuttle orbits, some form of propulsion system is required. Chemical propulsion systems (characterized by high-thrust impulsive maneuvers) and electrical propulsion systems (characterized by low-thrust, long-duration maneuvers and the additional requirement for electrical power) are discussed. The performance of the Spartan Lite attitude control system in the presence of large disturbance torques is evaluated using the Treetops(TM) dynamic simulator. This paper discusses the performance goals and resource constraints for candidate Spartan Lite propulsion systems and uses them to specify quantitative requirements against which the systems are evaluated.

  6. From Science To Design: Systems Engineering For The Lsst

    NASA Astrophysics Data System (ADS)

    Claver, Chuck F.; Axelrod, T.; Fouts, K.; Kantor, J.; Nordby, M.; Sebag, J.; LSST Collaboration

    2009-01-01

The LSST is a universal-purpose survey telescope that will address scores of scientific missions. To help the technical teams converge on a specific engineering design, the LSST Science Requirements Document (SRD) selects four stressing principal scientific missions: 1) Constraining Dark Matter and Dark Energy; 2) Taking an Inventory of the Solar System; 3) Exploring the Transient Optical Sky; and 4) Mapping the Milky Way. From these four missions the SRD specifies the requirements for single images and for the full 10-year survey that enable a wide range of science beyond the four principal missions. Through optical design and analysis, operations simulation, and throughput modeling, the systems engineering effort in the LSST has largely focused on taking the SRD specifications and deriving system functional requirements that define the system design. A Model Based Systems Engineering approach with SysML is used to manage the flow-down of requirements from science to system function to sub-system. The rigor of requirements flow and management helps the LSST keep the overall scope, and hence budget and schedule, under control.

  7. A technology program for the development of the large deployable reflector for space based astronomy

    NASA Technical Reports Server (NTRS)

    Kiya, M. K.; Gilbreath, W. P.; Swanson, P. N.

    1982-01-01

Technologies for the development of the Large Deployable Reflector (LDR), a NASA project for the 1990s for infrared and submillimeter astronomy, are presented. The proposed LDR is a 10-30 m diameter spaceborne observatory operating in the spectral region from 30 microns to one millimeter, where ground observations are nearly impossible. Scientific rationales for such a system include the study of ancient signals from galaxies at the edge of the universe, the study of star formation, and the observation of fluctuations in the cosmic background radiation. System requirements include the ability to observe faint objects at large distances and to map molecular clouds and H II regions. From these requirements, mass, photon noise, and tolerance budgets are developed. A strawman concept is established and some alternate concepts are considered, but research is still needed in the areas of segment, optical control, and instrument technologies.

  8. Low-cost space-varying FIR filter architecture for computational imaging systems

    NASA Astrophysics Data System (ADS)

    Feng, Guotong; Shoaib, Mohammed; Schwartz, Edward L.; Dirk Robinson, M.

    2010-01-01

    Recent research demonstrates the advantage of designing electro-optical imaging systems by jointly optimizing the optical and digital subsystems. The optical systems designed using this joint approach intentionally introduce large and often space-varying optical aberrations that produce blurry optical images. Digital sharpening restores reduced contrast due to these intentional optical aberrations. Computational imaging systems designed in this fashion have several advantages including extended depth-of-field, lower system costs, and improved low-light performance. Currently, most consumer imaging systems lack the necessary computational resources to compensate for these optical systems with large aberrations in the digital processor. Hence, the exploitation of the advantages of the jointly designed computational imaging system requires low-complexity algorithms enabling space-varying sharpening. In this paper, we describe a low-cost algorithmic framework and associated hardware enabling the space-varying finite impulse response (FIR) sharpening required to restore largely aberrated optical images. Our framework leverages the space-varying properties of optical images formed using rotationally-symmetric optical lens elements. First, we describe an approach to leverage the rotational symmetry of the point spread function (PSF) about the optical axis allowing computational savings. Second, we employ a specially designed bank of sharpening filters tuned to the specific radial variation common to optical aberrations. We evaluate the computational efficiency and image quality achieved by using this low-cost space-varying FIR filter architecture.
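The paper's key observation, that a rotationally symmetric lens produces aberrations that vary mainly with radius from the optical axis, can be sketched as a small filter bank indexed by radial band. The kernels, band rule, and sharpening strengths below are invented placeholders, not the paper's designed filters.

```python
import numpy as np
from scipy.ndimage import convolve

def radial_band_sharpen(img, center, n_bands=3):
    """Sharpen with a small bank of FIR kernels selected by radius from the
    optical axis -- a simplified stand-in for the paper's space-varying bank."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - center[0], xx - center[1])
    band = np.minimum((r / (r.max() + 1e-9) * n_bands).astype(int), n_bands - 1)

    out = np.zeros_like(img, dtype=float)
    for k in range(n_bands):
        alpha = 0.5 + 0.5 * k                 # stronger sharpening at larger radii
        kernel = -alpha / 8.0 * np.ones((3, 3))
        kernel[1, 1] = 1.0 + alpha            # unsharp-style kernel, sums to 1
        out[band == k] = convolve(img, kernel, mode="nearest")[band == k]
    return out

img = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
sharp = radial_band_sharpen(img, center=(32, 32))
print(sharp.shape)
```

Because each pixel only selects among a few precomputed kernels, the per-pixel cost stays close to that of a single fixed FIR filter, which is the hardware saving the architecture exploits.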

  9. Legacy Phosphorus Effect and Need to Re-calibrate Soil Test P Methods for Organic Crop Production.

    NASA Astrophysics Data System (ADS)

    Dao, Thanh H.; Schomberg, Harry H.; Cavigelli, Michel A.

    2015-04-01

    Phosphorus (P) is a required nutrient for the normal development and growth of plants and supplemental P is needed in most cultivated soils. Large inputs of cover crop residues and nutrient-rich animal manure are added to supply needed nutrients to promote the optimal production of organic grain crops and forages. The effects of crop rotations and tillage management of the near-surface zone on labile phosphorus (P) forms were studied in soil under conventional and organic crop management systems in the mid-Atlantic region of the U.S. after 18 years due to the increased interest in these alternative systems. Soil nutrient surpluses likely caused by low grain yields resulted in large pools of exchangeable phosphate-P and equally large pools of enzyme-labile organic P (Po) in soils under organic management. In addition, the difference in the P loading rates between the conventional and organic treatments as guided by routine soil test recommendations suggested that overestimating plant P requirements contributed to soil P surpluses because routine soil testing procedures did not account for the presence and size of the soil enzyme-labile Po pool. The effect of large P additions is long-lasting as they continued to contribute to elevated soil total bioactive P concentrations 12 or more years later. Consequently, accurate estimates of crop P requirements, P turnover in soil, and real-time plant and soil sensing systems are critical considerations to optimally manage manure-derived nutrients in organic crop production.

  10. A superconducting large-angle magnetic suspension

    NASA Technical Reports Server (NTRS)

    Downer, James R.; Anastas, George V., Jr.; Bushko, Dariusz A.; Flynn, Frederick J.; Goldie, James H.; Gondhalekar, Vijay; Hawkey, Timothy J.; Hockney, Richard L.; Torti, Richard P.

    1992-01-01

SatCon Technology Corporation has completed a Small Business Innovation Research (SBIR) Phase 2 program to develop a Superconducting Large-Angle Magnetic Suspension (LAMS) for the NASA Langley Research Center. The Superconducting LAMS was a hardware demonstration of the control technology required to develop an advanced momentum exchange effector. The Phase 2 research was directed toward demonstrating the key technology required for the advanced-concept CMG: the controller. The Phase 2 hardware consists of a superconducting solenoid ('source coil') suspended within an array of nonsuperconducting coils ('control coils'), a five-degree-of-freedom position sensing system, switching power amplifiers, and a digital control system. The results demonstrated the feasibility of suspending the source coil. Gimballing (pointing the axis of the source coil) was demonstrated over a limited range. With further development of the rotation sensing system, enhanced angular freedom should be possible.

  11. A superconducting large-angle magnetic suspension

    NASA Astrophysics Data System (ADS)

    Downer, James R.; Anastas, George V., Jr.; Bushko, Dariusz A.; Flynn, Frederick J.; Goldie, James H.; Gondhalekar, Vijay; Hawkey, Timothy J.; Hockney, Richard L.; Torti, Richard P.

    1992-12-01

SatCon Technology Corporation has completed a Small Business Innovation Research (SBIR) Phase 2 program to develop a Superconducting Large-Angle Magnetic Suspension (LAMS) for the NASA Langley Research Center. The Superconducting LAMS was a hardware demonstration of the control technology required to develop an advanced momentum exchange effector. The Phase 2 research was directed toward demonstrating the key technology required for the advanced-concept CMG: the controller. The Phase 2 hardware consists of a superconducting solenoid ('source coil') suspended within an array of nonsuperconducting coils ('control coils'), a five-degree-of-freedom position sensing system, switching power amplifiers, and a digital control system. The results demonstrated the feasibility of suspending the source coil. Gimballing (pointing the axis of the source coil) was demonstrated over a limited range. With further development of the rotation sensing system, enhanced angular freedom should be possible.

  12. Exploring model based engineering for large telescopes: getting started with descriptive models

    NASA Astrophysics Data System (ADS)

    Karban, R.; Zamparelli, M.; Bauvir, B.; Koehler, B.; Noethe, L.; Balestra, A.

    2008-07-01

Large telescopes pose a continuous challenge to systems engineering due to their complexity in terms of requirements, operational modes, long duty lifetime, interfaces, and number of components. A multitude of decisions must be taken throughout the life cycle of a new system, and a prime means of coping with complexity and uncertainty is using models as one decision aid. The potential of descriptive models based on the OMG Systems Modeling Language (OMG SysML(TM)) is examined in different areas: building a comprehensive model serves as the basis for subsequent activities of solicitation and review for requirements, analysis, and design alike. Furthermore, a model is an effective communication instrument against the misinterpretation pitfalls that are typical of cross-disciplinary activities when using only natural language or free-format diagrams. Modeling the essential characteristics of the system, such as interfaces, system structure, and behavior, addresses important system-level issues. Also shown is how to use a model as an analysis tool to describe the relationships among disturbances, opto-mechanical effects, and control decisions, and to refine the control use cases. Considerations on the scalability of the model structure and organization, its impact on the development process, the relation to document-centric structures, style and usage guidelines, and the required tool chain are presented.

  13. Robust scalable stabilisability conditions for large-scale heterogeneous multi-agent systems with uncertain nonlinear interactions: towards a distributed computing architecture

    NASA Astrophysics Data System (ADS)

    Manfredi, Sabato

    2016-06-01

    Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology and environmental monitoring to sensor networks and power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, and their analysis and control design demand increasingly heavy computation as the network size and node system/interaction complexity grow. Finding scalable computational methods for the distributed control design of large-scale networks is therefore a challenging problem. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with the MATLAB toolbox. Stabilisability of each node dynamic is a sufficient assumption for designing a globally stabilising distributed control. The proposed approach improves on some existing LMI-based results for MASs by overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in computational requirements in the case of weakly heterogeneous MASs, a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows moving from a centralised towards a distributed computing architecture, so that the expensive computational workload of solving the LMIs may be shared among processors located at the network nodes, increasing the scalability of the approach with respect to the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity compared with existing approaches.
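    The LMI conditions in this record rest on Lyapunov certificates of the form A^T P + P A < 0 with P positive definite. A minimal single-node sketch of that certificate (not the paper's distributed LMI design, which requires an SDP/LMI solver) can be written with SciPy's Lyapunov-equation solver; the matrix A below is an invented example of one stabilized node dynamic.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Invented example of a single stabilized node dynamic x' = A x.
A = np.array([[-1.0, 2.0],
              [ 0.0, -3.0]])           # Hurwitz: eigenvalues -1, -3
Q = np.eye(2)

# Solve the Lyapunov equation A^T P + P A = -Q. If P is symmetric
# positive definite, V(x) = x^T P x certifies asymptotic stability,
# the scalar analogue of the paper's LMI feasibility conditions.
P = solve_continuous_lyapunov(A.T, -Q)
print(np.linalg.eigvalsh(P))           # all positive => certificate holds
```

    The distributed version in the paper essentially stitches such per-node certificates together under bounds on the uncertain interconnections.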

  14. Large Bore Powder Gun Qualification (U)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rabern, Donald A.; Valdiviez, Robert

    A Large Bore Powder Gun (LBPG) is being designed to enable experimentalists to characterize material behavior outside the capabilities of the NNSS JASPER and LANL TA-55 PF-4 guns. The combination of these three guns will create a capability to conduct impact experiments over a wide range of pressures and shock profiles. The Large Bore Powder Gun will be fielded at the Nevada National Security Site (NNSS) U1a Complex. The Complex is nearly 1000 ft below ground with dedicated drifts for testing, instrumentation, and post-shot entombment. To ensure the reliability, safety, and performance of the LBPG, a qualification plan has been established and documented here. Requirements for the LBPG have been established and documented in WE-14-TR-0065 U A, Large Bore Powder Gun Customer Requirements. That document includes the requirements for the physics experiments, the gun and confinement systems, and operations at NNSS; a detailed description of the requirements is established there and is referred to and quoted throughout this document. Two Gun and Confinement Systems will be fielded. The Prototype Gun will be used primarily to characterize gun and confinement performance and will be the primary platform for qualification actions. This gun will also be used to investigate and qualify target and diagnostic modifications through the life of the program (U1a.104 Drift). An identical gun, the Physics Gun, will be fielded for confirmatory and Pu experiments (U1a.102D Drift). Both guns will be qualified for operation. The Gun and Confinement System design will be qualified through analysis, inspection, and testing, using the Prototype Gun for most of the process. The Physics Gun will be qualified through inspection and a limited number of qualification tests to ensure performance and behavior equivalent to the Prototype Gun. Figure 1.1 shows the partial configuration of U1a and the locations of the Prototype and Physics Gun/Confinement Systems.

  15. A Design Rationale Capture Using REMAP/MM

    DTIC Science & Technology

    1994-06-01

    company-wide down-sizing, the power company has determined that an automated service order processing system is the most economical solution. This new...service order processing system for a large power company can easily be...led. A system of this complexity would typically require three to five years

  16. Maxi CAI with a Micro.

    ERIC Educational Resources Information Center

    Gerhold, George; And Others

    This paper describes an effective microprocessor-based CAI system which has been repeatedly tested by a large number of students and edited accordingly. Tasks not suitable for microprocessor-based systems (authoring, testing, and debugging) were handled on larger multi-terminal systems. This approach requires that the CAI language used on the…

  17. On the management and processing of earth resources information

    NASA Technical Reports Server (NTRS)

    Skinner, C. W.; Gonzalez, R. C.

    1973-01-01

    The basic concepts of a recently completed large-scale earth resources information system plan are reported. Attention is focused throughout the paper on the information management and processing requirements. After the development of the principal system concepts, a model system for implementation at the state level is discussed.

  18. Optical CDMA components requirements

    NASA Astrophysics Data System (ADS)

    Chan, James K.

    1998-08-01

    Optical CDMA is a multiple-access technology complementary to WDMA. Optical CDMA potentially provides a large number of virtual optical channels for IXCs, LECs, and CLECs, or supports a large number of high-speed users in a LAN. In a network, it provides asynchronous, multi-rate, multi-user communication with network scalability, re-configurability (bandwidth on demand), and network security (provided by the inherent CDMA coding). However, optical CDMA technology is less mature than WDMA, and its component requirements are also different. We have demonstrated a video transport/switching system over a distance of 40 km using discrete optical components in our laboratory, and we are currently pursuing PIC implementation. In this paper, we describe the optical CDMA concept and features, the demonstration system, and the requirements of some critical optical components such as the broadband optical source, broadband optical amplifier, spectral spreading/de-spreading elements, and fixed/programmable mask.
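    The code-division idea behind optical CDMA, spreading each user's data with an (ideally orthogonal) code so that many users share one medium and are separated by correlation, can be sketched in a few lines. This is a baseband toy with bipolar Walsh codes, not an optical-domain implementation (real optical CDMA uses spectral or temporal codes on light).

```python
import numpy as np

# Two orthogonal Walsh spreading codes (4 chips per bit).
walsh = np.array([[1, 1, 1, 1],
                  [1, -1, 1, -1]])

bits_a = np.array([1, -1, 1])            # user A data (+1/-1)
bits_b = np.array([-1, -1, 1])           # user B data

# Spread each bit by the user's code and sum on the shared channel.
channel = np.concatenate([a * walsh[0] + b * walsh[1]
                          for a, b in zip(bits_a, bits_b)])

# De-spread: correlate each 4-chip block against user A's code.
# Orthogonality cancels user B's contribution exactly.
rx_a = np.sign(channel.reshape(-1, 4) @ walsh[0])
print(rx_a)                              # recovers user A's bits
```

    With imperfectly orthogonal codes (as in practical optical code sets), the cancellation is only approximate, which is one reason component requirements such as spectral spreading/de-spreading accuracy matter.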

  19. A Data Analysis Expert System For Large Established Distributed Databases

    NASA Astrophysics Data System (ADS)

    Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick

    1987-05-01

    The purpose of this work is to analyze the applicability of artificial intelligence techniques for developing a user-friendly, parallel interface to large, isolated, incompatible NASA databases for the purpose of assisting the management decision process. To carry out this work, a survey was conducted to establish the data access requirements of several key NASA user groups, and current NASA database access methods were evaluated. The results of this work are presented in the form of a design for a natural language database interface system, called the Deductively Augmented NASA Management Decision Support System (DANMDS). This design is feasible principally because of recently announced commercial hardware and software product developments that allow cross-vendor compatibility. The goal of the DANMDS system addresses the central dilemma confronting most large companies and institutions in America: the retrieval of information from large, established, incompatible database systems. The DANMDS implementation would represent a significant first step toward this problem's resolution.

  20. Structural performance analysis and redesign

    NASA Technical Reports Server (NTRS)

    Whetstone, W. D.

    1978-01-01

    Program performs stress, buckling, and vibrational analysis of large, linear, finite-element systems in excess of 50,000 degrees of freedom. Cost, execution time, and storage requirements are kept reasonable through use of sparse-matrix solution techniques and other computational and data-management procedures designed for problems of very large size.
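    The sparse-matrix point can be made concrete: a 50,000-DOF system with banded connectivity stores only a few hundred thousand nonzeros rather than 2.5 billion dense entries, which is what makes such problems tractable. A sketch with SciPy, using an illustrative 1-D stiffness (tridiagonal Laplacian) matrix rather than the program's actual element formulation:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# 50,000-DOF tridiagonal "stiffness" system: ~150k stored nonzeros
# versus 2.5e9 dense entries.
n = 50_000
K = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
f = np.ones(n)

u = spsolve(K, f)                      # direct sparse factorization
residual = np.linalg.norm(K @ u - f)   # small: the solve is accurate
print(residual)
```

    A dense factorization of the same system would need roughly 20 GB of storage for the matrix alone; the sparse solve runs in a fraction of a second on modest hardware.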

  1. Reducing work zone crashes by using vehicle's flashers as a warning sign : final report

    DOT National Transportation Integrated Search

    2009-01-01

    Rural two-lane highways constitute a large percentage of the highway system in Kansas. Preserving, expanding, and enhancing these highways requires the set-up of a large number of one-lane, two-way work zones where traffic safety has been a severe...

  2. Llamas: Large-area microphone arrays and sensing systems

    NASA Astrophysics Data System (ADS)

    Sanz-Robinson, Josue

    Large-area electronics (LAE) provides a platform for building sensing systems based on distributing large numbers of densely spaced sensors over a physically expansive space. Due to their flexible, "wallpaper-like" form factor, these systems can be seamlessly deployed in everyday spaces. They go beyond merely supplying sensor readings: they aim to transform the wealth of data from these sensors into actionable inferences about our physical environment. This requires vertically integrated systems that span the entirety of the signal-processing chain, including transducers and devices, circuits, and signal-processing algorithms. To this end we develop hybrid LAE/CMOS systems, which exploit the complementary strengths of LAE, enabling spatially distributed sensors, and CMOS ICs, providing computational capacity for signal processing. To explore the development of hybrid sensing systems based on vertical integration across the signal-processing chain, we focus on two main drivers: (1) thin-film diodes, and (2) microphone arrays for blind source separation. 1) Thin-film diodes are a key building block for many applications, such as RFID tags or power transfer over non-contact inductive links, which require rectifiers for AC-to-DC conversion. We developed hybrid amorphous/nanocrystalline silicon diodes, which are fabricated at low temperatures (<200 °C) to be compatible with processing on plastic, and have high current densities (5 A/cm2 at 1 V) and high-frequency operation (cutoff frequency of 110 MHz). 2) We designed a system for separating the voices of multiple simultaneous speakers, which can ultimately be fed to a voice-command recognition engine for controlling electronic systems. On the device level, we developed flexible PVDF microphones, which were used to create a large-area microphone array. On the circuit level, we developed localized a-Si TFT amplifiers and a custom CMOS IC for system control, sensor readout, and digitization. On the signal-processing level, we developed an algorithm for blind source separation in a real, reverberant room, based on beamforming and binary masking. It requires no knowledge of the locations of the speakers or microphones; instead, it uses cluster-analysis techniques to determine the time delays for beamforming, thus adapting to the unique acoustic environment of the room.
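    One ingredient of the beamforming described above, estimating the delay of a source signal between two microphones, can be sketched with a plain cross-correlation. This toy uses a synthetic noise-like signal and a known sample delay; the thesis's cluster-analysis step for unknown, reverberant conditions is beyond this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 4_000                           # assumed sample rate
src = rng.standard_normal(fs)        # 1 s of a noise-like "voice" signal

true_delay = 7                       # samples of inter-microphone delay
mic1 = src
mic2 = np.concatenate([np.zeros(true_delay), src[:-true_delay]])

# Cross-correlate the two channels; the lag of the peak estimates
# the delay used to steer a delay-and-sum beamformer.
corr = np.correlate(mic2, mic1, mode="full")
est_delay = np.argmax(corr) - (len(mic1) - 1)
print(est_delay)                     # 7
```

    In a real room, reflections produce multiple correlation peaks, which is why the thesis clusters delay estimates over time rather than trusting a single peak.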

  3. Spaceport Command and Control System Software Development

    NASA Technical Reports Server (NTRS)

    Glasser, Abraham

    2017-01-01

    The Spaceport Command and Control System (SCCS) is the National Aeronautics and Space Administration's (NASA) launch control system for the Orion capsule and Space Launch System, the next-generation manned rocket currently in development. This large system requires intensive testing to properly measure its capabilities. Automating the test procedures would save the project money on labor costs and make the testing process more efficient. Therefore, the Exploration Systems Division (formerly the Electrical Engineering Division) at Kennedy Space Center (KSC) has recruited interns for the past two years to work alongside full-time engineers to develop these automated tests, as well as to innovate upon the current automation process.

  4. Geostationary platform systems concepts definition follow-on study. Volume 2A: Technical Task 2 LSST special emphasis

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The results of the Large Space Systems Technology special emphasis task are presented. The task was an analysis of structural requirements deriving from the initial Phase A Operational Geostationary Platform study.

  5. Synchronous Energy Technology

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The synchronous technology requirements for large space power systems are summarized. A variety of technology areas, including photovoltaics, thermal management, energy storage, and power management, are addressed.

  6. Requirements for guidelines systems: implementation challenges and lessons from existing software-engineering efforts.

    PubMed

    Shah, Hemant; Allard, Raymond D; Enberg, Robert; Krishnan, Ganesh; Williams, Patricia; Nadkarni, Prakash M

    2012-03-09

    A large body of work in the clinical guidelines field has identified requirements for guideline systems, but there are formidable challenges in translating such requirements into production-quality systems that can be used in routine patient care. Detailed analysis of requirements from an implementation perspective can be useful in helping define sub-requirements to the point where they are implementable; further, additional requirements emerge as a result of such analysis. During such an analysis, studying examples of existing software-engineering efforts in non-biomedical fields can provide useful signposts to the implementer of a clinical guideline system. In addition to requirements described by guideline-system authors, comparative reviews of such systems, and publications discussing information needs for guideline systems and clinical decision support systems in general, we have incorporated additional requirements related to production-system robustness and functionality from publications in the business-workflow domain, as well as drawing on our own experience in the development of the Proteus guideline system (http://proteme.org). The sub-requirements are discussed by grouping them into the categories used by the review of Isern and Moreno 2008. We cite previous work under each category, provide sub-requirements, and give examples of similar software-engineering efforts that have addressed comparable problems in non-biomedical contexts. When analyzing requirements from the implementation viewpoint, knowledge of successes and failures in related software-engineering efforts can guide implementers in the choice of effective design and development strategies.

  7. Requirements for guidelines systems: implementation challenges and lessons from existing software-engineering efforts

    PubMed Central

    2012-01-01

    Background A large body of work in the clinical guidelines field has identified requirements for guideline systems, but there are formidable challenges in translating such requirements into production-quality systems that can be used in routine patient care. Detailed analysis of requirements from an implementation perspective can be useful in helping define sub-requirements to the point where they are implementable; further, additional requirements emerge as a result of such analysis. During such an analysis, studying examples of existing software-engineering efforts in non-biomedical fields can provide useful signposts to the implementer of a clinical guideline system. Methods In addition to requirements described by guideline-system authors, comparative reviews of such systems, and publications discussing information needs for guideline systems and clinical decision support systems in general, we have incorporated additional requirements related to production-system robustness and functionality from publications in the business-workflow domain, as well as drawing on our own experience in the development of the Proteus guideline system (http://proteme.org). Results The sub-requirements are discussed by grouping them into the categories used by the review of Isern and Moreno 2008. We cite previous work under each category, provide sub-requirements, and give examples of similar software-engineering efforts that have addressed comparable problems in non-biomedical contexts. Conclusions When analyzing requirements from the implementation viewpoint, knowledge of successes and failures in related software-engineering efforts can guide implementers in the choice of effective design and development strategies. PMID:22405400

  8. Parameter identification of civil engineering structures

    NASA Technical Reports Server (NTRS)

    Juang, J. N.; Sun, C. T.

    1980-01-01

    This paper concerns the development of an identification method required to determine structural parameter variations for systems subjected to extended exposure to the environment. The concept of structural identifiability of a large-scale structural system in the absence of damping is presented. Three criteria are established indicating that a large number of system parameters (the coefficient parameters of the differential equations) can be identified by a few actuators and sensors. An eight-bay, fifteen-story frame structure is used as an example, and a simple model is employed for analyzing its dynamic response.
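    A toy analogue of such parameter identification: recover the stiffness-to-mass ratio of an undamped single-DOF oscillator from its sampled response by least squares. This is illustrative only, not the paper's eight-bay frame method or its identifiability criteria.

```python
import numpy as np

# Undamped oscillator x'' = -(k/m) x; the goal is to recover k/m
# from a sampled displacement record alone.
k_over_m = 400.0                       # true parameter (omega = 20 rad/s)
t = np.linspace(0.0, 1.0, 2001)
x = np.cos(np.sqrt(k_over_m) * t)      # simulated displacement samples

# Central-difference acceleration from the sampled displacement.
dt = t[1] - t[0]
acc = (x[2:] - 2 * x[1:-1] + x[:-2]) / dt**2

# Regress acc on -x to estimate k/m.
est, *_ = np.linalg.lstsq(-x[1:-1, None], acc, rcond=None)
print(est[0])                          # close to 400
```

    The multi-DOF version identifies many such coefficients at once from a few sensor records, which is where the paper's identifiability criteria come in.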

  9. The application of artificial intelligence techniques to large distributed networks

    NASA Technical Reports Server (NTRS)

    Dubyah, R.; Smith, T. R.; Star, J. L.

    1985-01-01

    Efforts toward data accessibility and transfer of information, including the land resources information system pilot, are structured as large computer information networks. Goals of these pilot efforts include reducing the difficulty of finding and using data, reducing processing costs, and minimizing incompatibility between data sources. Artificial intelligence (AI) techniques have been suggested as a means to achieve these goals. The applicability of certain AI techniques is explored in the context of distributed problem-solving systems and the pilot land data system (PLDS). The topics discussed include: PLDS and its data-processing requirements, expert systems and PLDS, distributed problem-solving systems, AI problem-solving paradigms, query processing, and distributed databases.

  10. Monitoring and Hardware Management for Critical Fusion Plasma Instrumentation

    NASA Astrophysics Data System (ADS)

    Carvalho, Paulo F.; Santos, Bruno; Correia, Miguel; Combo, Álvaro M.; Rodrigues, AntÓnio P.; Pereira, Rita C.; Fernandes, Ana; Cruz, Nuno; Sousa, Jorge; Carvalho, Bernardo B.; Batista, AntÓnio J. N.; Correia, Carlos M. B. A.; Gonçalves, Bruno

    2018-01-01

    Controlled nuclear fusion aims to obtain energy from collisions of particles confined inside a nuclear reactor (tokamak). These ionized particles, heavier isotopes of hydrogen, are the main elements of a plasma that is kept at high temperatures (millions of degrees Celsius). Due to the high temperatures and magnetic confinement, the plasma is exposed to several sources of instability, which require a set of procedures by the control and data acquisition systems throughout fusion experiments. Control and data acquisition systems used in nuclear fusion experiments are often based on the Advanced Telecommunications Computing Architecture (AdvancedTCA®) standard, introduced by the PCI Industrial Computer Manufacturers Group (PICMG®) to meet the demands of telecommunications for transporting large amounts of data (TB) at high transfer rates (Gb/s) while ensuring high availability, including features such as reliability, serviceability, and redundancy. For efficient plasma control, systems are required to collect large amounts of data, process them, store them for later analysis, make critical decisions in real time, and provide status reports on both the experiment itself and the electronic instrumentation involved. Moreover, systems should ensure the correct handling of detected anomalies and identified faults, and notify the system operator of events that have occurred, decisions taken, and changes implemented. Therefore, for everything to work in compliance with specifications, the instrumentation must include hardware-management and monitoring mechanisms for both hardware and software. These mechanisms should check system status by reading sensors, manage events, update inventory databases of hardware components in use and under maintenance, store collected information, update firmware and installed software modules, and configure and handle alarms to detect possible system failures and prevent emergency scenarios. The goal is to ensure high availability of the system and provide safe operation, experiment security, and data validation for the fusion experiment. This work contributes to the joint effort of the IPFN control and data acquisition group to develop a hardware-management and monitoring application for control and data acquisition instrumentation especially designed for large-scale tokamaks like ITER.
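    The threshold-and-alarm handling described above can be sketched generically. The sensor names and limits here are invented, and real AdvancedTCA hardware management relies on IPMI sensors and event logs rather than this toy loop; the sketch only shows the read-compare-alarm pattern.

```python
# Hypothetical sensor limits: (low, high) acceptable ranges.
LIMITS = {"coolant_temp_C": (10.0, 60.0), "psu_voltage_V": (11.5, 12.5)}

def check_sensors(readings):
    """Return a list of alarm strings for out-of-range readings."""
    alarms = []
    for name, value in readings.items():
        lo, hi = LIMITS[name]
        if not (lo <= value <= hi):
            alarms.append(f"ALARM {name}={value} outside [{lo}, {hi}]")
    return alarms

# One reading is over-temperature, one is within range.
print(check_sensors({"coolant_temp_C": 72.0, "psu_voltage_V": 12.1}))
```

    A production system would additionally log the event, update the inventory/maintenance database, and escalate to the operator, as the record describes.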

  11. Experiences with Text Mining Large Collections of Unstructured Systems Development Artifacts at JPL

    NASA Technical Reports Server (NTRS)

    Port, Dan; Nikora, Allen; Hihn, Jairus; Huang, LiGuo

    2011-01-01

    Often repositories of systems engineering artifacts at NASA's Jet Propulsion Laboratory (JPL) are so large and poorly structured that they have outgrown our capability to effectively process their contents manually to extract useful information. Sophisticated text-mining methods and tools seem a quick, low-effort approach to automating our limited manual efforts. Our experiences exploring such methods in three areas, historical risk analysis, defect identification based on requirements analysis, and over-time analysis of system anomalies at JPL, have shown that obtaining useful results requires substantial unanticipated effort, from preprocessing the data to transforming the output for practical applications. We have not observed any quick 'wins' or realized benefit from short-term effort avoidance through automation in this area. Surprisingly, we have realized a number of unexpected long-term benefits from the process of applying text mining to our repositories. This paper elaborates some of these benefits and the important lessons learned from preparing and applying text mining to large unstructured system artifacts at JPL, aiming to benefit future text-mining applications in similar problem domains and, we hope, to extend to broader areas of application.
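    As a rough illustration of the kind of machinery involved, a minimal TF-IDF similarity between artifact descriptions can be written in pure Python. The toy documents and the whitespace tokenization are invented; the record's point is precisely that real repositories need far more preparation than this.

```python
import math
from collections import Counter

# Invented toy "artifact" texts standing in for repository entries.
docs = ["valve anomaly during thermal vacuum test",
        "thermal anomaly traced to valve heater",
        "software requirements review action items"]

tokenized = [d.split() for d in docs]
df = Counter(w for toks in tokenized for w in set(toks))
idf = {w: math.log(len(docs) / c) for w, c in df.items()}

def tfidf(tokens):
    tf = Counter(tokens)
    return {w: tf[w] * idf[w] for w in tf}

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u if w in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

vecs = [tfidf(t) for t in tokenized]
# The two anomaly reports score as similar; the unrelated review does not.
print(cosine(vecs[0], vecs[1]), cosine(vecs[0], vecs[2]))
```

    Even this toy shows where the real effort goes: tokenization, domain stopwords, acronym handling, and interpreting the scores all sit outside the core algorithm.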

  12. A Contamination-Free Ultrahigh Precision Formation Flying Method for Micro-, Nano-, and Pico-Satellites with Nanometer Accuracy

    NASA Astrophysics Data System (ADS)

    Bae, Young K.

    2006-01-01

    Formation flying of clusters of micro-, nano- and pico-satellites has been recognized to be more affordable, robust, and versatile than building a large monolithic satellite for implementing next-generation space missions requiring large apertures or large sample-collection areas and sophisticated earth imaging/monitoring. We propose a propellant-free, thus contamination-free, method that enables ultrahigh-precision satellite formation flying with intersatellite distance accuracy of nm (10^-9 m) at maximum estimated distances on the order of tens of km. The method is based on ultrahigh-precision CW intracavity photon thrusters and tethers. The pushing-out force of the intracavity photon thruster and the pulling-in force of the tether tension between satellites form the basic force structure to stabilize crystalline-like structures of satellites and/or spacecraft with a relative distance accuracy better than nm. The thrust of the photons can be amplified by up to tens of thousands of times by bouncing them between two mirrors located separately on pairing satellites. For example, a 10 W photon thruster, suitable for micro-satellite applications, is theoretically capable of providing thrusts up to the mN level, and its weight and power consumption are estimated to be several kilograms and tens of watts, respectively. The dual use of the photon thruster as a precision laser source for the interferometric ranging system further simplifies the system architecture and minimizes weight and power consumption. The present method does not require propellant, thus providing significant propulsion-system mass savings, and is free from propellant exhaust contamination, making it ideal for missions that require large apertures composed of highly sensitive sensors. The system can be readily scaled down for nano- and pico-satellite applications.
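    The quoted figures can be checked with the photon-thrust relation F = 2PN/c: one reflection of P watts imparts 2P/c of force, and N bounces between the paired mirrors multiply it. The bounce count below is an assumed value within the record's "tens of thousands".

```python
# Back-of-envelope check of the amplified photon-thrust figures above.
c = 2.998e8                  # speed of light, m/s
P = 10.0                     # thruster optical power, W
N = 15_000                   # assumed bounce count ("tens of thousands")

F = 2 * P * N / c            # total thrust, newtons
print(F)                     # ~1e-3 N, i.e. about a millinewton
```

    Without amplification the same 10 W beam would deliver only 2P/c, about 67 nN, which is why the intracavity bouncing is essential to the mN-level claim.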

  13. 75 FR 70604 - Wireless E911 Location Accuracy Requirements

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-18

    ... carriers are unable to recover the substantial cost of constructing a large number of additional cell sites... characteristics, cell site density, overall system technology requirements, etc.) while, in either case, ensuring... the satellites and the handset. The more extensive the tree cover, the greater the difficulty the...

  14. Advanced Hybrid Cooling Loop Technology for High Performance Thermal Management

    DTIC Science & Technology

    2006-06-01

    and Chung, 2003; Estes and Mudawar, 1995]. Because of the pumping pressure and flow rate requirements, such pumped systems require large pumping and...United States, April 24-25, 2003. 8. Estes, K. and Mudawar, I., "Comparison of Two-Phase Electronic Cooling Using Free Jets and Sprays", Journal of

  15. Capability 9.3 Assembly and Deployment

    NASA Technical Reports Server (NTRS)

    Dorsey, John

    2005-01-01

    Large space systems are required for a range of operational, commercial, and scientific mission objectives; however, current launch vehicle capacities substantially limit the size of space systems (on-orbit or planetary). Assembly and deployment is the process of constructing a spacecraft or system from modules, which may in turn have been constructed from sub-modules in a hierarchical fashion. In-situ assembly of space exploration vehicles and systems will require a broad range of operational capabilities, including component transfer and storage, fluid handling, construction and assembly, and test and verification. Efficient execution of these functions will require supporting infrastructure that can: receive, store, and protect (materials, components, etc.); hold and secure; position, align, and control; deploy; connect/disconnect; construct; join; assemble/disassemble; dock/undock; and mate/demate.

  16. Feasibility study for the application of the large format camera as a payload for the Orbiter program

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The large format camera (LFC), designed as a 30 cm focal length cartographic camera system that employs forward-motion compensation to achieve the full image resolution provided by its 80-degree field-angle lens, is described. The feasibility of deploying the current LFC design in the Orbiter program as the Orbiter Camera Payload System was assessed, and the changes necessary to meet such a requirement are discussed. The current design and any proposed design changes were evaluated relative to possible future deployment of the LFC on a free-flyer vehicle or in a WB-57F. Preliminary mission interface requirements for the LFC are given.

  17. Solar Cell and Array Technology Development for NASA Solar Electric Propulsion Missions

    NASA Technical Reports Server (NTRS)

    Piszczor, Michael; McNatt, Jeremiah; Mercer, Carolyn; Kerslake, Tom; Pappa, Richard

    2012-01-01

    NASA is currently developing advanced solar cell and solar array technologies to support future exploration activities. These advanced photovoltaic technology development efforts are needed to enable very large (multi-hundred kilowatt) power systems that must be compatible with solar electric propulsion (SEP) missions. The technology being developed must address a wide variety of requirements and cover the necessary advances in solar cell, blanket integration, and large solar array structures that are needed for this class of missions. This paper will summarize NASA's plans for high power SEP missions, initial mission studies and power system requirements, plans for advanced photovoltaic technology development, and the status of specific cell and array technology development and testing that have already been conducted.

  18. TRANSITION FROM KINETIC TO MHD BEHAVIOR IN A COLLISIONLESS PLASMA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parashar, Tulasi N.; Matthaeus, William H.; Shay, Michael A.

    The study of kinetic effects in heliospheric plasmas requires representation of dynamics at sub-proton scales, but in most cases the system is driven by magnetohydrodynamic (MHD) activity at larger scales. The latter requirement challenges available computational resources, which raises the question of how large such a system must be to exhibit MHD traits at large scales while kinetic behavior is accurately represented at small scales. Here we study this implied transition from kinetic to MHD-like behavior using particle-in-cell (PIC) simulations, initialized using an Orszag–Tang Vortex. The PIC code treats protons, as well as electrons, kinetically, and we address the question of interest by examining several different indicators of MHD-like behavior.

  19. Range-Gated Metrology: An Ultra-Compact Sensor for Dimensional Stabilization

    NASA Technical Reports Server (NTRS)

    Lay, Oliver P.; Dubovitsky, Serge; Shaddock, Daniel A.; Ware, Brent; Woodruff, Christopher S.

    2008-01-01

    Point-to-point laser metrology systems can be used to stabilize large structures at the nanometer levels required for precision optical systems. Existing sensors are large and intrusive, however, with optical heads that consist of several optical elements and require multiple optical fiber connections. The use of point-to-point laser metrology has therefore been limited to applications where only a few gauges are needed and there is sufficient space to accommodate them. Range-Gated Metrology is a signal processing technique that preserves nanometer-level or better performance while enabling: (1) a greatly simplified optical head - a single fiber optic collimator - that can be made very compact, and (2) a single optical fiber connection that is readily multiplexed. This combination of features means that it will be straightforward and cost-effective to embed tens or hundreds of compact metrology gauges to stabilize a large structure. In this paper we describe the concept behind Range-Gated Metrology, demonstrate the performance in a laboratory environment, and give examples of how such a sensor system might be deployed.

  20. More efficient irrigation may compensate for increases in irrigation water requirements due to climate change in the Mediterranean area

    NASA Astrophysics Data System (ADS)

    Fader, Marianela; Shi, Sinan; von Bloh, Werner; Bondeau, Alberte; Cramer, Wolfgang

    2017-04-01

    Irrigation in the Mediterranean is of vital importance for food security, employment and economic development. We will present a recently published study1 that estimates the current level of water demand for Mediterranean agriculture and simulates the potential impacts of climate change, population growth and transitions to water-saving irrigation and conveyance technologies. The results indicate that, at present, the Mediterranean region could save 35% of water by implementing more efficient irrigation and conveyance systems, with large differences in the saving potentials across countries. Under climate change, more efficient irrigation is of vital importance for counteracting increases in irrigation water requirements. The Mediterranean area as a whole might face an increase in gross irrigation requirements of between 4% and 18% from climate change alone by the end of the century if irrigation systems and conveyance are not improved. Population growth increases these numbers to 22% and 74%, respectively, affecting mainly the Southern and Eastern Mediterranean. However, improved irrigation technologies and conveyance systems have large water saving potentials, especially in the Eastern Mediterranean. Both the Eastern and the Southern Mediterranean would need around 35% more water than today if they could afford some degree of modernization of irrigation and conveyance systems and benefit from the CO2-fertilization effect. However, in some scenarios water scarcity may constrain the supply of the irrigation water needed in future in Algeria, Libya, Israel, Jordan, Lebanon, Syria, Serbia, Morocco, Tunisia and Spain. In this study, vegetation growth, phenology, agricultural production and irrigation water requirements and withdrawal were simulated with the process-based ecohydrological and agro-ecosystem model LPJmL ("Lund-Potsdam-Jena managed Land") after a large development2 that comprised the improved representation of Mediterranean crops.

  1. Creating a learning organization to help meet the needs of multihospital health systems.

    PubMed

    Ward, Angela; Berensen, Nannette; Daniels, Rowell

    2018-04-01

    The considerations that leaders of multihospital health systems must take into account in developing and implementing initiatives to build and maintain an exceptional pharmacy workforce are described. Significant changes that require constant individual and organizational learning are occurring throughout healthcare and within the profession of pharmacy. These considerations include understanding why it is important to have a succession plan and determining what types of education and training are important to support that plan. Other considerations include strategies for leveraging learners, dealing with a large geographic footprint, adjusting training opportunities to accommodate the ever-evolving demands on pharmacy staffs in terms of skill mix, and determining ways to either budget for or internally develop content for staff development. All of these methods are critically important to ensuring an optimized workforce. Especially for large health systems operating multiple sites across large distances, the use of technology-enabled solutions to provide effective delivery of programming to multiple sites is critical. Commonly used tools include live webinars, live "telepresence" programs, prerecorded programming that is available through an on-demand repository, and computer-based training modules. A learning management system is helpful to assign and document completion of educational requirements, especially those related to regulatory requirements (e.g., controlled substances management, sterile and nonsterile compounding, competency assessment). Creating and sustaining an environment where all pharmacy caregivers feel invested in and connected to ongoing learning is a powerful motivator for performance, engagement, and retention. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  2. Dynamic stability with the disturbance-free payload architecture as applied to the Large UV/Optical/Infrared (LUVOIR) Mission

    NASA Astrophysics Data System (ADS)

    Dewell, Larry D.; Tajdaran, Kiarash; Bell, Raymond M.; Liu, Kuo-Chia; Bolcar, Matthew R.; Sacks, Lia W.; Crooke, Julie A.; Blaurock, Carl

    2017-09-01

    The need for high payload dynamic stability and ultra-stable mechanical systems is an overarching technology need for large space telescopes such as the Large Ultraviolet / Optical / Infrared (LUVOIR) Surveyor. Wavefront error stability of less than 10 picometers RMS of uncorrected system WFE per wavefront control step represents a drastic performance improvement over current space-based telescopes being fielded. Previous studies of similar telescope architectures have shown that passive telescope isolation approaches are hard-pressed to meet dynamic stability requirements and usually involve complex actively-controlled elements and sophisticated metrology. To meet these challenging dynamic stability requirements, an isolation architecture that involves no mechanical contact between the telescope and the host spacecraft structure has the potential of delivering this needed performance improvement. One such architecture, previously developed by Lockheed Martin and called Disturbance Free Payload (DFP), is applied to and analyzed for LUVOIR. In a non-contact DFP architecture, the payload and spacecraft fly in close proximity and interact via non-contact actuators to allow precision payload pointing and isolation from spacecraft vibration. Because disturbance isolation is achieved through non-contact, vibration isolation down to zero frequency is possible, and the high-frequency structural dynamics of passive isolators are not introduced into the system. In this paper, the system-level analysis of a non-contact architecture is presented for LUVOIR, based on requirements that are directly traceable to its science objectives, including astrophysics and the direct imaging of habitable exoplanets. Aspects of the architecture and how they contribute to system performance are examined and tailored to the LUVOIR architecture and concept of operations.

  3. openBIS: a flexible framework for managing and analyzing complex data in biology research

    PubMed Central

    2011-01-01

    Background Modern data generation techniques used in distributed systems biology research projects often create datasets of enormous size and diversity. We argue that in order to overcome the challenge of managing those large quantitative datasets and maximise the biological information extracted from them, a sound information system is required. Ease of integration with data analysis pipelines and other computational tools is a key requirement for it. Results We have developed openBIS, an open source software framework for constructing user-friendly, scalable and powerful information systems for data and metadata acquired in biological experiments. openBIS enables users to collect, integrate, share, and publish data, and to connect to data processing pipelines. This framework can be extended and has been customized for different data types acquired by a range of technologies. Conclusions openBIS is currently being used by several SystemsX.ch and EU projects applying mass spectrometric measurements of metabolites and proteins, High Content Screening, or Next Generation Sequencing technologies. The attributes that make it interesting to a large research community involved in systems biology projects include versatility, simplicity in deployment, scalability to very large data, flexibility to handle any biological data type and extensibility to the needs of any research domain. PMID:22151573

  4. A measurement system for large, complex software programs

    NASA Technical Reports Server (NTRS)

    Rone, Kyle Y.; Olson, Kitty M.; Davis, Nathan E.

    1994-01-01

    This paper describes measurement systems required to forecast, measure, and control activities for large, complex software development and support programs. Initial software cost and quality analysis provides the foundation for meaningful management decisions as a project evolves. In modeling the cost and quality of software systems, the relationship between the functionality, quality, cost, and schedule of the product must be considered. This explicit relationship is dictated by the criticality of the software being developed. This balance between cost and quality is a viable software engineering trade-off throughout the life cycle. Therefore, the ability to accurately estimate the cost and quality of software systems is essential to providing reliable software on time and within budget. Software cost models relate the product error rate to the percent of the project labor that is required for independent verification and validation. The criticality of the software determines which cost model is used to estimate the labor required to develop the software. Software quality models yield an expected error discovery rate based on the software size, criticality, software development environment, and the level of competence of the project and developers with respect to the processes being employed.
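    The kind of cost/quality relationship described can be illustrated with a toy model. The functional form and every coefficient below are invented for the sketch, not the calibrated models of the paper:

```python
# Toy sketch: latent errors scale with size and criticality and shrink with
# process/developer maturity; IV&V labor share grows with criticality.
# All coefficients are illustrative assumptions.
def expected_errors(ksloc, criticality=1.0, base_rate=5.0, maturity=1.0):
    """Expected latent errors: base errors per KSLOC, scaled by criticality
    and divided by a maturity factor for the processes being employed."""
    return ksloc * base_rate * criticality / maturity

def ivv_labor_fraction(criticality):
    """Share of project labor for independent verification and validation,
    clamped to an assumed range of 5%-40%."""
    return min(0.40, max(0.05, 0.10 * criticality))
```

For example, under these assumed coefficients a 100 KSLOC project at nominal criticality carries 500 expected latent errors and a 10% IV&V labor share.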

  5. Supporting large scale applications on networks of workstations

    NASA Technical Reports Server (NTRS)

    Cooper, Robert; Birman, Kenneth P.

    1989-01-01

    Distributed applications on networks of workstations are an increasingly common way to satisfy computing needs. However, existing mechanisms for distributed programming exhibit poor performance and reliability as application size increases. Extension of the ISIS distributed programming system to support large scale distributed applications by providing hierarchical process groups is discussed. Incorporation of hierarchy in the program structure and exploitation of this to limit the communication and storage required in any one component of the distributed system is examined.

  6. Energy Advantages for Green Schools

    ERIC Educational Resources Information Center

    Griffin, J. Tim

    2012-01-01

    Because of many advantages associated with central utility systems, school campuses, from large universities to elementary schools, have used district energy for decades. District energy facilities enable thermal and electric utilities to be generated with greater efficiency and higher system reliability, while requiring fewer maintenance and…

  7. Electro-mechanical probe positioning system for large volume plasma device

    NASA Astrophysics Data System (ADS)

    Sanyasi, A. K.; Sugandhi, R.; Srivastava, P. K.; Srivastav, Prabhakar; Awasthi, L. M.

    2018-05-01

    An automated electro-mechanical system for the positioning of plasma diagnostics has been designed and implemented in a Large Volume Plasma Device (LVPD). The system consists of 12 electro-mechanical assemblies, which are orchestrated using the Modbus communication protocol on 4-wire RS485 communications to meet the experimental requirements. Each assembly has a lead screw-based mechanical structure, Wilson feed-through-based vacuum interface, bipolar stepper motor, micro-controller-based stepper drive, and optical encoder for online positioning correction of probes. The novelty of the system lies in the orchestration of multiple drives on a single interface, fabrication and installation of the system for a large experimental device like the LVPD, in-house developed software, and adopted architectural practices. The paper discusses the design, description of hardware and software interfaces, and performance results in LVPD.
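    The lead-screw positioning arithmetic behind such an assembly can be sketched as follows; the pitch, motor resolution, and microstepping values are assumptions for illustration, not the LVPD hardware specifications:

```python
# Convert a target probe travel (mm) into stepper microsteps for a
# lead-screw drive, and compute the online encoder correction.
# pitch_mm, steps_per_rev, and microsteps are assumed values.
def mm_to_steps(target_mm, pitch_mm=2.0, steps_per_rev=200, microsteps=16):
    """One screw revolution advances the carriage by one pitch, and one
    revolution is steps_per_rev * microsteps microsteps."""
    steps_per_mm = steps_per_rev * microsteps / pitch_mm
    return round(target_mm * steps_per_mm)

def encoder_correction(commanded_steps, encoder_steps):
    """Residual steps to issue after comparing the optical-encoder
    reading with the commanded position."""
    return commanded_steps - encoder_steps
```

With these assumed values a 10 mm move commands 16000 microsteps, and an encoder reading of 15980 would trigger a 20-step correction.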

  8. Successful Starshade Petal Deployment Tolerance Verification in Support of NASA's Technology Development for Exoplanet Missions

    NASA Technical Reports Server (NTRS)

    Webb, D.; Kasdin, N. J.; Lisman, D.; Shaklan, S.; Thomson, M.; Cady, E.; Marks, G. W.; Lo, A.

    2014-01-01

    A Starshade is a sunflower-shaped satellite with a large inner disk structure surrounded by petals that flies in formation with a space-borne telescope, creating a deep shadow around the telescope over a broad spectral band to permit nearby exoplanets to be viewed. Removing extraneous starlight before it enters the observatory optics greatly loosens the tolerances on the telescope and instrument that comprise the optical system, but the nature of the Starshade dictates a large deployable structure capable of deploying to a very precise shape. These shape requirements break down into key mechanical requirements, which include the rigid-body position and orientation of each of the petals that ring the periphery of the Starshade. To verify our capability to meet these requirements, we modified an existing flight-like Astromesh reflector, provided by Northrop Grumman, as the base ring to which the petals attach. The integrated system, including 4 of the 30 flight-like subscale petals, truss, connecting spokes, and central hub, was deployed tens of times in a flight-like manner using a gravity compensation system. After each deployment, discrete points in prescribed locations covering the petals and truss were measured using a highly accurate laser tracker system. These measurements were then compared against the mechanical requirements, and the as-measured data show deployment accuracy well within our milestone requirements, resulting in a contrast ratio consistent with exoplanet detection and characterization.

  9. Successful Starshade petal deployment tolerance verification in support of NASA's technology development for exoplanet missions

    NASA Astrophysics Data System (ADS)

    Webb, D.; Kasdin, N. J.; Lisman, D.; Shaklan, S.; Thomson, M.; Cady, E.; Marks, G. W.; Lo, A.

    2014-07-01

    A Starshade is a sunflower-shaped satellite with a large inner disk structure surrounded by petals. A Starshade flies in formation with a space-borne telescope, creating a deep shadow around the telescope over a broad spectral band to permit nearby exoplanets to be viewed. Removing extraneous starlight before it enters the observatory optics greatly loosens the tolerances on the telescope and instrument that comprise the optical system, but the nature of the Starshade dictates a large deployable structure capable of deploying to a very precise shape. These shape requirements break down into key mechanical requirements, which include the rigid-body position and orientation of each of the petals that ring the periphery of the Starshade. To verify our capability to meet these requirements, we modified an existing flight-like Astromesh reflector, provided by Northrop Grumman, as the base ring to which the petals attach. The integrated system, including 4 of the 30 flight-like subscale petals, truss, connecting spokes, and central hub, was deployed tens of times in a flight-like manner using a gravity compensation system. After each deployment, discrete points in prescribed locations covering the petals and truss were measured using a highly accurate laser tracker system. These measurements were then compared against the mechanical requirements, and the as-measured data show deployment accuracy well within our milestone requirements, resulting in a contrast ratio consistent with exoplanet detection and characterization.

  10. Multiple damage identification on a wind turbine blade using a structural neural system

    NASA Astrophysics Data System (ADS)

    Kirikera, Goutham R.; Schulz, Mark J.; Sundaresan, Mannur J.

    2007-04-01

    A large number of sensors are required to perform real-time structural health monitoring (SHM) to detect acoustic emissions (AE) produced by damage growth on large, complicated structures. This requires a large number of high-sampling-rate data acquisition channels to analyze high-frequency signals. To overcome the cost and complexity of such a large data acquisition system, a structural neural system (SNS) was developed. The SNS reduces the required number of data acquisition channels and predicts the location of damage within a sensor grid. The sensor grid uses interconnected sensor nodes to form continuous sensors. The combination of continuous sensors and the biomimetic parallel processing of the SNS tremendously reduces the complexity of SHM. A wave simulation algorithm (WSA) was developed to understand flexural wave propagation in composite structures and to utilize the code for developing the SNS. Simulation of AE responses in a plate and comparison with experimental results are shown in the paper. The SNS was recently tested by a team of researchers from the University of Cincinnati and North Carolina A&T State University during a quasi-static proof test of a 9 meter long wind turbine blade at the National Renewable Energy Laboratory (NREL) test facility in Golden, Colorado. Twelve piezoelectric sensor nodes were used to form four continuous sensors to monitor the condition of the blade during the test. The four continuous sensors are used as inputs to the SNS. There are only two analog output channels of the SNS, and these signals are digitized and analyzed in a computer to detect damage. In the test of the wind turbine blade, multiple damages were identified and later verified by sectioning of the blade. The results of damage identification using the SNS during this proof test will be shown in this paper. Overall, the SNS is very sensitive and can detect damage on complex structures with ribs, joints, and different materials, and the system is relatively inexpensive and simple to implement on large structures.

  11. On the Large-Scaling Issues of Cloud-based Applications for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Hua, H.

    2016-12-01

    Next generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and data throughput rates are an order of magnitude larger than those of present-day missions. Existing missions, such as OCO-2, may also require rapid turnaround for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the processing needs. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Experience has shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments: at large cloud scales, we need to deal with scaling and cost issues. We present our experiences deploying multiple instances of our hybrid-cloud computing science data system (HySDS) to support large-scale processing of Earth Science data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer 75%-90% cost savings but with an unpredictable computing environment driven by market forces.
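    The quoted spot-market economics reduce to simple arithmetic; the prices below are illustrative, since actual spot prices fluctuate with market demand:

```python
# Fractional savings of spot pricing relative to on-demand pricing.
# The example prices are illustrative, not actual AWS rates.
def savings_fraction(on_demand_price, spot_price):
    return 1.0 - spot_price / on_demand_price

# An instance at $1.00/hr on demand and $0.15/hr on the spot market
# sits within the 75%-90% savings band cited above.
saving = savings_fraction(1.00, 0.15)
```

The trade-off is that spot capacity can be reclaimed at any time, so a science data system must checkpoint or re-queue interrupted jobs.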

  12. Thermal Cycle Testing of the Powersphere Engineering Development Unit

    NASA Technical Reports Server (NTRS)

    Curtis, Henry; Piszczor, Mike; Kerslake, Thomas W.; Peterson, Todd T.; Scheiman, David A.; Simburger, Edward J.; Giants, Thomas W.; Matsumoto, James H.; Garcia, Alexander; Liu, Simon H.; et al.

    2007-01-01

    During the past three years the team of The Aerospace Corporation, Lockheed Martin Space Systems, NASA Glenn Research Center, and ILC Dover LP has been developing a multifunctional inflatable structure for the PowerSphere concept under contract with NASA (NAS3-01115). The PowerSphere attitude-insensitive solar power-generating microsatellite, which could be used for many different space and Earth science purposes, is ready for further refinement and flight demonstration. The development of micro- and nanosatellites requires the energy collection system, namely the solar array, to be lightweight and small. The limited surface area of these satellites precludes body mounting the solar array system for the required power generation. The use of large traditional solar arrays demands large satellite volume and weight and also requires a pointing apparatus. The current PowerSphere concept (geodetic sphere), which was envisioned in the late 1990s by Mr. Simburger of The Aerospace Corporation, has been systematically developed in the past several years.1-7 The PowerSphere system is a low mass and low volume system suited for micro- and nanosatellites. It is a lightweight solar array that is spherical in shape and does not require a pointing apparatus. The recently completed project culminated during the third year with the manufacturing of the PowerSphere Engineering Development Unit (EDU). One hemisphere of the EDU system was tested for packing and deployment and was subsequently rigidized. The other hemisphere was packed and stored for future testing in an uncured state. Both cured and uncured hemisphere components were delivered to NASA Glenn Research Center for thermal cycle testing and long-term storage, respectively. This paper discusses the design and thermal cycle testing of the PowerSphere EDU.

  13. First steps to lunar manufacturing: Results of the 1988 Space Studies Institute Lunar Systems Workshop

    NASA Technical Reports Server (NTRS)

    Maryniak, Gregg E.

    1992-01-01

    Prior studies by NASA and the Space Studies Institute have looked at the infrastructure required for the construction of solar power satellites (SPS) and other valuable large space systems from lunar materials. This paper discusses the results of a Lunar Systems Workshop conducted in January 1988. The workshop identified components of the infrastructure that could be implemented in the near future to create a revenue stream. These revenues could then be used to 'bootstrap' the additional elements required to begin the commercial use of nonterrestrial materials.

  14. Design and implementation of distributed multimedia surveillance system based on object-oriented middleware

    NASA Astrophysics Data System (ADS)

    Cao, Xuesong; Jiang, Ling; Hu, Ruimin

    2006-10-01

    Currently, applications of surveillance systems have become increasingly widespread, but few surveillance platforms can meet the requirements of large-scale, cross-regional, and flexible surveillance operations. In this paper, we present a distributed surveillance system platform to improve the safety and security of society. The system is built on an object-oriented middleware called the Internet Communications Engine (ICE). This middleware helps our platform integrate many of the surveillance resources of society and accommodate a diverse range of surveillance industry requirements. In the following sections, we describe the design concepts of the system in detail and introduce the traits of ICE.

  15. The architecture of the High Performance Storage System (HPSS)

    NASA Technical Reports Server (NTRS)

    Teaff, Danny; Watson, Dick; Coyne, Bob

    1994-01-01

    The rapid growth in the size of datasets has caused a serious imbalance in I/O and storage system performance and functionality relative to application requirements and the capabilities of other system components. The High Performance Storage System (HPSS) is a scalable, next-generation storage system that will meet the functionality and performance requirements of large-scale scientific and commercial computing environments. Our goal is to improve the performance and capacity of storage by two orders of magnitude or more over what is available in the general or mass marketplace today. We are also providing corresponding improvements in architecture and functionality. This paper describes the architecture and functionality of HPSS.

  16. Systems engineering for very large systems

    NASA Technical Reports Server (NTRS)

    Lewkowicz, Paul E.

    1993-01-01

    Very large integrated systems have always posed special problems for engineers. Whether they are power generation systems, computer networks or space vehicles, whenever there are multiple interfaces, complex technologies or just demanding customers, the challenges are unique. 'Systems engineering' has evolved as a discipline in order to meet these challenges by providing a structured, top-down design and development methodology for the engineer. This paper attempts to define the general class of problems requiring the complete systems engineering treatment and to show how systems engineering can be utilized to improve customer satisfaction and profitability. Specifically, this work will focus on a design methodology for the largest of systems, not necessarily in terms of physical size, but in terms of complexity and interconnectivity.

  17. Systems engineering for very large systems

    NASA Astrophysics Data System (ADS)

    Lewkowicz, Paul E.

    Very large integrated systems have always posed special problems for engineers. Whether they are power generation systems, computer networks or space vehicles, whenever there are multiple interfaces, complex technologies or just demanding customers, the challenges are unique. 'Systems engineering' has evolved as a discipline in order to meet these challenges by providing a structured, top-down design and development methodology for the engineer. This paper attempts to define the general class of problems requiring the complete systems engineering treatment and to show how systems engineering can be utilized to improve customer satisfaction and profitability. Specifically, this work will focus on a design methodology for the largest of systems, not necessarily in terms of physical size, but in terms of complexity and interconnectivity.

  18. Factorization in large-scale many-body calculations

    DOE PAGES

    Johnson, Calvin W.; Ormand, W. Erich; Krastev, Plamen G.

    2013-08-07

    One approach for solving interacting many-fermion systems is the configuration-interaction method, also sometimes called the interacting shell model, where one finds eigenvalues of the Hamiltonian in a many-body basis of Slater determinants (antisymmetrized products of single-particle wavefunctions). The resulting Hamiltonian matrix is typically very sparse, but for large systems the nonzero matrix elements can nonetheless require terabytes or more of storage. An alternate algorithm, applicable to a broad class of systems with symmetry, in our case rotational invariance, is to exactly factorize both the basis and the interaction using additive/multiplicative quantum numbers; such an algorithm recreates the many-body matrix elements on the fly and can reduce the storage requirements by an order of magnitude or more. Here, we discuss factorization in general and introduce a novel, generalized factorization method, essentially a 'double-factorization' which speeds up basis generation and set-up of required arrays. Although we emphasize techniques, we also place factorization in the context of a specific (unpublished) configuration-interaction code, BIGSTICK, which runs both on serial and parallel machines, and discuss the savings in memory due to factorization.
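    The additive-quantum-number factorization can be illustrated with a toy example; the sub-basis quantum numbers below are invented, and a production code such as BIGSTICK factorizes far larger bases:

```python
from collections import Counter
from itertools import product

# Toy sketch: group each species' sub-basis by an additive quantum number
# (here a total Jz value "M"), then count many-body states with a target
# total M by multiplying group sizes instead of enumerating the product.
proton_M = [-1, 0, 0, 1]       # M values of proton Slater determinants (invented)
neutron_M = [-1, -1, 0, 1, 1]  # M values of neutron Slater determinants (invented)
target_M = 0

pc, nc = Counter(proton_M), Counter(neutron_M)
factorized = sum(pc[mp] * nc[target_M - mp] for mp in pc)

# Brute-force check over the full Cartesian product of sub-bases.
brute = sum(1 for mp, mn in product(proton_M, neutron_M) if mp + mn == target_M)
assert factorized == brute
```

The factorized count touches only the handful of (M_p, M_n) group pairs, which is the source of the storage and setup savings the paper describes.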

  19. Integrated Library Systems in Canadian Public, Academic and Special Libraries: The Sixth Annual Survey.

    ERIC Educational Resources Information Center

    Merilees, Bobbie

    1992-01-01

    Reports results of a survey of vendors of large and microcomputer-based integrated library systems. Data presented on Canadian installations include total systems installed, comparisons with earlier years, market segments, and installations by type of library (excluding school). International sales and automation requirements for music are…

  20. Computer software to estimate timber harvesting system production, cost, and revenue

    Treesearch

    Dr. John E. Baumgras; Dr. Chris B. LeDoux

    1992-01-01

    Large variations in timber harvesting cost and revenue can result from the differences between harvesting systems, the variable attributes of harvesting sites and timber stands, or changing product markets. Consequently, system and site specific estimates of production rates and costs are required to improve estimates of harvesting revenue. This paper describes...

  1. Solar heating for a restaurant--North Little Rock, Arkansas

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Hot water consumption of large building affects solar-energy system design. Continual demand for hot water at restaurant makes storage less important than at other sites. Storage capacity of system installed in December 1979 equals estimated daily hot-water requirement. Report describes equipment specifications and modifications to existing building heating and hot water systems.

  2. 77 FR 69694 - Determination of Foreign Exchange Swaps and Foreign Exchange Forwards Under the Commodity...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-20

    ... trading and clearing of foreign exchange swaps and foreign exchange forwards would create systemic risk... clearing and exchange trading requirements on the foreign exchange market would increase systemic risk by... argue that the exemption would create a large regulatory loophole that could exacerbate systemic risk...

  3. Organizational Agility and Complex Enterprise System Innovations: A Mixed Methods Study of the Effects of Enterprise Systems on Organizational Agility

    ERIC Educational Resources Information Center

    Kharabe, Amol T.

    2012-01-01

    Over the last two decades, firms have operated in "increasingly" accelerated "high-velocity" dynamic markets, which require them to become "agile." During the same time frame, firms have increasingly deployed complex enterprise systems--large-scale packaged software "innovations" that integrate and automate…

  4. Application of Small-Scale Systems: Evaluation of Alternatives

    Treesearch

    John Wilhoit; Robert Rummer

    1999-01-01

    Large-scale mechanized systems are not well-suited for harvesting smaller tracts of privately owned forest land. New alternative small-scale harvesting systems are needed which utilize mechanized felling, have a low capital investment requirement, are small in physical size, and are based primarily on adaptations of current harvesting technology. This paper presents...

  5. Mental Health Workforce Change through Social Work Education: A California Case Study

    ERIC Educational Resources Information Center

    Foster, Gwen; Morris, Meghan Brenna; Sirojudin, Sirojudin

    2013-01-01

    The 2004 California Mental Health Services Act requires large-scale system change in the public mental health system through a shift to recovery-oriented services for diverse populations. This article describes an innovative strategy for workforce recruitment and retention to create and sustain these systemic changes. The California Social Work…

  6. A hypertext system that learns from user feedback

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie

    1994-01-01

    Retrieving specific information from large amounts of documentation is not an easy task. It could be facilitated if information relevant in the current problem solving context could be automatically supplied to the user. As a first step towards this goal, we have developed an intelligent hypertext system called CID (Computer Integrated Documentation). Besides providing an hypertext interface for browsing large documents, the CID system automatically acquires and reuses the context in which previous searches were appropriate. This mechanism utilizes on-line user information requirements and relevance feedback either to reinforce current indexing in case of success or to generate new knowledge in case of failure. Thus, the user continually augments and refines the intelligence of the retrieval system. This allows the CID system to provide helpful responses, based on previous usage of the documentation, and to improve its performance over time. We successfully tested the CID system with users of the Space Station Freedom requirements documents. We are currently extending CID to other application domains (Space Shuttle operations documents, airplane maintenance manuals, and on-line training). We are also exploring the potential commercialization of this technique.
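    The reinforcement mechanism described can be sketched as a weighted index; the class, names, and update rule below are illustrative assumptions, not CID's actual implementation:

```python
from collections import defaultdict

# Sketch: each (context, document) association carries a weight that is
# reinforced on a successful retrieval and weakened on a reported failure,
# so rankings adapt to accumulated user feedback.
class FeedbackIndex:
    def __init__(self, step=0.1):
        self.weights = defaultdict(float)
        self.step = step

    def feedback(self, context, doc, success):
        delta = self.step if success else -self.step
        self.weights[(context, doc)] += delta

    def rank(self, context, docs):
        return sorted(docs, key=lambda d: self.weights[(context, d)], reverse=True)

idx = FeedbackIndex()
idx.feedback("power-system", "doc_A", success=True)   # user found doc_A helpful
idx.feedback("power-system", "doc_B", success=False)  # doc_B was not relevant
ranked = idx.rank("power-system", ["doc_B", "doc_A"])
```

After the two feedback events, doc_A outranks doc_B in the "power-system" context, mirroring how CID reinforces indexing on success and revises it on failure.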

  7. Control considerations for high frequency, resonant, power processing equipment used in large systems

    NASA Technical Reports Server (NTRS)

    Mildice, J. W.; Schreiner, K. E.; Wolff, F.

    1987-01-01

    This paper addresses a class of resonant power processing equipment designed for use in an integrated high-frequency (20 kHz) utility power system for large, multi-user spacecraft and other aerospace vehicles. It describes a hardware approach that has been the basis for the parametric and physical data used to justify the selection of high-frequency ac as the PMAD baseline for the space station. This paper is part of a larger effort undertaken by NASA and General Dynamics to ensure that all potential space station contractors and other aerospace power system designers understand and can comfortably use this technology, which is now widely used in the commercial sector. We examine control requirements, stability, and operational modes, and their hardware impacts, from an integrated system point of view. The current space station PMAD system provides the overall requirements model for developing an understanding of the performance of this type of system with regard to: (1) regulation; (2) power bus stability and voltage control; (3) source impedance; (4) transient response; (5) power factor effects; and (6) limits and overloads.

  8. Seismic isolation device having charging function by a transducer

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Takashi; Miura, Nanako; Takahashi, Masaki

    2016-04-01

    In recent years, many base-isolated structures have been adopted in seismic design because they significantly suppress the vibration response to large earthquakes. To achieve greater safety, semi-active or active vibration control systems are installed in such structures as earthquake countermeasures. Semi-active and active vibration control systems are more effective than passive systems at reducing vibration during large earthquakes. However, semi-active and active control systems cannot operate as required when the external power supply is cut off. To solve this energy-supply problem, we propose a self-powered active seismic isolation floor that realizes active control using regenerated vibration energy. This device does not require external energy to produce its control force. The purpose of this study is to propose a seismic isolation device with a charging function and to optimize the control system and passive elements, such as spring and damping coefficients, using a genetic algorithm. As a result, the optimized model shows better performance than the previous model in terms of both vibration reduction and electric power regeneration. At the end of this paper, an experimental specimen of the proposed isolation device is shown.
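    The genetic-algorithm optimization of spring and damping coefficients mentioned above can be illustrated with a minimal sketch. The single-degree-of-freedom model, pulse excitation, parameter ranges, and GA operators below are illustrative assumptions, not the authors' actual formulation.

```python
import random

def peak_response(k, c, m=1000.0, dt=0.005, steps=800):
    """Peak absolute displacement of a 1-DOF isolated mass driven by a
    short base-acceleration pulse (simple explicit-Euler integration)."""
    x, v, peak = 0.0, 0.0, 0.0
    for i in range(steps):
        a_g = 3.0 if i * dt < 0.5 else 0.0       # 0.5 s pulse at 3 m/s^2
        a = (-c * v - k * x) / m - a_g
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

def genetic_search(pop_size=30, generations=40, seed=1):
    """Evolve (stiffness, damping) pairs to minimize peak displacement."""
    rng = random.Random(seed)
    pop = [(rng.uniform(1e3, 1e5), rng.uniform(1e2, 1e4)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda kc: peak_response(*kc))
        elite = pop[:pop_size // 2]              # truncation selection keeps the best half
        children = []
        while len(elite) + len(children) < pop_size:
            (k1, c1), (k2, c2) = rng.sample(elite, 2)
            k = 0.5 * (k1 + k2) * rng.uniform(0.9, 1.1)   # crossover + mutation
            c = 0.5 * (c1 + c2) * rng.uniform(0.9, 1.1)
            children.append((k, c))
        pop = elite + children
    return min(pop, key=lambda kc: peak_response(*kc))

k_best, c_best = genetic_search()
print(f"best stiffness {k_best:.0f} N/m, damping {c_best:.0f} N*s/m")
```

    Because the elite half is carried into every generation, the best candidate is never lost, so the final answer is at least as good as the best initial random guess.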

  9. The systems engineering overview and process (from the Systems Engineering Management Guide, 1990)

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The past several decades have seen the rise of large, highly interactive systems that are on the forward edge of technology. As a result of this growth and the increased usage of digital systems (computers and software), the concept of systems engineering has gained increasing attention. Some of this attention is no doubt due to large program failures which possibly could have been avoided, or at least mitigated, through the use of systems engineering principles. The complexity of modern-day weapon systems requires conscious application of systems engineering concepts to ensure producible, operable, and supportable systems that satisfy mission requirements. Although many authors have traced the roots of systems engineering to earlier dates, the initial formalization of the systems engineering process for military development began to surface in the mid-1950s on the ballistic missile programs. These early ballistic missile development programs marked the emergence of engineering discipline 'specialists', a trend which has since continued to grow. Each of these specialties not only has a need to take data from the overall development process, but also to supply data, in the form of requirements and analysis results, to the process. A number of technical instructions, military standards and specifications, and manuals were developed as a result of these development programs. In particular, MIL-STD-499 was issued in 1969 to assist both government and contractor personnel in defining the systems engineering effort in support of defense acquisition programs. This standard was updated to MIL-STD-499A in 1974, and formed the foundation for current application of systems engineering principles to military development programs.

  11. Overview and Summary of Advanced UVOIR Mirror Technology Development (AMTD) Project

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip

    2014-01-01

    ASTRO2010 Decadal Survey stated that an advanced large-aperture ultraviolet, optical, near-infrared (UVOIR) telescope is required to enable the next generation of compelling astrophysics and exoplanet science, and that present technology is not mature enough to affordably build and launch any potential UVOIR mission concept. AMTD is a multiyear effort to develop, demonstrate, and mature critical technologies to TRL-6 by 2018 so that a viable flight mission can be proposed to the 2020 Decadal Review. AMTD builds on the state of the art (SOA) defined by over 30 years of monolithic and segmented ground- and space-telescope mirror technology to mature six key technologies:
    • Large-Aperture, Low Areal Density, High Stiffness Mirror Substrates: both monolithic (4 to 8 m) and segmented (8 to 16 m) telescopes require larger and stiffer mirrors.
    • Support System: large-aperture mirrors require large support systems to ensure that they survive launch, deploy on orbit, and maintain a stable, undistorted shape.
    • Mid/High Spatial Frequency Figure Error: a very smooth mirror surface is critical for producing a high-quality point spread function (PSF) for high-contrast imaging.
    • Segment Edges: the quality of segment edges impacts the PSF for high-contrast imaging applications, contributes to stray-light noise, and affects total collecting aperture.
    • Segment-to-Segment Gap Phasing: segment phasing is critical for producing a high-quality, temporally stable PSF.
    • Integrated Model Validation: on-orbit performance is driven by mechanical and thermal stability; compliance cannot be 100% tested, but relies on modeling.
    Because we cannot predict the future, AMTD is pursuing multiple design paths to provide the science community with options to enable either large-aperture monolithic or segmented mirrors, with clear engineering metrics traceable to science requirements.

  12. A Medical Television Center; a Guide to Organizing a Large Television Center in Health Science Educational Institutions. Monograph 5.

    ERIC Educational Resources Information Center

    Potts, Robert E.

    Guidelines are presented for establishing large television centers in health science education institutions. Television distribution systems are described, and staff, equipment, space and budgetary requirements are discussed. Included are: (1) a proposed chart of organizational development and job descriptions; (2) suggested equipment purchases;…

  13. REDUCING THE WASTE STREAM: BRINGING ENVIRONMENTAL, ECONOMICAL, AND EDUCATIONAL COMPOSTING TO A LIBERAL ARTS COLLEGE

    EPA Science Inventory

    The Northfield, Minnesota area contains three institutions that produce a large amount of compostable food waste. St. Olaf College uses a large-scale on-site composting machine that effectively transforms the food waste to compost, but the system requires an immense start-up c...

  14. User Oriented Techniques to Support Interaction and Decision Making with Large Educational Databases

    ERIC Educational Resources Information Center

    Hartley, Roger; Almuhaidib, Saud M. Y.

    2007-01-01

    Information Technology is developing rapidly and providing policy/decision makers with large amounts of information that require processing and analysis. Decision support systems (DSS) aim to provide tools that not only help such analyses, but enable the decision maker to experiment and simulate the effects of different policies and selection…

  15. Integrating complexity into data-driven multi-hazard supply chain network strategies

    USGS Publications Warehouse

    Long, Suzanna K.; Shoberg, Thomas G.; Ramachandran, Varun; Corns, Steven M.; Carlo, Hector J.

    2013-01-01

    Major strategies in the wake of a large-scale disaster have focused on short-term emergency response solutions. Few consider medium-to-long-term restoration strategies that reconnect urban areas to the national supply chain networks (SCN) and their supporting infrastructure. To re-establish this connectivity, the relationships within the SCN must be defined and formulated as a model of a complex adaptive system (CAS). A CAS model is a representation of a system that consists of large numbers of inter-connections, demonstrates non-linear behaviors and emergent properties, and responds to stimulus from its environment. CAS modeling is an effective method of managing complexities associated with SCN restoration after large-scale disasters. In order to populate the data space, large data sets are required. Currently, access to these data is hampered by proprietary restrictions. The aim of this paper is to identify the data required to build a SCN restoration model, look at the inherent problems associated with these data, and understand the complexity that arises due to integration of these data.
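    The connectivity question at the heart of SCN restoration can be sketched as simple graph reachability. The miniature network, node names, and disruption scenario below are hypothetical illustrations, far simpler than a full CAS model.

```python
from collections import deque

# Hypothetical miniature supply chain network: edges point from
# suppliers/infrastructure toward the urban area they serve.
scn = {
    "port":        ["rail_hub", "warehouse_A"],
    "rail_hub":    ["warehouse_A", "warehouse_B"],
    "warehouse_A": ["urban_area"],
    "warehouse_B": ["urban_area"],
    "urban_area":  [],
}

def reachable(graph, source, disrupted=frozenset()):
    """Nodes reachable from `source` when `disrupted` nodes are down (BFS)."""
    seen, queue = set(), deque([source])
    while queue:
        node = queue.popleft()
        if node in seen or node in disrupted:
            continue
        seen.add(node)
        queue.extend(graph.get(node, []))
    return seen

# Baseline: the urban area is connected to the national network.
print("urban_area" in reachable(scn, "port"))                                  # True
# A disaster takes out both warehouses: connectivity is lost.
print("urban_area" in reachable(scn, "port", {"warehouse_A", "warehouse_B"}))  # False
```

    A real restoration model would add capacities, non-linear interactions, and adaptive behavior on top of this connectivity skeleton; the sketch only shows the reachability check that defines "reconnecting" an urban area.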

  16. Measuring the embodied energy in drinking water supply systems: a case study in the Great Lakes region.

    PubMed

    Mo, Weiwei; Nasiri, Fuzhan; Eckelman, Matthew J; Zhang, Qiong; Zimmerman, Julie B

    2010-12-15

    A sustainable supply of both energy and water is critical to long-term national security, effective climate policy, natural resource sustainability, and social wellbeing. These two critical resources are inextricably and reciprocally linked; the production of energy requires large volumes of water, while the treatment and distribution of water is also significantly dependent upon energy. In this paper, a hybrid analysis approach is proposed to estimate embodied energy and to perform a structural path analysis of drinking water supply systems. The applicability of this approach is then tested through a case study of a large municipal water utility (city of Kalamazoo) in the Great Lakes region to provide insights on the issues of water-energy pricing and carbon footprints. Kalamazoo drinking water requires approximately 9.2 MJ/m³ of energy to produce, 30% of which is associated with indirect inputs such as system construction and treatment chemicals.
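    The two figures reported in the abstract (9.2 MJ/m³ total, 30% indirect) imply a direct/indirect split that can be worked out directly; the household consumption figure used below is an assumed round number for illustration, not from the study.

```python
total_mj_per_m3 = 9.2      # reported embodied energy of Kalamazoo drinking water
indirect_share = 0.30      # reported share from construction, chemicals, etc.

indirect = total_mj_per_m3 * indirect_share      # 2.76 MJ/m^3
direct = total_mj_per_m3 - indirect              # 6.44 MJ/m^3
print(f"direct: {direct:.2f} MJ/m^3, indirect: {indirect:.2f} MJ/m^3")

# Assumed household use of 250 L/day (0.250 m^3/day) for scale:
daily_embodied = 0.250 * total_mj_per_m3
print(f"household embodied energy: {daily_embodied:.2f} MJ/day")
```
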

  17. Automated biosurveillance data from England and Wales, 1991-2011.

    PubMed

    Enki, Doyo G; Noufaily, Angela; Garthwaite, Paul H; Andrews, Nick J; Charlett, André; Lane, Chris; Farrington, C Paddy

    2013-01-01

    Outbreak detection systems for use with very large multiple surveillance databases must be suited both to the data available and to the requirements of full automation. To inform the development of more effective outbreak detection algorithms, we analyzed 20 years of data (1991-2011) from a large laboratory surveillance database used for outbreak detection in England and Wales. The data relate to 3,303 distinct types of infectious pathogens, with a frequency range spanning 6 orders of magnitude. Several hundred organism types were reported each week. We describe the diversity of seasonal patterns, trends, artifacts, and extra-Poisson variability to which an effective multiple laboratory-based outbreak detection system must adjust. We provide empirical information to guide the selection of simple statistical models for automated surveillance of multiple organisms, in the light of the key requirements of such outbreak detection systems, namely, robustness, flexibility, and sensitivity.
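    A drastically simplified stand-in for the detection algorithms discussed above is a threshold on the historical mean plus a multiple of the standard deviation. Estimating the variance from the data (rather than assuming variance equals mean, as a pure Poisson model would) is one crude way to absorb the extra-Poisson variability the abstract describes; the baseline counts below are hypothetical.

```python
from statistics import mean, pvariance

def exceeds_threshold(history, current, z=3.0):
    """Flag `current` weekly count as aberrant if it exceeds the historical
    mean by z standard deviations, with the variance estimated from the
    data rather than assumed equal to the mean."""
    mu = mean(history)
    sd = pvariance(history) ** 0.5
    return current > mu + z * sd

baseline = [4, 7, 5, 6, 3, 8, 5, 6, 4, 7]   # hypothetical weekly counts
print(exceeds_threshold(baseline, 9))        # within normal variation
print(exceeds_threshold(baseline, 21))       # a likely outbreak signal
```

    Production systems layer trend and seasonality adjustment, robustness to past outbreaks, and multiple-testing control on top of this basic exceedance idea.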

  18. Motion simulator study of longitudinal stability requirements for large delta wing transport airplanes during approach and landing with stability augmentation systems failed

    NASA Technical Reports Server (NTRS)

    Snyder, C. T.; Fry, E. B.; Drinkwater, F. J., III; Forrest, R. D.; Scott, B. C.; Benefield, T. D.

    1972-01-01

    A ground-based simulator investigation was conducted in preparation for, and correlation with, an in-flight simulator program. The objective of these studies was to define minimum acceptable levels of static longitudinal stability for landing approach following stability augmentation system failures. The airworthiness authorities are presently attempting to establish the requirements for civil transports with only the backup flight control system operating. Using a baseline configuration representative of a large delta wing transport, 20 different configurations, many representing negative static margins, were assessed by three research test pilots in 33 hours of piloted operation. Verification of the baseline model to be used in the TIFS experiment was provided by computed and piloted comparisons with a well-validated reference airplane simulation. Pilot comments and ratings are included, as well as preliminary tracking performance and workload data.

  19. Results of the harmonics measurement program at the John F. Long photovoltaic house

    NASA Astrophysics Data System (ADS)

    Campen, G. L.

    1982-03-01

    Photovoltaic (PV) systems used in single-family dwellings require an inverter to act as an interface between the direct-current (dc) power output of the PV unit and the alternating-current (ac) power needed by house loads. A type of inverter known as line-commutated injects harmonic currents on the ac side and requires large amounts of reactive power. Large numbers of such PV installations could lead to unacceptable levels of harmonic voltages on the utility system, and the need to increase the utility's delivery of reactive power could result in significant cost increases. The harmonics and power-factor effects are examined for a single PV installation using a line-commutated inverter. The magnitude and phase of various currents and voltages from the fundamental to the 13th harmonic were recorded both with and without the operation of the PV system.
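    The harmonic content measured in such a study is commonly summarized as total harmonic distortion (THD). The sketch below computes THD from a spectrum of RMS magnitudes; the example magnitudes (characteristic harmonics at orders 6k ± 1, roughly I₁/n each, typical of a six-pulse line-commutated converter) are assumed values, not the John F. Long measurements.

```python
import math

def total_harmonic_distortion(harmonics):
    """THD of a waveform given RMS magnitudes {order: amps};
    order 1 is the fundamental."""
    fundamental = harmonics[1]
    distortion = math.sqrt(sum(a * a for n, a in harmonics.items() if n > 1))
    return distortion / fundamental

# Hypothetical line-commutated inverter current spectrum up to order 13.
i1 = 10.0
spectrum = {1: i1, 5: i1 / 5, 7: i1 / 7, 11: i1 / 11, 13: i1 / 13}
print(f"THD = {total_harmonic_distortion(spectrum):.1%}")   # about 27%
```
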

  20. Incremental wind tunnel testing of high lift systems

    NASA Astrophysics Data System (ADS)

    Victor, Pricop Mihai; Mircea, Boscoianu; Daniel-Eugeniu, Crunteanu

    2016-06-01

    Efficiency of trailing-edge high-lift systems is essential for future long-range transport aircraft evolving in the direction of laminar wings, because these systems must compensate for the low performance of the leading-edge devices. Modern high-lift systems are subject to high performance requirements and constrained to simple actuation combined with a reduced number of aerodynamic elements. Passive or active flow control is thus required for performance enhancement. An experimental investigation of a reduced-kinematics flap combined with passive flow control took place in a low-speed wind tunnel. The most important features of the experimental setup are the relatively large size, corresponding to a Reynolds number of about 2 million; the sweep angle of 30 degrees, corresponding to long-range airliners with high-sweep wings; and the large number of flap settings and mechanical vortex generators. The model description, flap settings, methodology, and results are presented.

  1. Ontology-based tools to expedite predictive model construction.

    PubMed

    Haug, Peter; Holmen, John; Wu, Xinzi; Mynam, Kumar; Ebert, Matthew; Ferraro, Jeffrey

    2014-01-01

    Large amounts of medical data are collected electronically during the course of caring for patients using modern medical information systems. This data presents an opportunity to develop clinically useful tools through data mining and observational research studies. However, the work necessary to make sense of this data and to integrate it into a research initiative can require substantial effort from medical experts as well as from experts in medical terminology, data extraction, and data analysis. This slows the process of medical research. To reduce the effort required for the construction of computable, diagnostic predictive models, we have developed a system that hybridizes a medical ontology with a large clinical data warehouse. Here we describe components of this system designed to automate the development of preliminary diagnostic models and to provide visual clues that can assist the researcher in planning for further analysis of the data behind these models.

  2. Automated Biosurveillance Data from England and Wales, 1991–2011

    PubMed Central

    Enki, Doyo G.; Noufaily, Angela; Garthwaite, Paul H.; Andrews, Nick J.; Charlett, André; Lane, Chris

    2013-01-01

    Outbreak detection systems for use with very large multiple surveillance databases must be suited both to the data available and to the requirements of full automation. To inform the development of more effective outbreak detection algorithms, we analyzed 20 years of data (1991–2011) from a large laboratory surveillance database used for outbreak detection in England and Wales. The data relate to 3,303 distinct types of infectious pathogens, with a frequency range spanning 6 orders of magnitude. Several hundred organism types were reported each week. We describe the diversity of seasonal patterns, trends, artifacts, and extra-Poisson variability to which an effective multiple laboratory-based outbreak detection system must adjust. We provide empirical information to guide the selection of simple statistical models for automated surveillance of multiple organisms, in the light of the key requirements of such outbreak detection systems, namely, robustness, flexibility, and sensitivity. PMID:23260848

  3. An Inverter Packaging Scheme for an Integrated Segmented Traction Drive System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Gui-Jia; Tang, Lixin; Ayers, Curtis William

    The standard voltage source inverter (VSI), widely used in electric vehicle/hybrid electric vehicle (EV/HEV) traction drives, requires a bulky dc bus capacitor to absorb the large switching ripple currents and prevent them from shortening the battery's life. The dc bus capacitor presents a significant barrier to meeting inverter cost, volume, and weight requirements for mass production of affordable EVs/HEVs. The large ripple currents become even more problematic for film capacitors (the capacitor technology of choice for EVs/HEVs) in high-temperature environments, as their ripple-current handling capability decreases rapidly with rising temperatures. It is shown in previous work that segmenting the VSI-based traction drive system can significantly decrease the ripple currents and thus the size of the dc bus capacitor. This paper presents an integrated packaging scheme to reduce the system cost of a segmented traction drive.

  4. Digitally controlled twelve-pulse firing generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berde, D.; Ferrara, A.A.

    1981-01-01

    Control system studies for the Tokamak Fusion Test Reactor (TFTR) indicate that accurate thyristor firing in the AC-to-DC conversion system is required in order to achieve good regulation of the various field currents. Rapid update and exact firing-angle control are required to avoid instabilities, large eddy currents, or parasitic oscillations. The Prototype Firing Generator was designed to satisfy these requirements. To achieve the required ±0.77° firing accuracy, a three-phase-locked loop reference was designed; otherwise, the Firing Generator employs digital circuitry. The unit, housed in a standard CAMAC crate, operates under microcomputer control. Functions are performed under program control, which resides in nonvolatile read-only memory. Communication with the CICADA control system is provided via an 11-bit parallel interface.
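    The relation between interface width and achievable firing-angle resolution can be worked out with a short sketch. Mapping one full 360° line cycle onto the full 2¹¹-count range of the 11-bit interface is an assumption made here for illustration; the actual firing generator's encoding is not described in the abstract.

```python
BITS = 11                      # width of the parallel interface mentioned above
COUNTS_PER_CYCLE = 2 ** BITS   # assumed: one 360-degree cycle spans 2^11 counts

def angle_to_count(angle_deg):
    """Quantize a requested thyristor firing angle to the nearest count."""
    return round(angle_deg / 360.0 * COUNTS_PER_CYCLE) % COUNTS_PER_CYCLE

resolution = 360.0 / COUNTS_PER_CYCLE
print(f"resolution: {resolution:.3f} deg/count")   # ~0.176 deg, well inside +/-0.77
print(angle_to_count(90.0))                        # 512
```

    Under this assumed encoding, the quantization step (about 0.18°) is comfortably smaller than the ±0.77° accuracy requirement, leaving margin for the analog reference error.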

  5. Bolometric detector systems for IR and mm-wave space astronomy

    NASA Technical Reports Server (NTRS)

    Church, S. E.; Lange, A. E.; Mauskopf, P. D.; Hristov, V.; Bock, J. J.; DelCastillo, H. M.; Beeman, J.; Ade, P. A. R.; Griffin, M. J.

    1996-01-01

    Recent developments in bolometric detector systems for millimeter and submillimeter wave space astronomy are described. Current technologies meet all the requirements for the high frequency instrument onboard the cosmic background radiation anisotropy satellite/satellite for the measurement of background anisotropies (COBRAS/SAMBA) platform. It is considered that the technologies that are currently being developed will significantly reduce the effective time constant and/or the cooling requirements of bolometric detectors. These technologies lend themselves to the fabrication of the large format arrays required for the Far Infrared and Submillimeter Space Telescope (FIRST). The scientific goals and detector requirements of the COBRAS/SAMBA platform that will use infrared bolometers are reviewed and the baseline detector system is described, including the feed optics, the infrared filters, the cold amplifiers and the warm readout electronics.

  6. Balloon concepts for scientific investigation of Mars and Jupiter

    NASA Technical Reports Server (NTRS)

    Ash, R. L.

    1979-01-01

    Opportunities for scientific investigation of planets with atmospheres using buoyant balloons have been explored. Mars and Jupiter were considered in this study because design requirements at those planets nominally bracket the requirements at Venus, and plans are already underway for a joint Russian-French balloon system at Venus. Viking data have provided quantitative information for the definition of specific balloon systems at Mars. Free-flying balloons appear capable of providing valuable scientific support for more sophisticated Martian surface probes, but tethered and powered aerostats are not attractive. The Jovian environment is so extreme that hot-atmosphere balloons may be the only scientific platforms capable of extended operations there. However, the estimated system mass and thermal energy required are very large.

  7. Fabrication of near-net shape graphite/magnesium composites for large mirrors

    NASA Astrophysics Data System (ADS)

    Wendt, Robert; Misra, Mohan

    1990-10-01

    Successful development of space-based surveillance and laser systems will require large precision mirrors which are dimensionally stable under thermal, static, and dynamic (i.e., structural vibrations and retargeting) loading conditions. Among the advanced composites under consideration for large space mirrors, graphite fiber reinforced magnesium (Gr/Mg) is an ideal candidate material that can be tailored to obtain an optimum combination of properties, including a high modulus of elasticity, zero coefficient of thermal expansion, low density, and high thermal conductivity. In addition, an innovative technique, combining conventional filament winding and vacuum casting has been developed to produce near-net shape Gr/Mg composites. This approach can significantly reduce the cost of fabricating large mirrors by decreasing required machining. However, since Gr/Mg cannot be polished to a reflective surface, plating is required. This paper will review research at Martin Marietta Astronautics Group on Gr/Mg mirror blank fabrication and measured mechanical and thermal properties. Also, copper plating and polishing methods, and optical surface characteristics will be presented.

  8. Large Animal Models of an In Vivo Bioreactor for Engineering Vascularized Bone.

    PubMed

    Akar, Banu; Tatara, Alexander M; Sutradhar, Alok; Hsiao, Hui-Yi; Miller, Michael; Cheng, Ming-Huei; Mikos, Antonios G; Brey, Eric M

    2018-04-12

    Reconstruction of large skeletal defects is challenging due to the requirement for large volumes of donor tissue and the often complex surgical procedures. Tissue engineering has the potential to serve as a new source of tissue for bone reconstruction, but current techniques are often limited in regards to the size and complexity of tissue that can be formed. Building tissue using an in vivo bioreactor approach may enable the production of appropriate amounts of specialized tissue, while reducing issues of donor site morbidity and infection. Large animals are required to screen and optimize new strategies for growing clinically appropriate volumes of tissues in vivo. In this article, we review both ovine and porcine models that serve as models of the technique proposed for clinical engineering of bone tissue in vivo. Recent findings are discussed with these systems, as well as description of next steps required for using these models, to develop clinically applicable tissue engineering applications.

  9. Performance of ceramic superconductors in magnetic bearings

    NASA Technical Reports Server (NTRS)

    Kirtley, James L., Jr.; Downer, James R.

    1993-01-01

    Magnetic bearings are large-scale applications of magnet technology, quite similar in certain ways to synchronous machinery. They require substantial flux density over relatively large volumes of space. Large flux density is required to have satisfactory force density. Satisfactory dynamic response requires that magnetic circuit permeances not be too large, implying large air gaps. Superconductors, which offer large magnetomotive forces and high flux density in low-permeance circuits, appear to be desirable in these situations. Flux densities substantially in excess of those possible with iron can be produced, and no ferromagnetic material is required. Thus the inductance of active coils can be made low, indicating good dynamic response of the bearing system. The principal difficulty in using superconductors is, of course, the deep cryogenic temperatures at which they must operate. Because of the difficulties in working with liquid helium, the possibility of superconductors which can be operated in liquid nitrogen is thought to extend the number and range of applications of superconductivity. Critical temperatures of about 98 K were demonstrated in a class of materials which are, in fact, ceramics. Quite a bit of public attention was attracted to these new materials. There is a difficulty with the ceramic superconducting materials developed to date: current densities sufficient for use in large-scale applications have not been demonstrated. In order to be useful, superconductors must be capable of carrying substantial currents in the presence of large magnetic fields. The possible use of ceramic superconductors in magnetic bearings is investigated and discussed, and requirements that must be achieved by superconductors operating at liquid nitrogen temperatures to make their use comparable with niobium-titanium superconductors operating at liquid helium temperatures are identified.

  10. Platform options for the Space Station program

    NASA Technical Reports Server (NTRS)

    Mangano, M. J.; Rowley, R. W.

    1986-01-01

    Platforms for polar and 28.5 deg orbits were studied to determine the platform requirements and characteristics necessary to support the science objectives. Large platforms supporting the Earth-Observing System (EOS) were initially studied. Co-orbiting platforms were derived from these designs. Because cost estimates indicated that the large platform approach was likely to be too expensive, require several launches, and generally be excessively complex, studies of small platforms were undertaken. Results of these studies show the small platform approach to be technically feasible at lower overall cost. All designs maximized hardware inheritance from the Space Station program to reduce costs. Science objectives as defined at the time of these studies are largely achievable.

  11. Open solutions to distributed control in ground tracking stations

    NASA Technical Reports Server (NTRS)

    Heuser, William Randy

    1994-01-01

    The advent of high-speed local area networks has made it possible to interconnect small, powerful computers to function together as a single large computer. Today, distributed computer systems are the new paradigm for large-scale computing systems. However, the communications provided by the local area network are only one part of the solution. The services and protocols used by the application programs to communicate across the network are as indispensable as the local area network. And the selection of services and protocols that do not match the system requirements will limit the capabilities, performance, and expansion of the system. Proprietary solutions are available but are usually limited to a select set of equipment. However, there are two solutions based on 'open' standards. The question that must be answered is 'which one is the best for my job?' This paper examines a model for tracking stations and their requirements for interprocessor communications in the next century. The model and requirements are matched with the model and services provided by five different software architectures and supporting protocol solutions. Several key services are examined in detail to determine which services and protocols most closely match the requirements for the tracking station environment. The study reveals that the protocols are tailored to the problem domains for which they were originally designed. Further, the study reveals that the process control model is the closest match to the tracking station model.

  12. 76 FR 39470 - Integrated Resource Plan

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-06

    ... region's natural resources. One component of this mission is the generation, transmission, and sale of reliable and affordable electric energy. TVA operates the nation's largest public power system, producing 4... 56 directly served large industrial and Federal customers. The TVA Act requires the TVA power system...

  13. On the decentralized control of large-scale systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chong, C.

    1973-01-01

    The decentralized control of stochastic large-scale systems was considered, with particular emphasis on control strategies that utilize decentralized information and can be computed in a decentralized manner. The deterministic constrained optimization problem is generalized to the stochastic case, in which each decision variable depends on different information and the constraint is only required to be satisfied on the average. For problems with a particular structure, a hierarchical decomposition is obtained. For the stochastic control of dynamic systems with different information sets, a new kind of optimality is proposed which exploits the coupled nature of the dynamic system. The subsystems are assumed to be uncoupled, and certain constraints are then required to be satisfied, either in an off-line or an on-line fashion. For off-line coordination, a hierarchical approach to solving the problem is obtained, in which the lower-level problems are all uncoupled. For on-line coordination, a distinction is made between open-loop feedback optimal coordination and closed-loop optimal coordination.
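
    The averaged-constraint formulation and its hierarchical decomposition can be sketched as follows; the symbols (J_i, g_i, lambda) are illustrative, not taken from the thesis:

```latex
\min_{\gamma_1,\dots,\gamma_N}\;
\mathbb{E}\Big[\textstyle\sum_{i=1}^{N} J_i(x_i,u_i)\Big]
\quad\text{s.t.}\quad
\mathbb{E}\Big[\textstyle\sum_{i=1}^{N} g_i(x_i,u_i)\Big]\le 0,
\qquad u_i=\gamma_i(y_i).
```

    Adjoining the averaged constraint with a multiplier $\lambda \ge 0$ gives the Lagrangian $\mathbb{E}\big[\sum_i \big(J_i + \lambda^{\top} g_i\big)\big]$, which separates into $N$ uncoupled lower-level problems, one per local information set $y_i$, coordinated by an upper level that adjusts $\lambda$.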

  14. Static Schedulers for Embedded Real-Time Systems

    DTIC Science & Technology

    1989-12-01

    Because of the need for efficient scheduling algorithms in large-scale real-time systems, software engineers put a lot of effort into developing...provide static schedulers for the Embedded Real-Time Systems with a single processor using the Ada programming language. The independent nonpreemptable...support the Computer Aided Rapid Prototyping for Embedded Real-Time Systems so that we determine whether the system, as designed, meets the required
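
    For independent, nonpreemptable tasks on a single processor that are all released together, a classical static-scheduling result (Jackson's rule: sequence by earliest deadline to minimize maximum lateness) gives a simple feasibility check. This is a minimal sketch of that idea, not the schedulers described in the report:

```python
# Static scheduling of independent, nonpreemptable tasks on one processor,
# all released at time zero. Jackson's rule (earliest-deadline order)
# minimizes maximum lateness, so if this order misses a deadline, no
# static order can meet them all.

def static_schedule(tasks):
    """tasks: list of (name, execution_time, deadline).
    Returns (schedule, feasible), where schedule is a list of
    (name, start, finish) tuples in execution order."""
    order = sorted(tasks, key=lambda t: t[2])  # earliest deadline first
    schedule, t, feasible = [], 0, True
    for name, exec_time, deadline in order:
        start, t = t, t + exec_time
        schedule.append((name, start, t))
        if t > deadline:                        # finish past deadline
            feasible = False
    return schedule, feasible

# Example: B (deadline 4) must run first, then C, then A.
sched, ok = static_schedule([("A", 2, 9), ("B", 3, 4), ("C", 1, 6)])
```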

  15. Development of a real-time microchip PCR system for portable plant disease diagnosis.

    PubMed

    Koo, Chiwan; Malapi-Wight, Martha; Kim, Hyun Soo; Cifci, Osman S; Vaughn-Diaz, Vanessa L; Ma, Bo; Kim, Sungman; Abdel-Raziq, Haron; Ong, Kevin; Jo, Young-Ki; Gross, Dennis C; Shim, Won-Bo; Han, Arum

    2013-01-01

    Rapid and accurate detection of plant pathogens in the field is crucial to prevent the proliferation of infected crops. The polymerase chain reaction (PCR) is the most reliable and accepted method for plant pathogen diagnosis; however, current conventional PCR machines are not portable and require additional post-processing steps to detect the amplified DNA (amplicon) of pathogens. Real-time PCR can directly quantify the amplicon during DNA amplification without the need for post-processing, and is thus more suitable for field operations, but it still takes time and requires large instruments that are costly and not portable. Microchip PCR systems have emerged in the past decade to miniaturize conventional PCR systems and to reduce operation time and cost. Real-time microchip PCR systems have also emerged, but unfortunately all reported portable real-time microchip PCR systems require various auxiliary instruments. Here we present a stand-alone real-time microchip PCR system composed of a PCR reaction chamber microchip with an integrated thin-film heater, a compact fluorescence detector to detect amplified DNA, a microcontroller to control the entire thermocycling operation with data acquisition capability, and a battery. The entire system is 25 × 16 × 8 cm³ in size and 843 g in weight. The disposable microchip requires only an 8-µl sample volume, and a single PCR run consumes 110 mAh of power. A DNA extraction protocol, notably without the use of liquid nitrogen, chemicals, or other large lab equipment, was developed for field operations. The developed real-time microchip PCR system and the DNA extraction protocol were used to successfully detect six different fungal and bacterial plant pathogens with a 100% success rate, down to a detection limit of 5 ng per 8-µl sample.
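
    The reported 110 mAh per PCR run allows a rough field-endurance estimate; the battery capacity below is a hypothetical example, not a figure from the abstract:

```python
# Rough field-endurance estimate from the reported 110 mAh per PCR run.
charge_per_run_mAh = 110
battery_capacity_mAh = 2000            # hypothetical small battery pack

runs_per_charge = battery_capacity_mAh // charge_per_run_mAh
print(runs_per_charge)                 # whole runs per charge
```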

  16. Development of a Real-Time Microchip PCR System for Portable Plant Disease Diagnosis

    PubMed Central

    Kim, Hyun Soo; Cifci, Osman S.; Vaughn-Diaz, Vanessa L.; Ma, Bo; Kim, Sungman; Abdel-Raziq, Haron; Ong, Kevin; Jo, Young-Ki; Gross, Dennis C.; Shim, Won-Bo; Han, Arum

    2013-01-01

    Rapid and accurate detection of plant pathogens in the field is crucial to prevent the proliferation of infected crops. The polymerase chain reaction (PCR) is the most reliable and accepted method for plant pathogen diagnosis; however, current conventional PCR machines are not portable and require additional post-processing steps to detect the amplified DNA (amplicon) of pathogens. Real-time PCR can directly quantify the amplicon during DNA amplification without the need for post-processing, and is thus more suitable for field operations, but it still takes time and requires large instruments that are costly and not portable. Microchip PCR systems have emerged in the past decade to miniaturize conventional PCR systems and to reduce operation time and cost. Real-time microchip PCR systems have also emerged, but unfortunately all reported portable real-time microchip PCR systems require various auxiliary instruments. Here we present a stand-alone real-time microchip PCR system composed of a PCR reaction chamber microchip with an integrated thin-film heater, a compact fluorescence detector to detect amplified DNA, a microcontroller to control the entire thermocycling operation with data acquisition capability, and a battery. The entire system is 25 × 16 × 8 cm³ in size and 843 g in weight. The disposable microchip requires only an 8-µl sample volume, and a single PCR run consumes 110 mAh of power. A DNA extraction protocol, notably without the use of liquid nitrogen, chemicals, or other large lab equipment, was developed for field operations. The developed real-time microchip PCR system and the DNA extraction protocol were used to successfully detect six different fungal and bacterial plant pathogens with a 100% success rate, down to a detection limit of 5 ng per 8-µl sample. PMID:24349341

  17. New technologies for HWIL testing of WFOV, large-format FPA sensor systems

    NASA Astrophysics Data System (ADS)

    Fink, Christopher

    2016-05-01

    Advancements in FPA density and the associated wide-field-of-view infrared sensors (≥4000 × 4000 detectors) have outpaced current-art HWIL technology. Whether testing in optical-projection or digital-signal-injection modes, current-art technologies for infrared scene projection, digital injection interfaces, and scene generation systems simply lack the required resolution and bandwidth. For example, the L3 Cincinnati Electronics ultra-high-resolution MWIR camera deployed in some UAV reconnaissance systems features 16 MP resolution at 60 Hz, while the current upper limit of IR emitter arrays is ~1 MP, and the single-channel dual-link DVI throughput of COTS graphics cards is limited to 2560 x 1600 pixels at 60 Hz. Moreover, there are significant challenges in real-time, closed-loop, physics-based IR scene generation for large-format FPAs, including the size and spatial detail required for very large area terrains, and the multi-channel low-latency synchronization needed to achieve the required bandwidth. In this paper, the author's team presents some of their ongoing research and technical approaches toward HWIL testing of large-format FPAs with wide-FOV optics. One approach presented is a hybrid projection/injection design, in which digital signal injection is used to augment the resolution of current-art IRSPs. A multi-channel, high-fidelity, physics-based IR scene simulator is used in conjunction with a novel image composition hardware unit to allow projection in the foveal region of the sensor, while non-foveal regions of the sensor array are simultaneously stimulated via direct injection into the post-detector electronics.
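
    The bandwidth mismatch can be checked with back-of-envelope pixel rates; the sensor and DVI figures follow the abstract, and the 1024 × 1024 emitter array is an assumed example of "~1 MP":

```python
# Pixel-rate comparison for HWIL stimulation of a large-format FPA.
sensor_rate = 4000 * 4000 * 60     # 16 MP sensor at 60 Hz
emitter_rate = 1024 * 1024 * 60    # ~1 MP IR emitter array at 60 Hz
dvi_rate = 2560 * 1600 * 60        # one dual-link DVI channel at 60 Hz

print(f"sensor:  {sensor_rate/1e6:.0f} Mpx/s")
print(f"emitter: {emitter_rate/1e6:.0f} Mpx/s")

# Minimum number of parallel DVI channels to feed the sensor rate:
channels = -(-sensor_rate // dvi_rate)   # ceiling division
print(f"DVI channels needed: {channels}")
```

    The factor-of-~15 gap between sensor and emitter rates is why the hybrid design projects only the foveal region and injects the rest digitally.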

  18. Supporting scalability and flexibility in a distributed management platform

    NASA Astrophysics Data System (ADS)

    Jardin, P.

    1996-06-01

    The TeMIP management platform was developed to manage very large distributed systems such as telecommunications networks. The management of these networks imposes a number of fairly stringent requirements, including partitioning of the network, division of work based on skills and target system types, and the ability to adjust functions to specific operational requirements. This requires the ability to cluster managed resources into domains that are defined entirely at runtime based on operator policies. This paper discusses some of the issues that must be addressed in order to add a dynamic dimension to a management solution.

  19. Study of electrical and chemical propulsion systems for auxiliary propulsion of large space systems, volume 2

    NASA Technical Reports Server (NTRS)

    Smith, W. W.

    1981-01-01

    The five major tasks of the program are reported. Task 1 is a literature search followed by selection and definition of seven generic spacecraft classes. Task 2 covers the determination and description of important disturbance effects. Task 3 applies the disturbances to the generic spacecraft and adds maneuver and stationkeeping functions to define total auxiliary propulsion systems requirements for control. The important auxiliary propulsion system characteristics are identified and sensitivities to control functions and large space system characteristics determined. In Task 4, these sensitivities are quantified and the optimum auxiliary propulsion system characteristics determined. Task 5 compares the desired characteristics with those available for both electrical and chemical auxiliary propulsion systems to identify the directions technology advances should take.

  20. Overview of ASC Capability Computing System Governance Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott W.

    This document contains a description of the Advanced Simulation and Computing Program's Capability Computing System Governance Model. Objectives of the Governance Model are to ensure that the capability system resources are allocated on a priority-driven basis according to the Program requirements; and to utilize ASC Capability Systems for the large capability jobs for which they were designed and procured.

  1. Grid-connected PV systems - How and where they fit

    NASA Astrophysics Data System (ADS)

    Thomas, M. G.; Jones, G. J.

    The use of grid-connected photovoltaic systems requires substantial improvements in system economics. By integrating anticipated improvements in economics with consumer needs and perceptions, the various potential applications have been rank-ordered. Third-party ownership of large systems appears to have the largest potential, residential applications have modest potential, and the potential of intermediate dedicated-load applications appears to be small.

  2. Structural design of the Large Deployable Reflector (LDR)

    NASA Technical Reports Server (NTRS)

    Satter, Celeste M.; Lou, Michael C.

    1991-01-01

    An integrated Large Deployable Reflector (LDR) analysis model was developed to enable studies of system responses to the mechanical and thermal disturbances anticipated during on-orbit operations. Functional requirements of the major subsystems of the LDR are investigated, design trades are conducted, and design options are proposed. System mass and inertia properties are computed in order to estimate environmental disturbances, and in the sizing of control system hardware. Scaled system characteristics are derived for use in evaluating launch capabilities and achievable orbits. It is concluded that a completely passive 20-m primary appears feasible for the LDR from the standpoint of both mechanical vibration and thermal distortions.

  3. Structural design of the Large Deployable Reflector (LDR)

    NASA Astrophysics Data System (ADS)

    Satter, Celeste M.; Lou, Michael C.

    1991-09-01

    An integrated Large Deployable Reflector (LDR) analysis model was developed to enable studies of system responses to the mechanical and thermal disturbances anticipated during on-orbit operations. Functional requirements of the major subsystems of the LDR are investigated, design trades are conducted, and design options are proposed. System mass and inertia properties are computed in order to estimate environmental disturbances, and in the sizing of control system hardware. Scaled system characteristics are derived for use in evaluating launch capabilities and achievable orbits. It is concluded that a completely passive 20-m primary appears feasible for the LDR from the standpoint of both mechanical vibration and thermal distortions.

  4. Supporting Knowledge Transfer in IS Deployment Projects

    NASA Astrophysics Data System (ADS)

    Schönström, Mikael

    Deploying a new information system is an expensive and complex task, and seldom results in successful usage where the system adds strategic value to the firm (e.g. Sharma et al. 2003). It has been argued that innovation diffusion is a knowledge integration problem (Newell et al. 2000). Knowledge about business processes, deployment processes, information systems, and technology is needed in a large-scale deployment of a corporate IS. These deployments can therefore, to a large extent, be argued to be a knowledge management (KM) problem. An effective deployment requires that knowledge about the system be effectively transferred to the target organization (Ko et al. 2005).

  5. The Space Station as a Construction Base for Large Space Structures

    NASA Technical Reports Server (NTRS)

    Gates, R. M.

    1985-01-01

    The feasibility of using the Space Station as a construction site for large space structures is examined. An overview is presented of the results of a program entitled Definition of Technology Development Missions (TDM's) for Early Space Stations - Large Space Structures. The definition of LSS technology development missions must be responsive to the needs of future space missions which require large space structures. Long range plans for space were assembled by reviewing Space System Technology Models (SSTM) and other published sources. Those missions which will use large space structures were reviewed to determine the objectives which must be demonstrated by technology development missions. The three TDM's defined during this study are: (1) a construction storage/hangar facility; (2) a passive microwave radiometer; and (3) a precision optical system.

  6. A superconducting large-angle magnetic suspension

    NASA Technical Reports Server (NTRS)

    Downer, James; Goldie, James; Torti, Richard

    1991-01-01

    The component technologies required for an advanced control moment gyro (CMG) type of slewing actuator for large payloads were developed. The key component of the CMG is a large-angle magnetic suspension (LAMS). The LAMS combines the functions of the gimbal structure, torque motors, and rotor bearings of a CMG. The LAMS uses a single superconducting source coil and an array of cryoresistive control coils to produce a specific output torque more than an order of magnitude greater than that of conventional devices. The LAMS system designed and tested is based on an available superconducting solenoid, an array of twelve room-temperature normal control coils, and a multi-input, multi-output control system. Control laws for stabilizing and controlling the LAMS system were demonstrated.

  7. Data-driven indexing mechanism for the recognition of polyhedral objects

    NASA Astrophysics Data System (ADS)

    McLean, Stewart; Horan, Peter; Caelli, Terry M.

    1992-02-01

    This paper is concerned with the problem of searching large model databases. To date, most object recognition systems have concentrated on the problem of matching using simple searching algorithms. This is quite acceptable when the number of object models is small. However, in the future, general purpose computer vision systems will be required to recognize hundreds or perhaps thousands of objects and, in such circumstances, efficient searching algorithms will be needed. The problem of searching a large model database is one which must be addressed if future computer vision systems are to be at all effective. In this paper we present a method we call data-driven feature-indexed hypothesis generation as one solution to the problem of searching large model databases.
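
    The core of feature-indexed hypothesis generation is a precomputed inverted index from observed features to candidate models, so that matching touches only a small subset of a large database. This is a minimal sketch with hypothetical model and feature names, not the authors' implementation:

```python
# Data-driven feature indexing: map an observed feature (here, a vertex's
# incident-edge count paired with a coarse angle bin) to the subset of
# models containing it, instead of searching every model.

from collections import defaultdict

def build_index(models):
    """models: {model_name: [feature, ...]}; features are hashable keys."""
    index = defaultdict(set)
    for name, features in models.items():
        for f in features:
            index[f].add(name)
    return index

models = {
    "cube":    [("vertex", 3, 90)],                      # 3 edges at 90 deg
    "wedge":   [("vertex", 3, 90), ("vertex", 3, 45)],
    "pyramid": [("vertex", 4, 60)],
}
index = build_index(models)

# An observed feature generates hypotheses only for the indexed models:
hypotheses = index[("vertex", 3, 90)]
print(hypotheses)
```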

  8. NASA Integrated Vehicle Health Management (NIVHM) A New Simulation Architecture. Part I; An Investigation

    NASA Technical Reports Server (NTRS)

    Sheppard, Gene

    2005-01-01

    The overall objective of this research is to explore the development of a new architecture for simulating a vehicle health monitoring system in support of NASA's on-going Integrated Vehicle Health Monitoring (IVHM) initiative. As discussed in NASA MSFC's IVHM workshop on June 29-July 1, 2004, a large number of sensors will be required for a robust IVHM system. The current simulation architecture is incapable of simulating the large number of sensors required for IVHM. Processing the data from the sensors into a format that a human operator can understand and assimilate in a timely manner will require a paradigm shift. Data from a single sensor is, at best, suspect, and redundancy will be required of tomorrow's sensors to overcome this deficiency. The sensor technology of tomorrow will allow for the placement of thousands of sensors per square inch. The major obstacle will then be how to reduce the torrent of raw sensor data to information useful for computer-assisted decision-making.

  9. Investigation of noise insensitive electronic circuits for automotive applications with particular regard to MOS circuits

    NASA Astrophysics Data System (ADS)

    Gorille, I.

    1980-11-01

    The application of highly complex MOS switching circuits in essential automobile systems, such as ignition and injection, was investigated. A bipolar circuit technology, current hogging logic (CHL), was compared with MOS technologies to assess its competitiveness. The functional requirements of digital automotive systems can only be met by technologies allowing large packing densities and medium speeds. The properties of n-MOS and CMOS are promising, whereas the electrical power needed by p-MOS circuits is in general prohibitively large.

  10. Satellite power system (SPS) concept definition study. Volume 3: Experimental verification definition

    NASA Technical Reports Server (NTRS)

    Hanley, G. M.

    1980-01-01

    An evolutionary Satellite Power Systems development plan was prepared. Planning analysis was directed toward the evolution of a scenario that met the stated objectives, was technically possible and economically attractive, and took into account constraining considerations, such as requirements for very large scale end-to-end demonstration in a compressed time frame, the relative cost/technical merits of ground testing versus space testing, and the need for large mass flow capability to low Earth orbit and geosynchronous orbit at reasonable cost per pound.

  11. On the resilience of helical magnetic fields to turbulent diffusion and the astrophysical implications

    NASA Astrophysics Data System (ADS)

    Blackman, Eric G.; Subramanian, Kandaswamy

    2013-02-01

    The extent to which large-scale magnetic fields are susceptible to turbulent diffusion is important for interpreting the need for in situ large-scale dynamos in astrophysics and for observationally inferring field strengths compared to kinetic energy. By solving coupled evolution equations for magnetic energy and magnetic helicity in a system initialized with isotropic turbulence and an arbitrarily helical large-scale field, we quantify the decay rate of the latter for a bounded or periodic system. The magnetic energy associated with the non-helical large-scale field decays at least as fast as the kinematically estimated turbulent diffusion rate, but the decay rate of the helical part depends on whether the ratio of its magnetic energy to the turbulent kinetic energy exceeds a critical value given by M1,c = (k1/k2)², where k1 and k2 are the wavenumbers of the large and forcing scales. Turbulently diffusing helical fields to small scales while conserving magnetic helicity requires a rapid increase in total magnetic energy. As such, only when the helical field is subcritical can it so diffuse. When supercritical, it decays slowly, at a rate determined by microphysical dissipation even in the presence of macroscopic turbulence. In effect, turbulent diffusion of such a large-scale helical field produces small-scale helicity whose amplification abates further turbulent diffusion. Two curious implications are that (1) standard arguments supporting the need for in situ large-scale dynamos, based on the otherwise rapid turbulent diffusion of large-scale fields, require re-thinking, since only the large-scale non-helical field is so diffused in a closed system; boundary terms could, however, provide potential pathways for rapid change of the large-scale helical field; and (2) since M1,c ≪ 1 for k1 ≪ k2, the presence of long-lived ordered large-scale helical fields, as in extragalactic jets, does not guarantee that the magnetic field dominates the kinetic energy.
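
    The decay criterion above can be written compactly; the energy symbols are illustrative labels for the quantities named in the abstract:

```latex
\frac{E_{M,1}^{\mathrm{hel}}}{E_K}
\;\lessgtr\;
M_{1,c} \equiv \left(\frac{k_1}{k_2}\right)^{2}
```

    A subcritical helical field (ratio below $M_{1,c}$) turbulently diffuses at roughly the kinematic rate, while a supercritical one decays only at the slow, microphysically determined resistive rate.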

  12. System Guidelines for EMC Safety-Critical Circuits: Design, Selection, and Margin Demonstration

    NASA Technical Reports Server (NTRS)

    Lawton, R. M.

    1996-01-01

    Demonstration of required safety margins on critical electrical/electronic circuits in large, complex systems has become an implementation and cost problem. These margins are the difference between the activation level of the circuit and the electrical noise on the circuit in the actual operating environment. This document discusses the origin of the requirement and gives a detailed process flow for identifying the system electromagnetic compatibility (EMC) critical circuit list. The process flow discusses the roles of engineering disciplines such as systems engineering, safety, and EMC. Design and analysis guidelines are provided to assist the designer in assuring that the system design has a high probability of meeting the margin requirements. Examples of approaches used on actual programs (Skylab and the Space Shuttle Solid Rocket Booster) are provided to show how variations of the approach can be used successfully.
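
    EMC safety margins of this kind are commonly expressed in decibels as the ratio of the circuit's activation (threshold) level to the measured noise. This is a generic sketch; the voltage values and the 6 dB requirement below are example figures, not taken from the document:

```python
# Safety margin in dB between a circuit's activation level and the
# electrical noise measured on it in the operating environment.
import math

def margin_db(threshold_v, noise_v):
    return 20 * math.log10(threshold_v / noise_v)

m = margin_db(threshold_v=1.0, noise_v=0.25)   # 1 V threshold, 250 mV noise
print(f"margin = {m:.1f} dB")
meets_requirement = m >= 6.0                   # example 6 dB requirement
```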

  13. Holographic Airborne Rotating Lidar Instrument Experiment (HARLIE)

    NASA Technical Reports Server (NTRS)

    Schwemmer, Geary K.

    1998-01-01

    Scanning holographic lidar receivers are currently in use in two operational lidar systems, PHASERS (Prototype Holographic Atmospheric Scanner for Environmental Remote Sensing) and now HARLIE (Holographic Airborne Rotating Lidar Instrument Experiment). These systems are based on volume phase holograms made in dichromated gelatin (DCG) sandwiched between 2 layers of high quality float glass. They have demonstrated the practical application of this technology to compact scanning lidar systems at 532 and 1064 nm wavelengths, the ability to withstand moderately high laser power and energy loading, sufficient optical quality for most direct detection systems, overall efficiencies rivaling conventional receivers, and the stability to last several years under typical lidar system environments. Their size and weight are approximately half of similar performing scanning systems using reflective optics. The cost of holographic systems will eventually be lower than the reflective optical systems depending on their degree of commercialization. There are a number of applications that require or can greatly benefit from a scanning capability. Several of these are airborne systems, which either use focal plane scanning, as in the Laser Vegetation Imaging System or use primary aperture scanning, as in the Airborne Oceanographic Lidar or the Large Aperture Scanning Airborne Lidar. The latter class requires a large clear aperture opening or window in the aircraft. This type of system can greatly benefit from the use of scanning transmission holograms of the HARLIE type because the clear aperture required is only about 25% larger than the collecting aperture as opposed to 200-300% larger for scan angles of 45 degrees off nadir.

  14. A Novel Low-Power, High-Performance, Zero-Maintenance Closed-Path Trace Gas Eddy Covariance System with No Water Vapor Dilution or Spectroscopic Corrections

    NASA Astrophysics Data System (ADS)

    Sargent, S.; Somers, J. M.

    2015-12-01

    Trace-gas eddy covariance flux measurements can be made with open-path or closed-path analyzers. Traditional closed-path trace-gas analyzers use multipass absorption cells that behave as mixing volumes, requiring high sample flow rates to achieve useful frequency response. The high sample flow rate and the need to keep the multipass cell extremely clean dictate the use of a fine-pore filter that may clog quickly. A large-capacity filter cannot be used because it would degrade the EC system frequency response. The high flow rate also requires a powerful vacuum pump, which will typically consume on the order of 1000 W. The analyzer must measure water vapor for spectroscopic and dilution corrections. Open-path analyzers are available for methane, but not for nitrous oxide. The currently available methane analyzers have low power consumption but are very large. Their large size degrades frequency response and disturbs the air flow near the sonic anemometer. They require significant maintenance to keep the exposed multipass optical surfaces clean. Water vapor measurements for dilution and spectroscopic corrections require a separate water vapor analyzer. A new closed-path eddy covariance system for measuring nitrous oxide or methane fluxes provides an elegant solution. The analyzer (TGA200A, Campbell Scientific, Inc.) uses a thermoelectrically cooled interband cascade laser. Its small sample-cell volume and unique sample-cell configuration (200 ml, 1.5 m single pass) provide excellent frequency response with a low-power scroll pump (240 W). A new single-tube Nafion® dryer removes most of the water vapor and attenuates fluctuations in the residual water vapor. Finally, a vortex intake assembly eliminates the need for an intake filter without adding volume that would degrade system frequency response. Laboratory testing shows the system attenuates the water vapor dilution term by more than 99% and achieves a half-power bandwidth of 3.5 Hz.

  15. Optimization of hybrid power system composed of SMES and flywheel MG for large pulsed load

    NASA Astrophysics Data System (ADS)

    Niiyama, K.; Yagai, T.; Tsuda, M.; Hamajima, T.

    2008-09-01

    A superconducting magnetic energy storage (SMES) system has advantages, such as rapid large-power response and high storage efficiency, that are superior to other energy storage systems. A flywheel motor-generator (FWMG) offers large-scale capacity and high reliability, and hence is broadly utilized for large pulsed loads, although it has comparatively low storage efficiency due to its high mechanical loss compared with SMES. A fusion power plant such as the International Thermonuclear Experimental Reactor (ITER) imposes a large, long pulsed load that causes a frequency deviation in the utility power system. In order to keep the frequency within an allowable deviation, we propose a hybrid power system for the pulsed load that combines the SMES and the FWMG with the utility power system. We evaluate the installation cost and frequency-control performance of three power systems combined with energy storage devices: (i) SMES with the utility power, (ii) FWMG with the utility power, and (iii) both SMES and FWMG with the utility power. The first power system has excellent frequency-control performance, but its installation cost is high. The second system has inferior frequency-control performance, but its installation cost is the lowest. The third system has good frequency-control performance, and its installation cost can be made lower than that of the first power system by adjusting the ratio between SMES and FWMG.

  16. Method for large and rapid terahertz imaging

    DOEpatents

    Williams, Gwyn P.; Neil, George R.

    2013-01-29

    A method of large-scale active THz imaging using a combination of a compact high-power THz source (>1 watt), an optional optical system, and a camera for the detection of reflected or transmitted THz radiation, without the need for the burdensome power source or detector cooling systems required by similar prior-art devices. With such a system, one is able to image, for example, a whole person in seconds or less, whereas at present, using low-power sources and scanning techniques, it takes several minutes or even hours to image even a 1 cm × 1 cm area of skin.

  17. Verification of Space Station Secondary Power System Stability Using Design of Experiment

    NASA Technical Reports Server (NTRS)

    Karimi, Kamiar J.; Booker, Andrew J.; Mong, Alvin C.; Manners, Bruce

    1998-01-01

    This paper describes analytical methods used in verification of large DC power systems with applications to the International Space Station (ISS). Large DC power systems contain many switching power converters with negative resistor characteristics. The ISS power system presents numerous challenges with respect to system stability such as complex sources and undefined loads. The Space Station program has developed impedance specifications for sources and loads. The overall approach to system stability consists of specific hardware requirements coupled with extensive system analysis and testing. Testing of large complex distributed power systems is not practical due to size and complexity of the system. Computer modeling has been extensively used to develop hardware specifications as well as to identify system configurations for lab testing. The statistical method of Design of Experiments (DoE) is used as an analysis tool for verification of these large systems. DoE reduces the number of computer runs that are necessary to analyze the performance of a complex power system consisting of hundreds of DC/DC converters. DoE also provides valuable information about the effect of changes in system parameters on the performance of the system. DoE provides information about various operating scenarios and identification of the ones with potential for instability. In this paper we will describe how we have used computer modeling to analyze a large DC power system. A brief description of DoE is given. Examples using applications of DoE to analysis and verification of the ISS power system are provided.
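
    A minimal sketch of why DoE cuts the number of simulation runs, under the assumption of a two-level factorial design (the paper does not specify which designs were used): a full factorial over k parameters needs 2**k runs, while a half-fraction keeps only runs satisfying a defining relation, halving the count while still varying every factor.

```python
# Full two-level factorial vs. a half-fraction selected by the defining
# relation "product of all factor levels == +1".
import math
from itertools import product

k = 7                                    # e.g. seven system parameters
full = list(product([-1, +1], repeat=k))                 # 2**k runs
half = [run for run in full if math.prod(run) == +1]     # half-fraction

print(len(full))   # full factorial run count
print(len(half))   # half-fraction run count
```

    Deeper 2**(k-p) fractions shrink the design further at the cost of confounding higher-order interactions.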

  18. Intrusion-Tolerant Replication under Attack

    ERIC Educational Resources Information Center

    Kirsch, Jonathan

    2010-01-01

    Much of our critical infrastructure is controlled by large software systems whose participants are distributed across the Internet. As our dependence on these critical systems continues to grow, it becomes increasingly important that they meet strict availability and performance requirements, even in the face of malicious attacks, including those…

  19. Thermal protection system flight repair kit

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A thermal protection system (TPS) flight repair kit required for use on a flight of the Space Transportation System is defined. A means of making TPS repairs in orbit by the crew via extravehicular activity is discussed. A cure-in-place ablator, a precured ablator (for large-area application), and packaging designs (containers for mixing and dispensing) for the TPS are investigated.

  20. Optically addressed ultra-wideband phased antenna array

    NASA Astrophysics Data System (ADS)

    Bai, Jian

    Demands for high data rates and multifunctional apertures from both civilian and military users have motivated the development of ultra-wideband (UWB) electrically steered phased arrays. Meanwhile, the need for large contiguous frequency bands is pushing the operation of radio systems into the millimeter-wave (mm-wave) range. Therefore, modern radio systems require UWB performance from VHF to mm-wave. However, traditional electronic systems suffer many challenges that make achieving these requirements difficult. Several examples include: voltage-controlled oscillators (VCOs) cannot provide a tunable range of several octaves, distribution of wideband local-oscillator signals undergoes high loss and dispersion through RF transmission lines, and antennas have very limited bandwidth or bulky sizes. Recently, RF photonics technology has drawn considerable attention because of its advantages over traditional systems, with the capability of offering extreme power efficiency, information capacity, frequency agility, and spatial beam diversity. A hybrid RF photonic communication system utilizing optical links and an RF transducer at the antenna potentially provides ultra-wideband data transmission, i.e., over 100 GHz. A successful implementation of such an optically addressed phased array requires addressing several key challenges. Photonic generation of an RF source with over a seven-octave bandwidth has been demonstrated in the last few years. However, one challenge that still remains is how to convey phased optical signals to downconversion modules and antennas. Therefore, a feed network with phase-sweeping capability and low excess phase noise needs to be developed. Another key challenge is to develop an ultra-wideband array antenna. Modern frontends require antennas to be compact, planar, and low-profile in addition to possessing broad bandwidth, conforming to stringent space, weight, cost, and power constraints.
To address these issues, I will study broadband and miniaturization techniques for both single and array antennas. In addition, a prototype transmitting phased-array system is developed and shown to demonstrate large bandwidth as well as beam-steering capability. The architecture of this system can be extended to a large-scale array at higher frequencies such as mm-wave, making this solution a candidate for UWB multifunctional frontends.

  1. Network issues for large mass storage requirements

    NASA Technical Reports Server (NTRS)

    Perdue, James

    1992-01-01

    File servers and supercomputing environments need high-performance networks to balance the I/O requirements of today's demanding computing scenarios. UltraNet is one solution that permits both high aggregate transfer rates and high task-to-task transfer rates, as demonstrated in actual tests. UltraNet provides this capability as both a server-to-server and server-to-client access network, giving the supercomputing center the following advantages: highest-performance transport-level connections (up to 40 MBytes/sec effective rates); throughput matching that of emerging high-performance disk technologies such as RAID, parallel-head transfer devices, and software striping; support for standard network and file system applications through a sockets-based application program interface (FTP, rcp, rdump, etc.); access to the Network File System (NFS) and large aggregate bandwidth for heavy NFS usage; access to a distributed, hierarchical data server capability using the DISCOS UniTree product; and support for file server solutions available from multiple vendors, including Cray, Convex, Alliant, FPS, and IBM.

  2. Demonstration of Vibrational Braille Code Display Using Large Displacement Micro-Electro-Mechanical Systems Actuators

    NASA Astrophysics Data System (ADS)

    Watanabe, Junpei; Ishikawa, Hiroaki; Arouette, Xavier; Matsumoto, Yasuaki; Miki, Norihisa

    2012-06-01

    In this paper, we present a vibrational Braille code display based on large-displacement micro-electro-mechanical systems (MEMS) actuator arrays. Tactile receptors are more sensitive to vibrational stimuli than to static ones; therefore, when each cell of the Braille code vibrates at an optimal frequency, subjects can recognize the codes more efficiently. We fabricated a vibrational Braille code display whose cells are actuators consisting of piezoelectric actuators and a hydraulic displacement amplification mechanism (HDAM). The HDAM, which encapsulates an incompressible liquid in microchambers bounded by two flexible polymer membranes, amplifies the displacement of the MEMS actuator. We investigated the voltage required for subjects to recognize Braille codes when each cell, i.e., the large-displacement MEMS actuator, vibrated at various frequencies. Lower voltages were required at vibration frequencies above 50 Hz than below 50 Hz, verifying that the proposed vibrational Braille code display efficiently exploits the characteristics of human tactile receptors.

  3. Operations management system

    NASA Technical Reports Server (NTRS)

    Brandli, A. E.; Eckelkamp, R. E.; Kelly, C. M.; Mccandless, W.; Rue, D. L.

    1990-01-01

    The objective of an operations management system is to provide an orderly and efficient method to operate and maintain aerospace vehicles. Concepts for an operations management system are described, and the key technologies that will be required to bring this capability to fruition are highlighted. Without this automation and decision-aiding capability, the growing complexity of avionics will result in an unmanageable workload for the operator, ultimately threatening mission success or survivability of the aircraft or space system. The key technologies include expert-system application to operational tasks such as replanning, equipment diagnostics and checkout, global system management, and advanced man-machine interfaces. The economical development of operations management systems, which are largely software, will require advancements in other technological areas such as software engineering and computer hardware.

  4. Integration of Mirror Design with Suspension System Using NASA's New Mirror Modeling Software

    NASA Technical Reports Server (NTRS)

    Arnold, William R., Sr.; Bevan, Ryan M.; Stahl, H. Philip

    2013-01-01

    Advances in mirror fabrication are making very large space-based telescopes possible. In many applications, only monolithic mirrors can meet the performance requirements. The existing and near-term planned heavy launch vehicles place a premium on the lowest possible mass, while available payload shroud sizes limit near-term designs to 4-meter-class mirrors. Practical designs for 8-meter-class and larger mirrors could encourage planners to include larger shrouds, if it can be proven that such mirrors can be manufactured. These two factors, lower mass and larger mirrors, present a classic optimization problem. There is a practical upper limit to how large a mirror can be supported by a purely kinematic mount system handling both operational and launch loads. This paper shows how the suspension system and mirror blank need to be designed simultaneously. We also explore concepts for auxiliary support systems that act only during launch and disengage on orbit, define the required characteristics of these systems, and show how they can substantially reduce the mirror mass.

  6. Technical Requirements For Reactors To Be Deployed Internationally For the Global Nuclear Energy Partnership

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ingersoll, Daniel T

    2007-01-01

    Robert Price (U.S. Department of Energy, 1000 Independence Ave., SW, Washington, DC 20585) and Daniel T. Ingersoll (Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN 37831-6162).
    INTRODUCTION. The Global Nuclear Energy Partnership (GNEP) seeks to create an international regime to support large-scale growth in the worldwide use of nuclear energy. Fully meeting the GNEP vision may require the deployment of thousands of reactors in scores of countries, many of which do not currently use nuclear energy. Some of these needs will be met by large-scale Generation III and III+ reactors (>1000 MWe) and Generation IV reactors when they are available. However, because many developing countries have small and immature electricity grids, the currently available Generation III(+) reactors may be unsuitable: too large, too expensive, and too complex. Therefore, GNEP envisions new types of reactors, developed for international deployment, that are "right sized" for the developing countries and that are based on technologies, designs, and policies focused on reducing proliferation risk. The first step in developing such systems is the generation of technical requirements that will ensure that the systems meet both the GNEP policy goals and the power needs of the recipient countries.
    REQUIREMENTS. Reactor systems deployed internationally within the GNEP context must meet a number of requirements similar to the safety, reliability, economics, and proliferation goals established for the DOE Generation IV program. Because of the emphasis on deployment to non-nuclear developing countries, the requirements will be weighted differently than in Generation IV, especially regarding safety and non-proliferation goals. Also, the reactors should be sized for market conditions in developing countries, where energy demand per capita, institutional maturity, and industrial infrastructure vary considerably, and must utilize fuel that is compatible with the fuel recycle technologies being developed by GNEP. Arrangements are already underway to establish Working Groups jointly with Japan and Russia to develop requirements for reactor systems, and additional bilateral and multilateral arrangements are expected as GNEP progresses. These Working Groups will be instrumental in establishing an international consensus on reactor system requirements.
    GNEP CERTIFICATION. After establishing an accepted set of requirements for new reactors deployed internationally, a mechanism is needed that allows capable countries to continue to market their reactor technologies and services while assuring that they are compatible with GNEP goals and technologies. This will help preserve the current system of open, commercial competition while steering the international community toward common policy goals. The proposed vehicle is the concept of GNEP Certification: using objective criteria derived from the technical requirements in several key areas such as safety, security, non-proliferation, and safeguards, reactor designs could be evaluated and certified if they meet the criteria. This certification would ensure that reactor designs meet internationally approved standards and are compatible with GNEP assured fuel services.
    SUMMARY. New "right sized" power reactor systems will need to be developed and deployed internationally to fully achieve the GNEP vision of an expanded use of nuclear energy worldwide. The technical requirements for these systems are being developed through national and international Working Groups. The process is expected to culminate in a new GNEP Certification process that enables commercial competition while ensuring that the policy goals of GNEP are adequately met.

  7. Accommodation-based liquid crystal adaptive optics system for large ocular aberration correction.

    PubMed

    Mu, Quanquan; Cao, Zhaoliang; Li, Chao; Jiang, Baoguang; Hu, Lifa; Xuan, Li

    2008-12-15

    Based on ocular aberration properties and liquid crystal (LC) corrector characteristics, we calculated the minimum pixel count the LC corrector needs to compensate large ocular aberrations. An accommodation-based optical configuration was then introduced to reduce this demand, and an adaptive optics (AO) retinal imaging system was built around it. Subjects with different amounts of defocus and astigmatism were tested to validate the approach. For myopia below 5 D the system performs well; when the myopia is as large as 8 D, the accommodation error increases to nearly 3 D, which requires the LC corrector to have 667 x 667 pixels to obtain a well-corrected image.
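    The pixel demand quoted above can be reproduced to order of magnitude with a simple sampling argument: a defocus of D diopters over a pupil of radius r produces a peak-to-valley wavefront error of roughly D·r²/2, and the wrapped phase on the LC panel must be sampled with several pixels per fringe at the pupil edge, where the fringes are finest. The sketch below is a back-of-envelope illustration, not the authors' derivation; the pupil size, wavelength, and sampling factor are assumed values.

```python
# Back-of-envelope estimate of LC corrector pixel demand for pure defocus.
# Assumed (not from the paper): 6 mm pupil, 550 nm wavelength,
# ~7 pixels per finest wrapped-phase fringe.

def pixels_for_defocus(diopters, pupil_diameter_m=6e-3,
                       wavelength_m=550e-9, pixels_per_fringe=7):
    r = pupil_diameter_m / 2
    pv_error_m = diopters * r**2 / 2        # peak-to-valley wavefront error (m)
    n_fringes = pv_error_m / wavelength_m   # number of 2*pi phase wraps across pupil
    # the finest fringe (at the pupil edge) spans ~R/(2*n_fringes), so sampling
    # the full diameter needs ~4 * n_fringes * pixels_per_fringe pixels
    return 4 * n_fringes * pixels_per_fringe

print(round(pixels_for_defocus(3.0)))   # 687
```

    With these assumed numbers, a 3 D residual accommodation error calls for roughly 690 pixels across the aperture, the same order as the 667 x 667 array quoted in the abstract.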

  8. The production of multiprotein complexes in insect cells using the baculovirus expression system.

    PubMed

    Abdulrahman, Wassim; Radu, Laura; Garzoni, Frederic; Kolesnikova, Olga; Gupta, Kapil; Osz-Papai, Judit; Berger, Imre; Poterszman, Arnaud

    2015-01-01

    The production of a homogeneous protein sample in sufficient quantities is an essential prerequisite not only for structural investigations but also represents a rate-limiting step for many functional studies. In the cell, a large fraction of eukaryotic proteins exists as large multicomponent assemblies with many subunits, which act in concert to catalyze specific activities. Many of these complexes cannot be obtained from endogenous source material, so recombinant expression and reconstitution are required to overcome this bottleneck. This chapter describes current strategies and protocols for the efficient production of multiprotein complexes in large quantities and of high quality, using the baculovirus/insect cell expression system.

  9. Friction Stir Welding of Large Scale Cryogenic Tanks for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Russell, Carolyn; Ding, R. Jeffrey

    1998-01-01

    The Marshall Space Flight Center (MSFC) has established a facility for the joining of large-scale aluminum cryogenic propellant tanks using the friction stir welding process. Longitudinal welds, approximately five meters in length, have been made by retrofitting an existing vertical fusion weld system, designed to fabricate tank barrel sections ranging from two to ten meters in diameter. The structural design requirements of the tooling, clamping and travel system will be described in this presentation along with process controls and real-time data acquisition developed for this application. The approach to retrofitting other large welding tools at MSFC with the friction stir welding process will also be discussed.

  10. Strategic positioning for nursing excellence in health systems: insights from chief nursing executives.

    PubMed

    Arnold, Lauren; Drenkard, Karen; Ela, Sue; Goedken, Jolene; Hamilton, Connie; Harris, Carla; Holecek, Nancy; White, Maureen

    2006-01-01

    The emergence of health systems as a dominant structure for organizing healthcare has stimulated the development of health system chief nursing executive (CNE) positions. These positions have large spans of control, requiring CNEs to balance a wide range of responsibilities: they are accountable for fiscal management, quality of care, compliance, and contributions to organizational growth. As such, the CNE is required to use principles of distributive justice to guide priority setting and decision making. This review addresses important questions about CNE system integration strategies, strategic priorities, and organizational positioning as CNEs attempt to fulfill their ethical responsibilities to patients and the nurses they serve.

  11. The impact of image storage organization on the effectiveness of PACS.

    PubMed

    Hindel, R

    1990-11-01

    A picture archiving and communication system (PACS) requires efficient handling of large amounts of data. Mass storage systems are cost effective but slow, while very fast systems, such as frame buffers and parallel-transfer disks, are expensive. The image traffic can be divided into inbound traffic generated by diagnostic modalities and outbound traffic into workstations. At the contact points with medical professionals, the responses must be fast. Archiving, on the other hand, can employ slower but less expensive storage systems, provided that the primary activities are not impeded. This article illustrates a segmentation architecture meeting these requirements, based on a clearly defined PACS concept.

  12. Automatic vehicle monitoring systems study. Report of phase O. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A set of planning guidelines is presented to help law enforcement agencies and vehicle fleet operators decide which automatic vehicle monitoring (AVM) system could best meet their performance requirements. Improvements in emergency response times, and the resultant cost benefits obtainable with various operational and planned AVM systems, may be synthesized and simulated by means of special computer programs for model-city parameters applicable to small, medium, and large urban areas. Design characteristics of various AVM systems and their implementation requirements are illustrated, and costs are estimated for the vehicles, the fixed sites, and the base equipment. Vehicle location accuracies for different RF links and polling intervals are analyzed.

  13. Composite sizing and ply orientation for stiffness requirements using a large finite element structural model

    NASA Technical Reports Server (NTRS)

    Radovcich, N. A.; Gentile, D. P.

    1989-01-01

    A NASTRAN bulk-dataset preprocessor was developed to facilitate the integration of filamentary composite laminate properties into composite structural resizing for stiffness requirements. The NASCOMP system generates delta stiffness and delta mass matrices for input to the flutter derivative program. The flutter baseline analysis, derivative calculations, and stiffness and mass matrix updates are controlled by engineer-defined processes under an operating system called CBUS. A multi-layered design-variable grid system permits high-fidelity resizing without excessive computer cost. The NASCOMP system uses ply layup drawings for basic input. The aeroelastic resizing-for-stiffness capability was used during an actual design exercise.

  14. Free-flying teleoperator requirements and conceptual design.

    NASA Technical Reports Server (NTRS)

    Onega, G. T.; Clingman, J. H.

    1973-01-01

    A teleoperator, as defined by NASA, is a remotely controlled cybernetic man-machine system designed to augment and extend man's sensory, manipulative, and cognitive capabilities. Teleoperator systems can fulfill an important function in the Space Shuttle program. They can retrieve automated satellites for refurbishment and reuse. Cargo can be transferred over short or long distances, and orbital operations can be supported. A requirements analysis is discussed, giving attention to the teleoperator spacecraft, docking and stowage systems, displays and controls, propulsion, guidance, navigation, control, the manipulators, the video system, the electrical power, and aspects of communication and data management. Questions of concept definition and evaluation are also examined.

  15. A comparison of database systems for XML-type data.

    PubMed

    Risse, Judith E; Leunissen, Jack A M

    2010-01-01

    In the field of bioinformatics interchangeable data formats based on XML are widely used. XML-type data is also at the core of most web services. With the increasing amount of data stored in XML comes the need for storing and accessing the data. In this paper we analyse the suitability of different database systems for storing and querying large datasets in general and Medline in particular. All reviewed database systems perform well when tested with small to medium sized datasets, however when the full Medline dataset is queried a large variation in query times is observed. There is not one system that is vastly superior to the others in this comparison and, depending on the database size and the query requirements, different systems are most suitable. The best all-round solution is the Oracle 11g database system using the new binary storage option. Alias-i's Lingpipe is a more lightweight, customizable and sufficiently fast solution. It does however require more initial configuration steps. For data with a changing XML structure Sedna and BaseX as native XML database systems or MySQL with an XML-type column are suitable.

  16. Geometric quantification of features in large flow fields.

    PubMed

    Kendall, Wesley; Huang, Jian; Peterka, Tom

    2012-01-01

    Interactive exploration of flow features in large-scale 3D unsteady-flow data is one of the most challenging visualization problems today. To comprehensively explore the complex feature spaces in these datasets, a proposed system employs a scalable framework for investigating a multitude of characteristics from traced field lines. This capability supports the examination of various neighborhood-based geometric attributes in concert with other scalar quantities. Such an analysis wasn't previously possible because of the large computational overhead and I/O requirements. The system integrates visual analytics methods by letting users procedurally and interactively describe and extract high-level flow features. An exploration of various phenomena in a large global ocean-modeling simulation demonstrates the approach's generality and expressiveness as well as its efficacy.

  17. Using SysML for verification and validation planning on the Large Synoptic Survey Telescope (LSST)

    NASA Astrophysics Data System (ADS)

    Selvy, Brian M.; Claver, Charles; Angeli, George

    2014-08-01

    This paper provides an overview of the tool, language, and methodology used for Verification and Validation Planning on the Large Synoptic Survey Telescope (LSST) Project. LSST has implemented a Model Based Systems Engineering (MBSE) approach as a means of defining all systems engineering planning and definition activities that have historically been captured in paper documents. Specifically, LSST has adopted the Systems Modeling Language (SysML) standard and is utilizing a software tool called Enterprise Architect, developed by Sparx Systems. Much of the historical use of SysML has focused on the early phases of the project life cycle. Our approach is to extend the advantages of MBSE into later stages of the construction project. This paper details the methodology employed to use the tool to document the verification planning phases, including the extension of the language to accommodate the project's needs. The process includes defining the Verification Plan for each requirement, which in turn consists of a Verification Requirement, Success Criteria, Verification Method(s), Verification Level, and Verification Owner. Each Verification Method for each Requirement is defined as a Verification Activity and mapped into Verification Events, which are collections of activities that can be executed concurrently in an efficient and complementary way. Verification Event dependency and sequences are modeled using Activity Diagrams. The methodology employed also ties into the Project Management Control System (PMCS), which utilizes Primavera P6 software, mapping each Verification Activity as a step in a planned activity. This approach leads to full traceability from initial Requirement to scheduled, costed, and resource-loaded PMCS task-based activities, ensuring all requirements will be verified.
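    The per-requirement structure described above (Verification Requirement, Success Criteria, Method, Level, Owner, and the grouping of Activities into Events) can be sketched as a plain data model. The rendering below is illustrative Python, not LSST's actual SysML schema or an Enterprise Architect export; all class names, field names, and values are hypothetical.

```python
# Illustrative data model for the verification-planning structure the
# abstract describes; names mirror the abstract's terminology only.
from dataclasses import dataclass, field

@dataclass
class VerificationActivity:
    # one Verification Method applied to one Requirement
    requirement_id: str
    method: str          # e.g. "Test", "Analysis", "Inspection", "Demonstration"
    owner: str

@dataclass
class VerificationPlan:
    # per-requirement plan: Verification Requirement, Success Criteria, Level
    requirement_id: str
    verification_requirement: str
    success_criteria: str
    level: str           # e.g. "Subsystem", "System"
    activities: list = field(default_factory=list)

@dataclass
class VerificationEvent:
    # a collection of activities executed together/concurrently
    name: str
    activities: list = field(default_factory=list)

plan = VerificationPlan(
    requirement_id="REQ-0042",                       # hypothetical ID
    verification_requirement="Verify tracking jitter under wind load",
    success_criteria="RMS image motion < 0.25 arcsec",
    level="System",
)
plan.activities.append(VerificationActivity("REQ-0042", "Test", "Telescope team"))
event = VerificationEvent("Commissioning run 1", plan.activities[:])
print(len(event.activities))  # 1
```

    Traceability then follows by mapping each `VerificationActivity` to a scheduled task in the project management system, as the abstract describes for Primavera P6.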

  18. ms_lims, a simple yet powerful open source laboratory information management system for MS-driven proteomics.

    PubMed

    Helsens, Kenny; Colaert, Niklaas; Barsnes, Harald; Muth, Thilo; Flikka, Kristian; Staes, An; Timmerman, Evy; Wortelkamp, Steffi; Sickmann, Albert; Vandekerckhove, Joël; Gevaert, Kris; Martens, Lennart

    2010-03-01

    MS-based proteomics produces large amounts of mass spectra that require processing, identification and possibly quantification before interpretation can be undertaken. High-throughput studies require automation of these various steps, and management of the data in association with the results obtained. We here present ms_lims (http://genesis.UGent.be/ms_lims), a freely available, open-source system based on a central database to automate data management and processing in MS-driven proteomics analyses.

  19. Space station orbit maintenance

    NASA Technical Reports Server (NTRS)

    Kaplan, D. I.; Jones, R. M.

    1983-01-01

    The orbit maintenance problem is examined for two low-earth-orbiting space station concepts - the large, manned Space Operations Center (SOC) and the smaller, unmanned Science and Applications Space Platform (SASP). Atmospheric drag forces are calculated, and circular orbit altitudes are selected to assure a 90-day decay period in the event of catastrophic propulsion system failure. Several thrusting strategies for orbit maintenance are discussed. Various chemical and electric propulsion systems for orbit maintenance are compared on the basis of propellant resupply requirements, power requirements, Shuttle launch costs, and technology readiness.
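    The 90-day decay criterion above can be illustrated with a standard drag-decay estimate: for a near-circular orbit, da/dt ≈ -sqrt(mu*a) * rho * (Cd*A/m), integrated through an exponential atmosphere until the orbit reaches a minimum safe altitude. The sketch below uses invented atmosphere and ballistic-coefficient numbers purely for illustration; they are not the SOC/SASP values from the paper.

```python
# Rough drag-decay estimate for a circular low Earth orbit.
# Atmosphere model and Cd*A/m below are assumed illustrative values.
import math

MU = 3.986e14    # Earth's gravitational parameter, m^3/s^2
R_E = 6.371e6    # Earth radius, m

def density(alt_m):
    # crude exponential atmosphere anchored at 400 km (assumed values)
    return 3e-12 * math.exp(-(alt_m - 400e3) / 60e3)   # kg/m^3

def decay_days(alt0_m, alt_min_m, cd_a_over_m):
    """Days to decay from alt0 to alt_min; ballistic coeff Cd*A/m in m^2/kg."""
    a = R_E + alt0_m
    t = 0.0
    dt = 3600.0                                         # 1-hour Euler steps
    while a - R_E > alt_min_m:
        rho = density(a - R_E)
        da_dt = -math.sqrt(MU * a) * rho * cd_a_over_m  # circular-orbit decay rate
        a += da_dt * dt
        t += dt
    return t / 86400.0

print(round(decay_days(350e3, 300e3, 0.01)))
```

    With these assumed numbers, a 350 km orbit takes on the order of a hundred days to decay to 300 km; the altitude-selection exercise in the paper amounts to finding the lowest altitude for which this figure still exceeds 90 days.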

  20. Development of a light-weight, wind-turbine-rotor-based data acquisition system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berg, D.E.; Rumsey, M.; Robertson, P.

    1997-12-01

    Wind-energy researchers at Sandia National Laboratories (SNL) and the National Renewable Energy Laboratory (NREL) are developing a new, light-weight, modular system capable of acquiring long-term, continuous time-series data from current-generation small or large, dynamic wind-turbine rotors. Meetings with wind-turbine research personnel at NREL and SNL resulted in a list of the major requirements that the system must meet. Initial attempts to locate a commercial system that could meet all of these requirements were not successful, but some commercially available data acquisition and radio/modem subsystems that met many of the requirements were identified. A time synchronization subsystem and a programmable logic device subsystem to integrate the functions of the data acquisition, the radio/modem, and the time synchronization subsystems and to communicate with the user have been developed at SNL. This paper presents the data system requirements, describes the four major subsystems comprising the system, summarizes the current status of the system, and presents the current plans for near-term development of hardware and software.

  1. Rapid quantification of vesicle concentration for DOPG/DOPC and Cardiolipin/DOPC mixed lipid systems of variable composition.

    PubMed

    Elmer-Dixon, Margaret M; Bowler, Bruce E

    2018-05-19

    A novel approach to quantifying mixed lipid systems is described. Traditional approaches to lipid vesicle quantification are time consuming, require large amounts of material, and are destructive. We extend our recently described method for quantification of pure lipid systems to mixed lipid systems. The method requires only a UV-Vis spectrometer and does not destroy the sample. Mie scattering data from absorbance measurements are used as input to a Matlab program that calculates the total vesicle concentration and the concentration of each lipid in the mixed system. The technique is fast and accurate, which is essential for analytical lipid-binding experiments.
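    The idea of extracting per-lipid concentrations from scattering "absorbance" can be cast as a linear unmixing problem: if each lipid contributes an approximately additive apparent extinction at each wavelength, the concentrations follow from least squares. The sketch below is illustrative only; the coefficient matrix is invented, whereas the paper derives the actual wavelength dependence from Mie theory in its Matlab program.

```python
# Linear-unmixing sketch: recover per-lipid concentrations from apparent
# absorbance at several wavelengths. Coefficients are invented values.
import numpy as np

# rows: wavelengths; columns: apparent extinction per mM of two lipids
# (e.g. DOPG, DOPC) -- assumed numbers for illustration
E = np.array([[0.80, 0.55],
              [0.60, 0.42],
              [0.45, 0.33]])
true_c = np.array([0.30, 0.70])   # mM, used to build a synthetic "measurement"
A = E @ true_c                    # synthetic absorbance readings

# least-squares solve A = E @ c for the concentration vector c
c_fit, *_ = np.linalg.lstsq(E, A, rcond=None)
print(np.round(c_fit, 3))         # recovers the synthetic [0.3, 0.7]
total = c_fit.sum()               # total lipid (vesicle) concentration
```

    With real data the system is overdetermined and noisy, so the least-squares fit averages over wavelengths rather than reproducing the input exactly.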

  2. Principles for system level electrochemistry

    NASA Technical Reports Server (NTRS)

    Thaller, L. H.

    1986-01-01

    The higher power and voltage levels anticipated for future space missions have required a careful review of the techniques currently in use to preclude battery problems related to the dispersion characteristics of the individual cells. Not only are out-of-balance problems accentuated in these larger systems, but thermal management also requires a greater degree of design accuracy. Newer concepts employing active cooling techniques are being developed that permit higher rates of discharge and tighter packing densities for the electrochemical components. This paper puts forward six semi-independent principles relating to battery systems, progressively addressing cell-, battery-, and finally system-related aspects of large electrochemical storage systems.

  3. The Altitude Wind Tunnel (AWT): A unique facility for propulsion system and adverse weather testing

    NASA Technical Reports Server (NTRS)

    Chamberlin, R.

    1985-01-01

    A need has arisen for a new wind tunnel facility with unique capabilities for testing propulsion systems and for conducting research in adverse weather conditions. New propulsion system concepts, new aircraft configurations with an unprecedented degree of propulsion system/aircraft integration, and requirements for aircraft operation in adverse weather dictate the need for a new test facility. Required capabilities include simulation of both altitude pressure and temperature, large size, full subsonic speed range, propulsion system operation, and weather simulation (i.e., icing, heavy rain). A cost effective rehabilitation of the NASA Lewis Research Center's Altitude Wind Tunnel (AWT) will provide a facility with all these capabilities.

  4. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) Comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. 
Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to metrological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation systems}; (3) Coupling large-scale computing and data systems to scientific and engineering instruments (e.g., realtime interaction with experiments through real-time data analysis and interpretation presented to the experimentalist in ways that allow direct interaction with the experiment (instead of just with instrument control); (5) Highly interactive, augmented reality and virtual reality remote collaborations (e.g., Ames / Boeing Remote Help Desk providing field maintenance use of coupled video and NDI to a remote, on-line airframe structures expert who uses this data to index into detailed design databases, and returns 3D internal aircraft geometry to the field); (5) Single computational problems too large for any single system (e.g. the rotocraft reference calculation). Grids also have the potential to provide pools of resources that could be called on in extraordinary / rapid response situations (such as disaster response) because they can provide common interfaces and access mechanisms, standardized management, and uniform user authentication and authorization, for large collections of distributed resources (whether or not they normally function in concert). IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focussed primarily on two types of users: the scientist / design engineer whose primary interest is problem solving (e.g. 
determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user is the tool designer: the computational scientist who converts physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. The analysis of the needs of these two types of users provides a broad set of requirements that gives rise to a general set of required capabilities. The IPG project is intended to address all of these requirements. In some cases the required computing technology exists, and in some cases it must be researched and developed. The project is using available technology to provide a prototype set of capabilities in a persistent distributed computing testbed. Beyond this, there are required capabilities that are not immediately available, and whose development spans the range from near-term engineering development (one to two years) to much longer term R&D (three to six years). Additional information is contained in the original.
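    The location-independent aggregation of dispersed resources described in this record can be illustrated with a minimal broker sketch. The class, field names, and selection policy below are hypothetical illustrations, not IPG interfaces:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    site: str        # geographically dispersed sites
    cpus: int
    available: bool

def aggregate(resources, cpus_needed):
    """Greedy, location-independent selection: pull in the largest
    available resources until the request is met; None if infeasible."""
    chosen, total = [], 0
    for r in sorted(resources, key=lambda r: -r.cpus):
        if r.available:
            chosen.append(r)
            total += r.cpus
            if total >= cpus_needed:
                return chosen
    return None
```

    A "just-in-time system" in the abstract's sense would perform this kind of matching across computing, storage, and instrument resources at the moment a problem is submitted, rather than binding the application to fixed machines.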

  5. Challenges in Developing Models Describing Complex Soil Systems

    NASA Astrophysics Data System (ADS)

    Simunek, J.; Jacques, D.

    2014-12-01

    Quantitative mechanistic models that consider basic physical, mechanical, chemical, and biological processes have the potential to be powerful tools for integrating our understanding of complex soil systems, and the soil science community has often called for models that would include a large number of these diverse processes. However, when attempts have been made to develop such models, the response from the community has not always been overwhelming, especially once it was discovered that the resulting models are highly complex: they require a large number of parameters, not all of which can be easily (or at all) measured or identified and which are often associated with large uncertainties, and they demand from their users a deep knowledge of most or all of the implemented physical, mechanical, chemical, and biological processes. The real, or perceived, complexity of these models then discourages users from applying them even to relatively simple problems for which they would be perfectly adequate. Due to the nonlinear nature and chemical/biological complexity of soil systems, it is also virtually impossible to verify these types of models analytically, raising doubts about their applicability. Code inter-comparison, which is likely the most suitable method to assess code capabilities and model performance, requires the existence of multiple models with similar or overlapping capabilities, which may not always exist. It is thus a challenge not only to develop models describing complex soil systems, but also to persuade the soil science community to use them. As a result, complex quantitative mechanistic models remain an underutilized tool in soil science research. We will demonstrate some of the challenges discussed above using our own efforts in developing quantitative mechanistic models (such as HP1/2) for complex soil systems.
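    As a small illustration of why such models accumulate hard-to-identify parameters, even a single constitutive relation widely used in unsaturated-zone codes, the van Genuchten water retention curve, already carries four fitted parameters per soil horizon. The sketch below is illustrative and not code from HP1/2:

```python
def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Volumetric water content theta(h) for pressure head h (h < 0 in
    unsaturated soil). theta_r/theta_s: residual/saturated contents;
    alpha, n: fitted shape parameters; m = 1 - 1/n (Mualem constraint)."""
    if h >= 0:
        return theta_s          # saturated
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * abs(h)) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * se
```

    A full mechanistic soil model couples dozens of such nonlinear relations (retention, conductivity, sorption, kinetics), each with its own uncertain parameters, which is exactly the complexity burden the abstract describes.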

  6. A Multi-Robot Sense-Act Approach to Lead to a Proper Acting in Environmental Incidents

    PubMed Central

    Conesa-Muñoz, Jesús; Valente, João; del Cerro, Jaime; Barrientos, Antonio; Ribeiro, Angela

    2016-01-01

    Many environmental incidents affect large areas, often in rough terrain constrained by natural obstacles, which makes intervention difficult. New technologies, such as unmanned aerial vehicles, may help address this issue due to their suitability for reaching and easily covering large areas. Thus, unmanned aerial vehicles may be used to inspect the terrain and make a first assessment of the affected areas; however, they currently lack the capability to act. Ground vehicles, on the other hand, carry enough power to perform the intervention but are subject to more mobility constraints. This paper proposes a multi-robot sense-act system composed of aerial and ground vehicles. This combination allows autonomous tasks to be performed in large outdoor areas by integrating both types of platforms in a fully automated manner. Aerial units are used to easily obtain relevant data from the environment, and ground units use this information to carry out interventions more efficiently. This paper describes the platforms and sensors required by this multi-robot sense-act system and proposes a software system to automatically handle the workflow for any generic environmental task. The proposed system has proved suitable for reducing the amount of herbicide applied in agricultural treatments. Although herbicides are highly polluting, they are typically deployed over entire agricultural fields to remove weeds. The amount of herbicide required for treatment is radically reduced, however, when it is accurately applied on patches by the proposed multi-robot system. Thus, the aerial units were employed to scout the crop and build an accurate weed distribution map, which was subsequently used to plan the tasks of the ground units. The whole workflow was executed in a fully autonomous way, without human intervention except where required by Spanish law for safety reasons. PMID:27517934
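    The herbicide saving from patch spraying can be made concrete with a toy grid model: treat only cells the aerial survey flagged as weedy instead of blanketing the field. The grid and figures below are illustrative, not data from the paper:

```python
def herbicide_saving(weed_map):
    """Fraction of herbicide saved when spraying only weed cells
    instead of the whole field (boolean grid, 1 = weeds present)."""
    cells = sum(len(row) for row in weed_map)
    treated = sum(sum(row) for row in weed_map)
    return 1.0 - treated / cells

# Hypothetical weed distribution map produced by the aerial scout:
weed_map = [[0, 0, 1, 0],
            [0, 1, 1, 0],
            [0, 0, 0, 0]]
```

    Here only 3 of 12 cells contain weeds, so targeted treatment uses a quarter of the blanket-spray volume; real savings depend on the actual weed distribution.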

  7. Mass storage technology in networks

    NASA Astrophysics Data System (ADS)

    Ishii, Katsunori; Takeda, Toru; Itao, Kiyoshi; Kaneko, Reizo

    1990-08-01

    Trends and features of mass storage subsystems in networks are surveyed and their key technologies spotlighted. Storage subsystems are becoming increasingly important in new network systems in which communications and data processing are systematically combined. These systems require a new class of high-performance mass-information storage in order to effectively utilize their processing power. The requirements of high transfer rates, high transaction rates, and large storage capacities, coupled with high functionality, fault tolerance, and flexibility in configuration, are major challenges for storage subsystems. Recent progress in optical disk technology has improved the performance of on-line external memories based on optical disk drives, which now compete with mid-range magnetic disks. Optical disks are more effective than magnetic disks for low-traffic, random-access files storing multimedia data that require large capacity, such as in archival use and in information distribution by ROM disks. Finally, image-coded document file servers for local area network use that employ 130 mm rewritable magneto-optical disk subsystems are demonstrated.

  8. Requirements for AMLCDs in U.S. military applications

    NASA Astrophysics Data System (ADS)

    Hopper, Darrel G.; Desjardins, Daniel D.

    1995-06-01

    Flat panel displays are fast becoming a significant source of more defense for less money. Military instruments have begun to use color active matrix liquid crystal displays (AMLCDs). This is the beginning of a significant transition from electromechanical, CRT, dichroic LCD, and electroluminescent display designs to AMLCD designs. With this new technology we have the opportunity to establish common products capable of meeting user requirements for sunlight-readable, color- and grayscale-capable, high-sharpness, high-pixel-count flat panel displays for military applications. The Wright Laboratory is leading the development of recommended best practices, a draft guidance standard, and performance specifications for this new generation of display modules, the flat panel cockpit display generation, based on requirements for U.S. military aircraft and ground combat human-system interfaces. These requirements are similar in many regards to those in both the civil aviation and automotive industries; accordingly, commonality with these civil applications is incorporated where possible, weighed against the requirements for military combat applications. The performance requirements may be achieved by two approaches: militarization of displays made to the lower requirements of a large-volume civil products manufacturer such as Sharp, or integration of displays made to higher requirements by a niche-market commercial vendor, such as Optical Imaging Systems, Litton Systems Limited, ImageQuest Inc., or Planar Advanced Inc. teamed with Xerox PARC and Standish Industries. [Note that the niche-market companies listed are commercial off-the-shelf vendors, albeit for high-requirement, low-volume customers.] 
Given that the performance specifications can be met for a particular military product by either approach, the choice is based on life cycle cost; an analysis based on initial costs alone is not acceptable, as it ignores the fact that military product life cycles and procurements span 20-60 years, compared to about 1.5 years for civil products. Thus far there is no convincing evidence that the large-volume commercial product approach for combat systems will meet the combat performance specification or be cheaper from a life cycle cost perspective. National and economic security considerations require some military/avionic-grade AMLCD production domestically (i.e., in the U.S. and/or Canada). Examples of AMLCD demand and performance requirements in U.S. military systems are provided.
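    The life-cycle-cost argument can be sketched with a toy model: a commercial panel line turns over roughly every 1.5 years, so a 30-year program absorbs many requalification cycles, while a niche military-grade line is assumed to need far fewer. Every number below is hypothetical, chosen only to show how a lower initial cost can lose over the full life cycle:

```python
def life_cycle_cost(initial, annual_sustainment, years, redesigns, redesign_cost):
    """Total ownership cost over a program life (toy model)."""
    return initial + annual_sustainment * years + redesigns * redesign_cost

# COTS militarization: cheap to start, ~19 commercial redesign cycles in 30 years.
cots  = life_cycle_cost(10e6, 0.5e6, 30, redesigns=19, redesign_cost=2e6)
# Niche military-grade line: higher initial cost, assumed 2 redesigns.
niche = life_cycle_cost(25e6, 0.3e6, 30, redesigns=2,  redesign_cost=2e6)
```

    Under these assumed figures the niche approach wins despite its 2.5x higher initial cost, which is the kind of outcome a thin initial-cost-only analysis would miss.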

  9. Development of Traction Drive Motors for the Toyota Hybrid System

    NASA Astrophysics Data System (ADS)

    Kamiya, Munehiro

    Toyota Motor Corporation developed in 2005 a new hybrid system for a large SUV. This system included the new development of a high-speed traction drive motor achieving a significant increase in power weight ratio. This paper provides an overview of the hybrid system, discusses the characteristics required of a traction drive motor, and presents the technologies employed in the developed motor.

  10. Building Safer Systems With SpecTRM

    NASA Technical Reports Server (NTRS)

    2003-01-01

    System safety, an integral component in software development, often poses a challenge to engineers designing computer-based systems. While the relaxed constraints on software design allow for increased power and flexibility, this flexibility introduces more possibilities for error. As a result, system engineers must identify the design constraints necessary to maintain safety and ensure that the system and software design enforces them. Safeware Engineering Corporation, of Seattle, Washington, provides the information, tools, and techniques to accomplish this task with its Specification Tools and Requirements Methodology (SpecTRM). NASA assisted in developing this engineering toolset by awarding the company several Small Business Innovation Research (SBIR) contracts with Ames Research Center and Langley Research Center. The technology benefits NASA through its applications for Space Station rendezvous and docking. SpecTRM aids system and software engineers in developing specifications for large, complex safety-critical systems. The product enables engineers to find errors early in development so that they can be fixed with the lowest cost and impact on the system design. SpecTRM traces both the requirements and design rationale (including safety constraints) throughout the system design and documentation, allowing engineers to build required system properties into the design from the beginning, rather than emphasizing assessment at the end of the development process when changes are limited and costly.

  11. Position measurement of the direct drive motor of Large Aperture Telescope

    NASA Astrophysics Data System (ADS)

    Li, Ying; Wang, Daxing

    2010-07-01

    Along with the development of space science and astronomy, large-aperture and very-large-aperture telescopes will certainly become the trend. Direct drive technology, with a unified electrical and magnetic structure design, is one method of achieving the precise drive that a large aperture telescope requires. A direct-drive precision rotary table with a diameter of 2.5 meters, researched and produced by us, is a typical mechatronic design. This paper mainly introduces the position measurement and control system of the direct drive motor. In this motor design, the position measurement system must provide high resolution, precisely align and measure the position of the rotor shaft, and convert the position information into the commutation information corresponding to the motor's pole number. The system uses a high-precision metal-band encoder together with an absolute encoder; their outputs are processed in software by a 32-bit RISC CPU to obtain a high-resolution composite encoder. Relevant laboratory test results are given at the end, indicating that the position measurement can be applied to a large aperture telescope control system. This project is supported by the Chinese National Natural Science Foundation (10833004).
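    The conversion this record describes, mapping measured rotor position to commutation information for a motor with a given pole count, follows a standard relation: a motor with P pole pairs completes P electrical cycles per mechanical revolution. The function and parameter names below are illustrative, not from the paper:

```python
import math

def electrical_angle(counts, counts_per_rev, pole_pairs, offset_counts=0):
    """Convert absolute-encoder counts to the electrical angle (radians)
    used for commutation. offset_counts aligns encoder zero with the
    rotor's magnetic zero."""
    mech = 2 * math.pi * ((counts - offset_counts) % counts_per_rev) / counts_per_rev
    return (mech * pole_pairs) % (2 * math.pi)
```

    For example, with a 4096-count encoder on a 4-pole-pair motor, a quarter mechanical turn corresponds to one full electrical cycle, which is why the commutation logic needs the pole number as well as the raw position.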

  12. An intermediate level of abstraction for computational systems chemistry.

    PubMed

    Andersen, Jakob L; Flamm, Christoph; Merkle, Daniel; Stadler, Peter F

    2017-12-28

    Computational techniques are required for narrowing down the vast space of possibilities to plausible prebiotic scenarios, because precise information on the molecular composition, the dominant reaction chemistry, and the conditions of that era is scarce. The exploration of large chemical reaction networks is a central aspect of this endeavour. While quantum chemical methods can accurately predict the structures and reactivities of small molecules, they are not efficient enough to cope with large-scale reaction systems. The formalization of chemical reactions as graph grammars provides a generative system, well grounded in category theory, at the right level of abstraction for the analysis of large and complex reaction networks. An extension of the basic formalism into the realm of integer hyperflows allows for the identification of complex reaction patterns, such as autocatalysis, in large reaction networks using optimization techniques. This article is part of the themed issue 'Reconceptualizing the origins of life'. © 2017 The Author(s).
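    The integer-hyperflow idea can be illustrated on a toy network: a flow assigns an integer multiplicity to each reaction, and the resulting net species changes expose patterns such as autocatalysis. This is a simplified sketch of the bookkeeping, not the authors' graph-grammar formalism:

```python
def net_change(reactions, flow):
    """Net production of each species under integer reaction multiplicities.
    reactions: list of (consumed, produced) stoichiometry dicts;
    flow: one non-negative integer per reaction."""
    net = {}
    for (cons, prod), f in zip(reactions, flow):
        for s, k in cons.items():
            net[s] = net.get(s, 0) - k * f
        for s, k in prod.items():
            net[s] = net.get(s, 0) + k * f
    return net

# Toy autocatalytic motif: B catalyses its own formation from food A.
reactions = [({"A": 1, "B": 1}, {"B": 2})]
net = net_change(reactions, [3])
# B is both required and net produced while food A is consumed --
# the signature an integer-hyperflow optimization would search for.
```

    In the full formalism such flows are found by optimizing over the reaction hypergraph subject to these balance constraints, rather than being enumerated by hand.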

  13. A comparison between IMSC, PI and MIMSC methods in controlling the vibration of flexible systems

    NASA Technical Reports Server (NTRS)

    Baz, A.; Poh, S.

    1987-01-01

    A comparative study is presented of three active control algorithms that have proven successful in controlling the vibrations of large flexible systems: the Independent Modal Space Control (IMSC), the Pseudo-Inverse (PI), and the Modified Independent Modal Space Control (MIMSC) methods. Emphasis is placed on demonstrating the effectiveness of the MIMSC method in controlling the vibration of large systems with a small number of actuators by using an efficient time-sharing strategy. Such a strategy favors MIMSC over the IMSC method, which requires as many actuators as controlled modes, and also over the PI method, which attempts to control a large number of modes with a smaller number of actuators through an inexact statistical realization of a modal controller. Numerical examples are presented to illustrate the main features of the three algorithms and the merits of the MIMSC method.
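    The allocation step underlying the PI method, mapping desired modal forces for many modes onto fewer physical actuators via a pseudo-inverse, can be sketched as follows. The matrix sizes and random data are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_modes, n_act = 6, 3                        # more modes than actuators
B = rng.standard_normal((n_modes, n_act))    # modal participation matrix

f_modal = rng.standard_normal(n_modes)       # desired modal control forces
u = np.linalg.pinv(B) @ f_modal              # least-squares actuator commands
f_real = B @ u                               # modal forces actually achieved

# With fewer actuators than modes the realization is only approximate
# (least-squares), which is the "inexact realization" the abstract notes.
residual = np.linalg.norm(f_real - f_modal)
```

    IMSC avoids this residual by using one actuator per controlled mode, and MIMSC instead time-shares a few actuators over the modes with the largest modal energy.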

  14. Universal computer test stand (recommended computer test requirements). [for space shuttle computer evaluation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Techniques are considered for characterizing aerospace computers, with the space shuttle application as the end usage. The system-level digital problems which have been encountered and documented are surveyed. From the large cross section of tests, an optimum set is recommended that has a high probability of discovering documented system-level digital problems within laboratory environments. A baseline hardware/software system is defined which is required as a laboratory tool to test aerospace computers. Hardware and software baselines and the additions necessary to interface the UTE to aerospace computers for test purposes are outlined.

  15. High-efficiency high-reliability optical components for a large, high-average-power visible laser system

    NASA Astrophysics Data System (ADS)

    Taylor, John R.; Stolz, Christopher J.

    1993-08-01

    Laser system performance and reliability depend on the related performance and reliability of the optical components which define the cavity and transport subsystems. High average power and long transport lengths impose specific requirements on component performance. The complexity of the manufacturing process for optical components requires a high degree of process control and verification. Qualification has proven effective in ensuring confidence in the procurement process for these optical components. Issues related to component reliability have been studied and provide useful information to better understand the long term performance and reliability of the laser system.

  16. High-efficiency high-reliability optical components for a large, high-average-power visible laser system

    NASA Astrophysics Data System (ADS)

    Taylor, J. R.; Stolz, C. J.

    1992-12-01

    Laser system performance and reliability depend on the related performance and reliability of the optical components which define the cavity and transport subsystems. High average power and long transport lengths impose specific requirements on component performance. The complexity of the manufacturing process for optical components requires a high degree of process control and verification. Qualification has proven effective in ensuring confidence in the procurement process for these optical components. Issues related to component reliability have been studied and provide useful information to better understand the long term performance and reliability of the laser system.

  17. Trace Gas Analyzer (TGA) program

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The design, fabrication, and test of a breadboard trace gas analyzer (TGA) is documented. The TGA is a gas chromatograph/mass spectrometer system. The gas chromatograph subsystem employs a recirculating hydrogen carrier gas. The recirculation feature minimizes the requirement for transport and storage of large volumes of carrier gas during a mission. The silver-palladium hydrogen separator which permits the removal of the carrier gas and its reuse also decreases vacuum requirements for the mass spectrometer since the mass spectrometer vacuum system need handle only the very low sample pressure, not sample plus carrier. System performance was evaluated with a representative group of compounds.

  18. Determination of $sup 241$Am in soil using an automated nuclear radiation measurement laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engstrom, D.E.; White, M.G.; Dunaway, P.B.

    The recent completion of REECo's Automated Laboratory and associated software systems has provided a significant increase in capability while reducing manpower requirements. The system is designed to perform gamma spectrum analyses on the large numbers of samples required by the current Nevada Applied Ecology Group (NAEG) and Plutonium Distribution Inventory Program (PDIP) soil sampling programs while maintaining sufficient sensitivities as defined by earlier investigations of the same type. The hardware and systems are generally described in this paper, with emphasis being placed on spectrum reduction and the calibration procedures used for soil samples. (auth)

  19. Processes to improve energy efficiency during pumping and aeration of recirculating water in circular tank systems

    USDA-ARS?s Scientific Manuscript database

    Conventional gas transfer technologies for aquaculture systems occupy a large amount of space, require considerable capital investment, and can contribute to high electricity demand. In addition, diffused aeration in a circular tank can interfere with the hydrodynamics of water rotation and the spee...

  20. Teaching High-Accuracy Global Positioning System to Undergraduates Using Online Processing Services

    ERIC Educational Resources Information Center

    Wang, Guoquan

    2013-01-01

    High-accuracy Global Positioning System (GPS) has become an important geoscientific tool used to measure ground motions associated with plate movements, glacial movements, volcanoes, active faults, landslides, subsidence, slow earthquake events, as well as large earthquakes. Complex calculations are required in order to achieve high-precision…

  1. APPLYING OPERATIONAL ANALYSIS TO URBAN EDUCATIONAL SYSTEMS, A WORKING PAPER.

    ERIC Educational Resources Information Center

    SISSON, ROGER L.

    OPERATIONS RESEARCH CONCEPTS ARE POTENTIALLY USEFUL FOR STUDY OF SUCH LARGE URBAN SCHOOL DISTRICT PROBLEMS AS INFORMATION FLOW, PHYSICAL STRUCTURE OF THE DISTRICT, ADMINISTRATIVE DECISION MAKING, BOARD POLICY FUNCTIONS, AND THE BUDGET STRUCTURE. OPERATIONAL ANALYSIS REQUIRES (1) IDENTIFICATION OF THE SYSTEM UNDER STUDY, (2) IDENTIFICATION OF…

  2. 40 CFR 141.87 - Monitoring requirements for water quality parameters.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... (c) Monitoring after installation of corrosion control. Any large system which installs optimal corrosion control treatment pursuant to § 141.81(d)(4) shall measure the water quality parameters at the...)(i). Any small or medium-size system which installs optimal corrosion control treatment shall conduct...

  3. 40 CFR 141.87 - Monitoring requirements for water quality parameters.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... (c) Monitoring after installation of corrosion control. Any large system which installs optimal corrosion control treatment pursuant to § 141.81(d)(4) shall measure the water quality parameters at the...)(i). Any small or medium-size system which installs optimal corrosion control treatment shall conduct...

  4. 40 CFR 141.87 - Monitoring requirements for water quality parameters.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... (c) Monitoring after installation of corrosion control. Any large system which installs optimal corrosion control treatment pursuant to § 141.81(d)(4) shall measure the water quality parameters at the...)(i). Any small or medium-size system which installs optimal corrosion control treatment shall conduct...

  5. 40 CFR 141.87 - Monitoring requirements for water quality parameters.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... (c) Monitoring after installation of corrosion control. Any large system which installs optimal corrosion control treatment pursuant to § 141.81(d)(4) shall measure the water quality parameters at the...)(i). Any small or medium-size system which installs optimal corrosion control treatment shall conduct...

  6. 40 CFR 141.87 - Monitoring requirements for water quality parameters.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... (c) Monitoring after installation of corrosion control. Any large system which installs optimal corrosion control treatment pursuant to § 141.81(d)(4) shall measure the water quality parameters at the...)(i). Any small or medium-size system which installs optimal corrosion control treatment shall conduct...

  7. Risk-Based Neuro-Grid Architecture for Multimodal Biometrics

    NASA Astrophysics Data System (ADS)

    Venkataraman, Sitalakshmi; Kulkarni, Siddhivinayak

    Recent research indicates that multimodal biometrics is the way forward for a highly reliable adoption of biometric identification systems in various applications, such as banks, businesses, government and even home environments. However, such systems would require large distributed datasets with multiple computational realms spanning organisational boundaries and individual privacies.

  8. Panoramic, large-screen, 3-D flight display system design

    NASA Technical Reports Server (NTRS)

    Franklin, Henry; Larson, Brent; Johnson, Michael; Droessler, Justin; Reinhart, William F.

    1995-01-01

    The report documents and summarizes the results of the required evaluations specified in the SOW and the design specifications for the selected display system hardware. Also included are the proposed development plan and schedule as well as the estimated rough order of magnitude (ROM) cost to design, fabricate, and demonstrate a flyable prototype research flight display system. The thrust of the effort was development of a complete understanding of the user/system requirements for a panoramic, collimated, 3-D flyable avionic display system and the translation of the requirements into an acceptable system design for fabrication and demonstration of a prototype display in the early 1997 time frame. Eleven display system design concepts were presented to NASA LaRC during the program, one of which was down-selected to a preferred display system concept. A set of preliminary display requirements was formulated. The state of the art in image source technology, 3-D methods, collimation methods, and interaction methods for a panoramic, 3-D flight display system were reviewed in depth and evaluated. Display technology improvements and risk reductions associated with maturity of the technologies for the preferred display system design concept were identified.

  9. Micrometeoroid and Lunar Secondary Ejecta Flux Measurements: Comparison of Three Acoustic Systems

    NASA Technical Reports Server (NTRS)

    Corsaro, R. D.; Giovane, F.; Liou, Jer-Chyi; Burtchell, M.; Pisacane, V.; Lagakos, N.; Williams, E.; Stansbery, E.

    2010-01-01

    This report examines the inherent capability of three large-area acoustic sensor systems and their applicability to micrometeoroid (MM) and lunar secondary ejecta (SE) detection and characterization for future lunar exploration activities. Discussion is limited to instruments that can be fabricated and deployed with low resource requirements. Previously deployed impact detection probes typically have instrumented capture areas of less than 0.2 square meters. Since the particle flux decreases rapidly with increased particle size, such small-area sensors rarely encounter particles in the size range above 50 microns, and even their sampling of the population above 10 microns is typically limited. Characterizing the sparse dust population in the size range above 50 microns requires a very large-area capture instrument. However, it is also important that such an instrument simultaneously measure the population of smaller particles, so as to provide a complete instantaneous snapshot of the population. For lunar or planetary surface studies, the system constraints are significant. The instrument must be as large as possible to sample the population of the largest MM. This is needed to reliably assess particle impact risks and to develop cost-effective shielding designs for habitats, astronauts, and critical instruments. The instrument should also have very high sensitivity to measure the flux of small and slow SE particles, as the SE environment is currently poorly characterized and poses a contamination risk to machinery and personnel involved in exploration. Deployment also requires that the instrument add very little additional mass to the spacecraft. Three acoustic systems are being explored for this application.

  10. Requirements for Pseudomonas aeruginosa Type I-F CRISPR-Cas Adaptation Determined Using a Biofilm Enrichment Assay.

    PubMed

    Heussler, Gary E; Miller, Jon L; Price, Courtney E; Collins, Alan J; O'Toole, George A

    2016-11-15

    CRISPR (clustered regularly interspaced short palindromic repeat)-Cas (CRISPR-associated protein) systems are diverse and found in many archaea and bacteria. These systems have mainly been characterized as adaptive immune systems able to protect against invading mobile genetic elements, including viruses. The first step in this protection is acquisition of spacer sequences from the invader DNA and incorporation of those sequences into the CRISPR array, termed CRISPR adaptation. Progress in understanding the mechanisms and requirements of CRISPR adaptation has largely been accomplished using overexpression of cas genes or plasmid loss assays; little work has focused on endogenous CRISPR-acquired immunity from viral predation. Here, we developed a new biofilm-based assay system to enrich for Pseudomonas aeruginosa strains with new spacer acquisition. We used this assay to demonstrate that P. aeruginosa rapidly acquires spacers protective against DMS3vir, an engineered lytic variant of the Mu-like bacteriophage DMS3, through primed CRISPR adaptation from spacers present in the native CRISPR2 array. We found that for the P. aeruginosa type I-F system, the cas1 gene is required for CRISPR adaptation, recG contributes to (but is not required for) primed CRISPR adaptation, recD is dispensable for primed CRISPR adaptation, and finally, the ability of a putative priming spacer to prime can vary considerably depending on the specific sequences of the spacer. Our understanding of CRISPR adaptation has expanded largely through experiments in type I CRISPR systems using plasmid loss assays, mutants of Escherichia coli, or cas1-cas2 overexpression systems, but there has been little focus on studying the adaptation of endogenous systems protecting against a lytic bacteriophage. Here we describe a biofilm system that allows P. aeruginosa to rapidly gain spacers protective against a lytic bacteriophage. 
This approach has allowed us to probe the requirements for CRISPR adaptation in the endogenous type I-F system of P. aeruginosa. Our data suggest that CRISPR-acquired immunity in a biofilm may be one reason that many P. aeruginosa strains maintain a CRISPR-Cas system. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  11. Solar array flight dynamic experiment

    NASA Technical Reports Server (NTRS)

    Schock, R. W.

    1986-01-01

    The purpose of the Solar Array Flight Dynamic Experiment (SAFDE) is to demonstrate the feasibility of on-orbit measurement and ground processing of large space structures dynamic characteristics. Test definition or verification provides the dynamic characteristic accuracy required for control systems use. An illumination/measurement system was developed to fly on Space Shuttle flight STS-41D. The system was designed to dynamically evaluate a large solar array called the Solar Array Flight Experiment (SAFE) that had been scheduled for this flight. The SAFDE system consisted of a set of laser diode illuminators, retroreflective targets, an intelligent star tracker receiver and the associated equipment to power, condition, and record the results. In six tests on STS-41D, data was successfully acquired from 18 retroreflector targets and ground processed, post flight, to define the solar array's dynamic characteristic. The flight experiment proved the viability of on-orbit test definition of large space structures dynamic characteristics. Future large space structures controllability should be greatly enhanced by this capability.

  12. Solar array flight dynamic experiment

    NASA Technical Reports Server (NTRS)

    Schock, Richard W.

    1986-01-01

The purpose of the Solar Array Flight Dynamic Experiment (SAFDE) is to demonstrate the feasibility of on-orbit measurement and ground processing of large space structures' dynamic characteristics. Test definition or verification provides the dynamic characteristic accuracy required for control systems use. An illumination/measurement system was developed to fly on Space Shuttle flight STS-41D. The system was designed to dynamically evaluate a large solar array called the Solar Array Flight Experiment (SAFE) that had been scheduled for this flight. The SAFDE system consisted of a set of laser diode illuminators, retroreflective targets, an intelligent star tracker receiver and the associated equipment to power, condition, and record the results. In six tests on STS-41D, data were successfully acquired from 18 retroreflector targets and ground processed, post flight, to define the solar array's dynamic characteristics. The flight experiment proved the viability of on-orbit test definition of large space structures' dynamic characteristics. The controllability of future large space structures should be greatly enhanced by this capability.

  13. Solar array flight dynamic experiment

    NASA Technical Reports Server (NTRS)

    Schock, Richard W.

    1987-01-01

The purpose of the Solar Array Flight Dynamic Experiment (SAFDE) is to demonstrate the feasibility of on-orbit measurement and ground processing of large space structures' dynamic characteristics. Test definition or verification provides the dynamic characteristic accuracy required for control systems use. An illumination/measurement system was developed to fly on space shuttle flight STS-41D. The system was designed to dynamically evaluate a large solar array called the Solar Array Flight Experiment (SAFE) that had been scheduled for this flight. The SAFDE system consisted of a set of laser diode illuminators, retroreflective targets, an intelligent star tracker receiver and the associated equipment to power, condition, and record the results. In six tests on STS-41D, data were successfully acquired from 18 retroreflector targets and ground processed, post flight, to define the solar array's dynamic characteristics. The flight experiment proved the viability of on-orbit test definition of large space structures' dynamic characteristics. The controllability of future large space structures should be greatly enhanced by this capability.

  14. Large-Area Chemical and Biological Decontamination Using a High Energy Arc Lamp (HEAL) System.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duty, Chad E; Smith, Rob R; Vass, Arpad Alexander

    2008-01-01

Methods for quickly decontaminating large areas exposed to chemical and biological (CB) warfare agents can present significant logistical, manpower, and waste management challenges. Oak Ridge National Laboratory (ORNL) is pursuing an alternate method to decompose CB agents without the use of toxic chemicals or other potentially harmful substances. This process uses a high energy arc lamp (HEAL) system to photochemically decompose CB agents over large areas (12 m²). Preliminary tests indicate that more than 5 decades (99.999%) of an Anthrax spore simulant (Bacillus globigii) were killed in less than 7 seconds of exposure to the HEAL system. When combined with a catalyst material (TiO2), the HEAL system was also effective against a chemical agent simulant, diisopropyl methyl phosphonate (DIMP). These results demonstrate the feasibility of a rapid, large-area chemical and biological decontamination method that does not require toxic or corrosive reagents or generate hazardous wastes.
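The "5 decades (99.999%)" figure in this record is a log10 reduction factor; the conversion between decades and percent kill can be sketched as follows (a minimal illustration, with hypothetical function names not taken from the record):

```python
import math

def log_reduction(initial: float, surviving: float) -> float:
    """Number of 'decades' (log10 reduction) achieved by a decontamination step."""
    return math.log10(initial / surviving)

def percent_killed(decades: float) -> float:
    """Percentage of the initial population killed for a given log10 reduction."""
    return 100.0 * (1.0 - 10.0 ** (-decades))

# A 5-decade reduction leaves 1 in 100,000 organisms alive, i.e. a 99.999% kill.
print(percent_killed(5.0))
# Conversely, reducing 10^6 spores to 10 survivors is a 5-decade reduction.
print(log_reduction(1e6, 10.0))
```

This matches the record's pairing of "5 decades" with "99.999%".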

  15. Initial Study of an Effective Fast-Time Simulation Platform for Unmanned Aircraft System Traffic Management

    NASA Technical Reports Server (NTRS)

    Xue, Min; Rios, Joseph

    2017-01-01

Small Unmanned Aerial Vehicles (sUAVs), typically 55 lbs and below, are envisioned to play a major role in surveilling critical assets, collecting important information, and delivering goods. Large-scale small UAV operations are expected to happen in low altitude airspace in the near future. Many static and dynamic constraints exist in low altitude airspace because of manned aircraft or helicopter activities, various wind conditions, restricted airspace, terrain and man-made buildings, and conflict-avoidance among sUAVs. High sensitivity and high maneuverability are unique characteristics of sUAVs that bring challenges to effective system evaluations and mandate a simulation platform different from existing simulations that were built for manned air traffic systems and large fixed-wing unmanned aircraft. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative focuses on enabling safe and efficient sUAV operations in the future. In order to help define requirements and policies for a safe and efficient UTM system to accommodate a large number of sUAV operations, it is necessary to develop a fast-time simulation platform that can effectively evaluate requirements, policies, and concepts in a close-to-reality environment. This work analyzed the impacts of some key factors, including the aforementioned sUAV characteristics, and demonstrated the importance of these factors in a successful UTM fast-time simulation platform.

  16. Robopedia: Leveraging Sensorpedia for Web-Enabled Robot Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Resseguie, David R

There is a growing interest in building Internet-scale sensor networks that integrate sensors from around the world into a single unified system. In contrast, robotics application development has primarily focused on building specialized systems. These specialized systems take scalability and reliability into consideration, but generally neglect exploring the key components required to build a large scale system. Integrating robotic applications with Internet-scale sensor networks will unify specialized robotics applications and provide answers to large scale implementation concerns. We focus on utilizing Internet-scale sensor network technology to construct a framework for unifying robotic systems. Our framework web-enables a surveillance robot's sensor observations and provides a web interface to the robot's actuators. This lets robots seamlessly integrate into web applications. In addition, the framework eliminates most prerequisite robotics knowledge, allowing for the creation of general web-based robotics applications. The framework also provides mechanisms to create applications that can interface with any robot. Frameworks such as this one are key to solving large scale mobile robotics implementation problems. We provide an overview of previous Internet-scale sensor networks, Sensorpedia (an ad-hoc Internet-scale sensor network), our framework for integrating robots with Sensorpedia, and two applications which illustrate our framework's ability to support general web-based robotic control, and offer experimental results that illustrate our framework's scalability, feasibility, and resource requirements.

  17. Initial Study of An Effective Fast-Time Simulation Platform for Unmanned Aircraft System Traffic Management

    NASA Technical Reports Server (NTRS)

    Xue, Min; Rios, Joseph

    2017-01-01

Small Unmanned Aerial Vehicles (sUAVs), typically 55 lbs and below, are envisioned to play a major role in surveilling critical assets, collecting important information, and delivering goods. Large-scale small UAV operations are expected to happen in low altitude airspace in the near future. Many static and dynamic constraints exist in low altitude airspace because of manned aircraft or helicopter activities, various wind conditions, restricted airspace, terrain and man-made buildings, and conflict-avoidance among sUAVs. High sensitivity and high maneuverability are unique characteristics of sUAVs that bring challenges to effective system evaluations and mandate a simulation platform different from existing simulations that were built for manned air traffic systems and large fixed-wing unmanned aircraft. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative focuses on enabling safe and efficient sUAV operations in the future. In order to help define requirements and policies for a safe and efficient UTM system to accommodate a large number of sUAV operations, it is necessary to develop a fast-time simulation platform that can effectively evaluate requirements, policies, and concepts in a close-to-reality environment. This work analyzed the impacts of some key factors, including the aforementioned sUAV characteristics, and demonstrated the importance of these factors in a successful UTM fast-time simulation platform.

  18. ATLAS TDAQ System Administration: Master of Puppets

    NASA Astrophysics Data System (ADS)

    Ballestrero, S.; Brasolin, F.; Fazio, D.; Gament, C.; Lee, C. J.; Scannicchio, D. A.; Twomey, M. S.

    2017-10-01

Within the ATLAS detector, the Trigger and Data Acquisition system is responsible for the online processing of data streamed from the detector during collisions at the Large Hadron Collider at CERN. The online farm is comprised of ∼4000 servers processing the data read out from ∼100 million detector channels through multiple trigger levels. The configuration of these servers is not an easy task, especially since the detector itself is made up of multiple different sub-detectors, each with their own particular requirements. The previous method of configuring these servers, using Quattor and a hierarchical script system, was cumbersome and restrictive. A better, unified system was therefore required to simplify the tasks of the TDAQ Systems Administrators, for both the local and net-booted systems, and to fulfil the requirements of TDAQ, Detector Control Systems and the sub-detector groups. Various configuration management systems were evaluated; in the end, Puppet was chosen, the first such implementation at CERN.

  19. Importance sampling large deviations in nonequilibrium steady states. I.

    PubMed

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T

    2018-03-28

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
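As a toy illustration of the importance-sampling idea this record builds on (a sketch of exponential tilting for a rare Gaussian tail event, not the trajectory-space algorithms evaluated in the paper; all names are illustrative), sampling from a shifted distribution and reweighting by the likelihood ratio turns an exponentially rare event into a typical one:

```python
import math
import random

def tilted_tail_estimate(a: float, n: int = 100_000, seed: int = 0) -> float:
    """Estimate P(X > a) for X ~ N(0,1) by sampling from the tilted
    proposal N(a,1) and reweighting each sample x with the likelihood
    ratio p(x)/q(x) = exp(a^2/2 - a*x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(a, 1.0)            # biased proposal centered on the rare region
        if x > a:                         # indicator of the rare event
            total += math.exp(a * a / 2 - a * x)  # importance weight
    return total / n

a = 4.0
est = tilted_tail_estimate(a)
exact = 0.5 * math.erfc(a / math.sqrt(2))  # exact Gaussian tail probability
print(est, exact)
```

Direct sampling of N(0,1) would see essentially no events beyond a = 4 at this sample size; the tilted estimator concentrates samples where the event occurs, which is the same variance-reduction principle behind the guiding functions discussed in the abstract.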

  20. Importance sampling large deviations in nonequilibrium steady states. I

    NASA Astrophysics Data System (ADS)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.

    2018-03-01

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.

  1. Design, analysis, and control of a large transport aircraft utilizing selective engine thrust as a backup system for the primary flight control. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gerren, Donna S.

    1995-01-01

A study has been conducted to determine the capability to control a very large transport airplane with engine thrust. This study consisted of the design of an 800-passenger airplane with a range of 5000 nautical miles, design and evaluation of a flight control system, and design and piloted simulation evaluation of a thrust-only backup flight control system. Location of the four wing-mounted engines was varied to optimize the propulsive control capability, and the time constant of the engine response was studied. The goal was to provide level 1 flying qualities. The engine location and engine time constant did not have a large effect on the control capability. The airplane design did meet level 1 flying qualities based on frequencies, damping ratios, and time constants in the longitudinal and lateral-directional modes. Project pilots consistently rated the flying qualities as either level 1 or level 2 based on Cooper-Harper ratings. However, because of the limited control forces and moments, the airplane design fell short of meeting the time required to achieve a 30 deg bank and the time required to respond to a control input.

  2. Space Solar Power Multi-body Dynamics and Controls, Concepts for the Integrated Symmetrical Concentrator Configuration

    NASA Technical Reports Server (NTRS)

    Glaese, John R.; McDonald, Emmett J.

    2000-01-01

    Orbiting space solar power systems are currently being investigated for possible flight in the time frame of 2015-2020 and later. Such space solar power (SSP) satellites are required to be extremely large in order to make practical the process of collection, conversion to microwave radiation, and reconversion to electrical power at earth stations or at remote locations in space. These large structures are expected to be very flexible presenting unique problems associated with their dynamics and control. The purpose of this project is to apply the expanded TREETOPS multi-body dynamics analysis computer simulation program (with expanded capabilities developed in the previous activity) to investigate the control problems associated with the integrated symmetrical concentrator (ISC) conceptual SSP system. SSP satellites are, as noted, large orbital systems having many bodies (perhaps hundreds) with flexible arrays operating in an orbiting environment where the non-uniform gravitational forces may be the major load producers on the structure so that a high fidelity gravity model is required. The current activity arises from our NRA8-23 SERT proposal. Funding, as a supplemental selection, has been provided by NASA with reduced scope from that originally proposed.

  3. Data storage and retrieval system

    NASA Technical Reports Server (NTRS)

    Nakamoto, Glen

    1991-01-01

The Data Storage and Retrieval System (DSRS) consists of off-the-shelf system components integrated as a file server supporting very large files. These files are on the order of one gigabyte of data per file, although smaller files on the order of one megabyte can be accommodated as well. For instance, one gigabyte of data occupies approximately six 9-track tape reels (recorded at 6250 bpi). Due to this large volume of media, it was desirable to shrink the size of the proposed media to a single portable cassette. In addition to large size, a key requirement was that the data needs to be transferred to a (VME based) workstation at very high data rates. One gigabyte (GB) of data needed to be transferred from archivable media on a file server to a workstation in less than 5 minutes. Equivalent size, on-line data needed to be transferred in less than 3 minutes. These requirements imply effective transfer rates on the order of four to eight megabytes per second (4-8 MB/s). The DSRS also needed to be able to send and receive data from a variety of other sources accessible from an Ethernet local area network.

  4. Data storage and retrieval system

    NASA Technical Reports Server (NTRS)

    Nakamoto, Glen

    1992-01-01

The Data Storage and Retrieval System (DSRS) consists of off-the-shelf system components integrated as a file server supporting very large files. These files are on the order of one gigabyte of data per file, although smaller files on the order of one megabyte can be accommodated as well. For instance, one gigabyte of data occupies approximately six 9-track tape reels (recorded at 6250 bpi). Due to this large volume of media, it was desirable to 'shrink' the size of the proposed media to a single portable cassette. In addition to large size, a key requirement was that the data needs to be transferred to a (VME based) workstation at very high data rates. One gigabyte (GB) of data needed to be transferred from archivable media on a file server to a workstation in less than 5 minutes. Equivalent size, on-line data needed to be transferred in less than 3 minutes. These requirements imply effective transfer rates on the order of four to eight megabytes per second (4-8 MB/s). The DSRS also needed to be able to send and receive data from a variety of other sources accessible from an Ethernet local area network.
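The transfer-time requirements quoted in these two records imply minimum sustained rates that can be checked with simple arithmetic (a sketch only; the function name is illustrative, and binary gigabytes/megabytes are assumed):

```python
GIB = 2**30  # bytes in one binary gigabyte
MIB = 2**20  # bytes in one binary megabyte

def min_rate_mib_s(gigabytes: float, minutes: float) -> float:
    """Minimum sustained rate (MiB/s) needed to move the data in the allotted time."""
    return gigabytes * GIB / (minutes * 60.0) / MIB

archive_rate = min_rate_mib_s(1, 5)  # 1 GB from archivable media in 5 minutes
online_rate = min_rate_mib_s(1, 3)   # 1 GB of on-line data in 3 minutes
print(f"archive: {archive_rate:.1f} MiB/s, on-line: {online_rate:.1f} MiB/s")
```

These floors (roughly 3.4 and 5.7 MiB/s) sit just below the 4-8 MB/s effective rates the abstract quotes, which is consistent with the quoted rates including headroom for protocol and file-system overhead.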

  5. Advanced-technology space station study: Summary of systems and pacing technologies

    NASA Technical Reports Server (NTRS)

    Butterfield, A. J.; Garn, P. A.; King, C. B.; Queijo, M. J.

    1990-01-01

The principal system features defined for the Advanced Technology Space Station are summarized, and the 21 pacing technologies identified during the course of the study are described. The descriptions of system configurations were extracted from four previous study reports. The technological areas focus on those systems particular to all large spacecraft which generate artificial gravity by rotation. The summary includes a listing of the functions, crew requirements, and electrical power demand that led to the studied configuration. The pacing technologies include the benefits of advanced materials, in-orbit assembly requirements, stationkeeping, evaluations of electrical power generation alternatives, and life support systems. The descriptions of systems show the potential for synergies and identify the beneficial interactions that can result from technological advances.

  6. Recommendations for the design and the installation of large laser scanning microscopy systems

    NASA Astrophysics Data System (ADS)

    Helm, P. Johannes

    2012-03-01

Laser Scanning Microscopy (LSM) has, since the inventions of the Confocal Scanning Laser Microscope (CLSM) and the Multi-Photon Laser Scanning Microscope (MPLSM), developed into an essential tool in contemporary life science and material science. The market provides an increasing number of turn-key, hands-off commercial LSM systems that are unproblematic to purchase, set up and integrate even into minor research groups. However, the successful definition, financing, acquisition, installation and effective use of one or more large laser scanning microscopy systems, possibly of core facility character, often requires major efforts by senior staff members of large academic or industrial units. Here, a set of recommendations is presented which are helpful during the process of establishing large systems for confocal or non-linear laser scanning microscopy as an effective operational resource in the scientific or industrial production process. Besides the description of technical difficulties and possible pitfalls, the article also illuminates some seemingly "less scientific" processes, i.e. the definition of specific laboratory demands, advertisement of the intention to purchase one or more large systems, evaluation of quotations, establishment of contracts and preparation of the local environment and laboratory infrastructure.

  7. Micro-optical-mechanical system photoacoustic spectrometer

    DOEpatents

    Kotovsky, Jack; Benett, William J.; Tooker, Angela C.; Alameda, Jennifer B.

    2013-01-01

All-optical photoacoustic spectrometer sensing systems (PASS systems) and methods include all the hardware needed to analyze the presence of a large variety of materials (solid, liquid and gas). Some of the all-optical PASS systems require only two optical fibers to communicate with the opto-electronic power and readout systems that exist outside of the material environment. Methods for improving the signal-to-noise ratio are provided and enable micro-scale systems and methods for operating such systems.

  8. Systems Engineering in NASA's R&TD Programs

    NASA Technical Reports Server (NTRS)

    Jones, Harry

    2005-01-01

    Systems engineering is largely the analysis and planning that support the design, development, and operation of systems. The most common application of systems engineering is in guiding systems development projects that use a phased process of requirements, specifications, design, and development. This paper investigates how systems engineering techniques should be applied in research and technology development programs for advanced space systems. These programs should include anticipatory engineering of future space flight systems and a project portfolio selection process, as well as systems engineering for multiple development projects.

  9. LOX/hydrocarbon auxiliary propulsion system study

    NASA Technical Reports Server (NTRS)

    Orton, G. F.; Mark, T. D.; Weber, D. D.

    1982-01-01

    Liquid oxygen/hydrocarbon propulsion systems applicable to a second generation orbiter OMS/RCS were compared, and major system/component options were evaluated. A large number of propellant combinations and system concepts were evaluated. The ground rules were defined in terms of candidate propellants, system/component design options, and design requirements. System and engine component math models were incorporated into existing computer codes for system evaluations. The detailed system evaluations and comparisons were performed to identify the recommended propellant combination and system approach.

  10. Multifunctions - liquid crystal displays

    NASA Astrophysics Data System (ADS)

    Bechteler, M.

    1980-12-01

Large area liquid crystal displays up to 400 cm² were developed, capable of displaying a large quantity of analog and digital information such as required for car dashboards, communication systems, and data processing, while fulfilling the attendant requirements on viewing tilt angle and operating temperature range. Items incorporated were: low resistance conductive layers deposited by means of a sputter machine; preshaped glasses and broken glass fibers, assuring perfect parallelism between glass plates; rubbed plastic layers for excellent electrooptical properties; and fluorescent plates for display illumination in bright sunlight as well as in dim light conditions. Prototypes are described for clock and automotive applications.

  11. E-ELT requirements management

    NASA Astrophysics Data System (ADS)

    Schneller, D.

    2014-08-01

The E-ELT has completed its design phase and is now entering construction. ESO is acting as prime contractor and usually procures subsystems, including their design, from industry. This, in turn, leads to a large number of requirements whose validity, consistency and conformity with user needs require extensive management. Therefore E-ELT Systems Engineering has chosen to follow a systematic approach, based on a reasoned requirement architecture that follows the product breakdown structure of the observatory. The challenge ahead is the controlled flow-down of science user needs into engineering requirements, requirement specifications and system design documents. This paper shows how the E-ELT project manages this. The project has adopted IBM DOORS™ as a supporting requirements management tool. This paper deals with emerging problems and sketches potential solutions. It shows trade-offs made to reach a proper balance between the effort put into this activity and potential overheads, and the benefit for the project.

  12. Material requirements for the High Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Stephens, Joseph R.; Hecht, Ralph J.; Johnson, Andrew M.

    1993-01-01

Under NASA-sponsored High Speed Research (HSR) programs, the materials and processing requirements have been identified for overcoming the environmental and economic barriers of the next generation High Speed Civil Transport (HSCT) propulsion system. The long (2 to 5 hour) supersonic cruise portion of the HSCT cycle will place additional durability requirements on all hot section engine components. Low emissions combustor designs will require high temperature ceramic matrix composite liners to meet an emission goal of less than 5 g NOx per kg of fuel burned. Large axisymmetric and two-dimensional exhaust nozzle designs are now under development to meet or exceed FAR 36 Stage III noise requirements, and will require lightweight, high temperature metallic, intermetallic, and ceramic matrix composites to reduce nozzle weight and meet structural and acoustic component performance goals. This paper describes and discusses the turbomachinery, combustor, and exhaust nozzle requirements of the High Speed Civil Transport propulsion system.

  13. Improved Controls for Fusion RF Systems. Final technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casey, Jeffrey A.

    2011-11-08

We have addressed the specific requirements for the integrated systems controlling an array of klystrons used for Lower Hybrid Current Drive (LHCD). The immediate goal for our design was to modernize the transmitter protection system (TPS) for LHCD on the Alcator C-Mod tokamak at the MIT Plasma Science and Fusion Center (MIT-PSFC). Working with the Alcator C-Mod team, we have upgraded the design of these controls to retrofit for improvements in performance and safety, as well as to facilitate the upcoming expansion from 12 to 16 klystrons. The longer-range goal of generalizing the designs so that they will be of benefit to other programs within the international fusion effort was met by designing a system which was flexible enough to address all the MIT system requirements, and modular enough to adapt to a large variety of other requirements with minimal reconfiguration.

  14. On the Path to SunShot. Emerging Issues and Challenges in Integrating Solar with the Distribution System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan; Broderick, Robert; Mather, Barry

    2016-05-01

This report analyzes distribution-integration challenges, solutions, and research needs in the context of distributed generation from PV (DGPV) deployment to date and the much higher levels of deployment expected with achievement of the U.S. Department of Energy's SunShot targets. Recent analyses have improved estimates of the DGPV hosting capacities of distribution systems. This report uses these results to statistically estimate the minimum DGPV hosting capacity for the contiguous United States using traditional inverters at approximately 170 GW without distribution system modifications. This hosting capacity roughly doubles if advanced inverters are used to manage local voltage, and additional minor, low-cost changes could further increase these levels substantially. Key to achieving these deployment levels at minimum cost is siting DGPV based on local hosting capacities, suggesting opportunities for regulatory, incentive, and interconnection innovation. Already, pre-computed hosting capacity is beginning to expedite DGPV interconnection requests and installations in select regions; however, realizing SunShot-scale deployment will require further improvements to DGPV interconnection processes, standards and codes, and compensation mechanisms so they embrace the contributions of DGPV to system-wide operations. SunShot-scale DGPV deployment will also require unprecedented coordination of the distribution and transmission systems. 
This includes harnessing DGPV's ability to relieve congestion and reduce system losses by generating closer to loads; minimizing system operating costs and reserve deployments through improved DGPV visibility; developing communication and control architectures that incorporate DGPV into system operations; providing frequency response, transient stability, and synthesized inertia with DGPV in the event of large-scale system disturbances; and potentially managing reactive power requirements due to large-scale deployment of advanced inverter functions. Finally, additional local and system-level value could be provided by integrating DGPV with energy storage and 'virtual storage,' which exploits improved management of electric vehicle charging, building energy systems, and other large loads. Together, continued innovation across this rich distribution landscape can enable the very high deployment levels envisioned by SunShot.

  15. ART-Ada: An Ada-based expert system tool

    NASA Technical Reports Server (NTRS)

    Lee, S. Daniel; Allen, Bradley P.

    1990-01-01

    The Department of Defense mandate to standardize on Ada as the language for software systems development has resulted in an increased interest in making expert systems technology readily available in Ada environments. NASA's Space Station Freedom is an example of the large Ada software development projects that will require expert systems in the 1990's. Another large scale application that can benefit from Ada based expert system tool technology is the Pilot's Associate (PA) expert system project for military combat aircraft. The Automated Reasoning Tool-Ada (ART-Ada), an Ada expert system tool, is explained. ART-Ada allows applications of a C-based expert system tool called ART-IM to be deployed in various Ada environments. ART-Ada is being used to implement several prototype expert systems for NASA's Space Station Freedom program and the U.S. Air Force.

  16. ART-Ada: An Ada-based expert system tool

    NASA Technical Reports Server (NTRS)

    Lee, S. Daniel; Allen, Bradley P.

    1991-01-01

The Department of Defense mandate to standardize on Ada as the language for software systems development has resulted in increased interest in making expert systems technology readily available in Ada environments. NASA's Space Station Freedom is an example of the large Ada software development projects that will require expert systems in the 1990's. Another large scale application that can benefit from Ada based expert system tool technology is the Pilot's Associate (PA) expert system project for military combat aircraft. The Automated Reasoning Tool-Ada (ART-Ada), an Ada-based expert system tool, is described. ART-Ada allows applications of a C-based expert system tool called ART-IM to be deployed in various Ada environments. ART-Ada is being used to implement several prototype expert systems for NASA's Space Station Freedom Program and the U.S. Air Force.

  17. Low-thrust chemical propulsion system pump technology

    NASA Technical Reports Server (NTRS)

    Sabiers, R. L.; Siebenhaar, A.

    1981-01-01

    Candidate pump and driver systems for low-thrust cargo orbit transfer vehicle engines, which deliver large space structures to geosynchronous equatorial orbit and beyond, are evaluated. The pumps operate at discharge pressures up to 68 atmospheres (1000 psi) and at flowrates suited to cryogenic engines using either LOX/methane or LOX/hydrogen propellants in thrust ranges from 445 to 8900 N (100 to 2000 lbf). Analysis of the various pumps and drivers indicates that the low specific-speed requirement will make high fluid efficiencies difficult to achieve. As such, multiple stages are required. In addition, all pumps require inducer stages. The most attractive main pumps are the multistage centrifugal pumps.

  18. Oceanic Transport

    NASA Technical Reports Server (NTRS)

    Chase, R.; Mcgoldrick, L.

    1984-01-01

    The importance of large-scale ocean movements to the moderation of global temperature is discussed, along with the observational requirements of physical oceanography. Satellite-based oceanographic observing systems are seen as central to oceanography in the 1990s.

  19. Control law synthesis and optimization software for large order aeroservoelastic systems

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.; Pototzky, A.; Noll, Thomas

    1989-01-01

    A flexible aircraft or space structure with active control is typically modeled by a large-order state-space system of equations in order to accurately represent the rigid and flexible body modes, unsteady aerodynamic forces, actuator dynamics, and gust spectra. The control law of this multi-input/multi-output (MIMO) system is expected to satisfy multiple design requirements on the dynamic loads, responses, actuator deflections, and rate limitations, as well as maintain certain stability margins, yet it should be simple enough to be implemented on an onboard digital microprocessor. A software package for performing analog or digital control law synthesis for such a system, using optimal control theory and constrained optimization techniques, is described.
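The optimal-control synthesis described above can be illustrated with a minimal sketch: a discrete-time LQR gain computed by backward Riccati iteration for a toy two-state model. The model, weights, and iteration count are illustrative assumptions, not the paper's aeroservoelastic package.

```python
import numpy as np

# Toy discrete-time LQR synthesis by backward Riccati iteration
# (illustrative 2-state model, not the paper's software package).
A = np.array([[1.0, 0.1],
              [-0.4, 0.98]])        # slightly unstable flexible mode
B = np.array([[0.0],
              [0.1]])
Q = np.diag([10.0, 1.0])            # state penalty (response requirements)
R = np.array([[1.0]])               # control penalty (actuator limits)

P = Q.copy()
for _ in range(500):                # iterate the Riccati recursion to a fixed point
    BtP = B.T @ P
    K = np.linalg.solve(R + BtP @ B, BtP @ A)
    P = A.T @ P @ (A - B @ K) + Q

rho = max(abs(np.linalg.eigvals(A - B @ K)))   # closed-loop spectral radius
print(rho < 1.0)                               # True: the gain stabilizes the model
```

The same Q-versus-R trade is how such packages balance response requirements against actuator deflection and rate limits.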

  20. Utilizing Solar Power Technologies for On-Orbit Propellant Production

    NASA Technical Reports Server (NTRS)

    Fikes, John C.; Howell, Joe T.; Henley, Mark W.

    2006-01-01

    The cost of access to space beyond low Earth orbit may be reduced if vehicles can refuel in orbit. The cost of access to low Earth orbit may also be reduced by launching oxygen and hydrogen propellants in the form of water. To achieve this reduction in the costs of access to low Earth orbit and beyond, a propellant depot is considered that electrolyzes water in orbit, then condenses and stores cryogenic oxygen and hydrogen. The power requirements for such a depot call for Solar Power Satellite technologies. A propellant depot utilizing solar power technologies is discussed in this paper. The depot will be deployed in a 400 km circular equatorial orbit. It receives tanks of water launched into a lower orbit from Earth, converts the water to liquid hydrogen and oxygen, and stores up to 500 metric tons of cryogenic propellants. This requires a power system comparable to a large Solar Power Satellite, capable of several hundred kilowatts. Power is supplied by a pair of solar arrays mounted perpendicular to the orbital plane, which rotate once per orbit to track the Sun. The majority of the power is used to run the electrolysis system. Thermal control is maintained by body-mounted radiators; these also provide some shielding against orbital debris. The propellant stored in the depot can support transportation from low Earth orbit to geostationary Earth orbit, the Moon, Lagrange points, Mars, etc. Emphasis is placed on the Water-Ice to Cryogen propellant production facility. A very high power system is required for cracking (electrolyzing) the water and for condensing and refrigerating the resulting oxygen and hydrogen. For a propellant production rate of 500 metric tons (1,100,000 pounds) per year, an average electrical power supply of hundreds of kilowatts is required. To make the most efficient use of space solar power, electrolysis is performed only during the portion of the orbit in which the depot is in sunlight, so roughly twice this power level is needed for operations in sunlight (slightly over half of the time). This power level mandates large solar arrays using advanced Space Solar Power technology. A significant amount of the power has to be dissipated as heat through large radiators. This paper briefly describes the propellant production facility and the requirements for a high-power system capability, and discusses the solar power technologies required for such an endeavor.
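The "hundreds of kilowatts" sizing above can be sanity-checked from the enthalpy of water electrolysis. This back-of-envelope sketch assumes an 85% electrolyzer efficiency and a 60% sunlit duty cycle; both are illustrative assumptions, and liquefaction overheads are ignored.

```python
# Back-of-envelope check of the depot's electrolysis power requirement.
# Assumptions (illustrative): 85% electrolyzer efficiency, liquefaction ignored.
RATE_KG_PER_YEAR = 500_000.0          # 500 metric tons of water per year
DELTA_H = 285.8e3                     # J/mol, enthalpy to split liquid water
MOLAR_MASS = 0.018                    # kg/mol for H2O
SECONDS_PER_YEAR = 365.25 * 24 * 3600

energy_per_kg = DELTA_H / MOLAR_MASS  # ~15.9 MJ per kg of water
avg_power_w = RATE_KG_PER_YEAR * energy_per_kg / SECONDS_PER_YEAR / 0.85

# Electrolyzing only in sunlight (~60% of a 400 km orbit, an assumed figure)
# roughly doubles the instantaneous draw the arrays must supply.
sunlit_power_kw = avg_power_w / 0.6 / 1e3

print(f"average electrical power ~ {avg_power_w / 1e3:.0f} kW")
print(f"sunlit instantaneous draw ~ {sunlit_power_kw:.0f} kW")
```

The result lands near 300 kW average, consistent with the abstract's array sizing.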

  1. Impact of Energy Gain and Subsystem Characteristics on Fusion Propulsion Performance Balances

    NASA Technical Reports Server (NTRS)

    Chakrabarti, Suman; Schmidt, George R.

    2000-01-01

    Rapid transportation of large payloads and human crews to destinations throughout the solar system will require propulsion systems having not only very high exhaust velocities (Isp ≥ 10^4 to 10^5 s) but also extremely low mass-power ratios (α ≤ 10^-1 kg/kW). Such low values of α are difficult to achieve with power-limited propulsion systems, but may be attainable with fusion and other high-Isp nuclear concepts that produce energy within the propellant. The magnitude of this energy gain is of fundamental importance. It must be large enough to sustain the nuclear process while still providing a high jet power relative to the massive, power-intensive subsystems associated with these types of concepts. This paper evaluates the energy gain and mass-power characteristics required to be consistent with 1-year roundtrip planetary missions ranging up to 100 AU. Central to this analysis is an equation for overall system α, which is derived from the power balance of a generalized "gain-limited" propulsion system. Results show that the gain required to achieve α ≈ 10^-1 kg/kW with foreseeable subsystem technology can vary from 50 to as high as 10,000, which is 2 to 5 orders of magnitude greater than the current state of the art. However, order-of-magnitude improvements in propulsion subsystem mass and efficiency could reduce gain requirements to 10 to 1,000, still a very challenging goal.
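The power-balance idea can be sketched with a deliberately simplified relation (not the paper's actual equation): if a driver of specific mass alpha_d supplies input power that the fuel amplifies by gain G, and a fraction eta ends up as jet power, then alpha_sys ≈ alpha_d / (eta · G), so the gain needed for a target alpha is G = alpha_d / (eta · alpha_target). All numbers below are illustrative assumptions.

```python
# Hypothetical, simplified power balance for a "gain-limited" system:
#   alpha_sys ≈ alpha_driver / (eta * G)
# => gain needed to reach a target mass-power ratio:
#   G = alpha_driver / (eta * alpha_target)

def required_gain(alpha_driver_kg_per_kw, eta, alpha_target_kg_per_kw=0.1):
    """Gain needed for a target system alpha, under the toy model above."""
    return alpha_driver_kg_per_kw / (eta * alpha_target_kg_per_kw)

# Illustrative driver masses and jet-conversion efficiencies
for alpha_d, eta in [(10.0, 0.3), (100.0, 0.2)]:
    print(alpha_d, eta, required_gain(alpha_d, eta))  # ~333 and 5000
```

Even this crude model reproduces the abstract's message: plausible subsystem numbers push the required gain into the hundreds to thousands.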

  2. Scaling the Pyramid Model across Complex Systems Providing Early Care for Preschoolers: Exploring How Models for Decision Making May Enhance Implementation Science

    ERIC Educational Resources Information Center

    Johnson, LeAnne D.

    2017-01-01

    Bringing effective practices to scale across large systems requires attending to how information and belief systems come together in decisions to adopt, implement, and sustain those practices. Statewide scaling of the Pyramid Model, a framework for positive behavior intervention and support, across different types of early childhood programs…

  3. Developing custom fire behavior fuel models from ecologically complex fuel structures for upper Atlantic Coastal Plain forests

    Treesearch

    Bernard R. Parresol; Joe H. Scott; Anne Andreu; Susan Prichard; Laurie Kurth

    2012-01-01

    Currently, geospatial fire behavior analyses are performed with an array of fire behavior modeling systems such as FARSITE, FlamMap, and the Large Fire Simulation System. These systems require standard or customized surface fire behavior fuel models as inputs, which are often assigned through remote sensing information. The ability to handle hundreds or...

  4. Space-based solar power conversion and delivery systems study. Volume 2: Engineering analysis of orbital systems

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Program plans, schedules, and costs are determined for a synchronous orbit-based power generation and relay system. Requirements for the satellite solar power station (SSPS) and the power relay satellite (PRS) are explored. Engineering analysis of large solar arrays, flight mechanics and control, transportation, assembly and maintenance, and microwave transmission are included.

  5. Development of deployable structures for large space platform systems, volume 1

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Generic deployable spacecraft configurations and deployable platform systems concepts were identified. Sizing, building block concepts, orbiter packaging, thermal analysis, cost analysis, and mass properties analysis as related to platform systems integration are considered. Technology needs are examined and the major criteria used in concept selection are delineated. Requirements for deployable habitat modules, tunnels, and OTV hangars are considered.

  6. Optimization in the systems engineering process

    NASA Technical Reports Server (NTRS)

    Lemmerman, Loren A.

    1993-01-01

    The essential elements of the design process consist of the mission definition phase that provides the system requirements, the conceptual design, the preliminary design and finally the detailed design. Mission definition is performed largely by operations analysts in conjunction with the customer. The result of their study is handed off to the systems engineers for documentation as the systems requirements. The document that provides these requirements is the basis for the further design work of the design engineers at the Lockheed-Georgia Company. The design phase actually begins with conceptual design, which is generally conducted by a small group of engineers using multidisciplinary design programs. Because of the complexity of the design problem, the analyses are relatively simple and generally dependent on parametric analyses of the configuration. The result of this phase is a baseline configuration from which preliminary design may be initiated.

  7. Thirty Meter Telescope narrow-field infrared adaptive optics system real-time controller prototyping results

    NASA Astrophysics Data System (ADS)

    Smith, Malcolm; Kerley, Dan; Chapin, Edward L.; Dunn, Jennifer; Herriot, Glen; Véran, Jean-Pierre; Boyer, Corinne; Ellerbroek, Brent; Gilles, Luc; Wang, Lianqi

    2016-07-01

    Prototyping and benchmarking were performed for the Real-Time Controller (RTC) of the Narrow Field InfraRed Adaptive Optics System (NFIRAOS). To perform wavefront correction, NFIRAOS utilizes two deformable mirrors (DM) and one tip/tilt stage (TTS). The RTC receives wavefront information from six Laser Guide Star (LGS) Shack-Hartmann WaveFront Sensors (WFS), one high-order Natural Guide Star Pyramid WaveFront Sensor (PWFS), and multiple low-order instrument detectors. The RTC uses this information to determine the commands to send to the wavefront correctors. NFIRAOS is the first-light AO system for the Thirty Meter Telescope (TMT). The prototyping was performed using dual-socket high-performance Linux servers with the real-time (PREEMPT_RT) patch and demonstrated the viability of a commercial off-the-shelf (COTS) hardware approach to large-scale AO reconstruction. In particular, a large custom matrix-vector multiplication (MVM) was benchmarked and met the latency requirements. In addition, all major inter-machine communication was verified to be adequate using 10 Gb and 40 Gb Ethernet. The results of this prototyping have enabled a CPU-based NFIRAOS RTC design to proceed with confidence that COTS hardware can be used to meet the demanding performance requirements.
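The MVM benchmark described above can be sketched in a few lines: time a reconstruction-style matrix-vector product mapping WFS slopes to DM commands. The matrix dimensions and repetition count here are placeholders, not NFIRAOS's actual sizes, and a real RTC benchmark would pin threads and report worst-case (not mean) latency.

```python
import time
import numpy as np

# Illustrative latency benchmark for an AO reconstruction matrix-vector
# multiply (dimensions are placeholders, not NFIRAOS's actual sizes).
n_slopes, n_actuators = 4000, 2000
rng = np.random.default_rng(0)
cm = rng.standard_normal((n_actuators, n_slopes), dtype=np.float32)  # control matrix
slopes = rng.standard_normal(n_slopes, dtype=np.float32)             # WFS measurements

cm @ slopes                      # warm-up: touch memory, trigger any lazy init
reps = 50
t0 = time.perf_counter()
for _ in range(reps):
    commands = cm @ slopes       # DM command vector from WFS slopes
dt = (time.perf_counter() - t0) / reps
print(f"MVM mean latency: {dt * 1e3:.3f} ms per iteration")
```

Comparing this mean against the AO loop period is the essence of the viability test the paper describes.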

  8. Nurse's Aid And Housekeeping Mobile Robot For Use In The Nursing Home Workplace

    NASA Astrophysics Data System (ADS)

    Sines, John A.

    1987-01-01

    The large nursing home market has several natural characteristics that make it a good application area for robotics. The environment is already robot accessible, and the work functions require large quantities of low-skilled services on a daily basis. In the near future, a commercial opportunity for the practical application of robots is emerging in the delivery of housekeeping services in the nursing home environment. The robot systems will assist in food tray delivery, material handling, and security, and will perform activities such as changing a resident's table-side drinking water twice a day and taking out the trash. The housekeeping work functions will generate cost savings of approximately $22,000 per year, at a cost of $6,000 per year. Technical system challenges center around the artificial intelligence required for the robot to map its own location within the facility, to find objects, and to avoid obstacles, and the development of an energy-efficient mechanical lifting system. The long engineering and licensing cycles (7 to 12 years) required to bring this type of product to market make it difficult to raise capital for such a venture.

  9. Durham extremely large telescope adaptive optics simulation platform.

    PubMed

    Basden, Alastair; Butterley, Timothy; Myers, Richard; Wilson, Richard

    2007-03-01

    Adaptive optics systems are essential on all large telescopes for which image quality is important. These are complex systems with many design parameters requiring optimization before good performance can be achieved. The simulation of adaptive optics systems is therefore necessary to categorize the expected performance. We describe an adaptive optics simulation platform, developed at Durham University, which can be used to simulate adaptive optics systems on the largest proposed future extremely large telescopes as well as on current systems. This platform is modular, object oriented, and has the benefit of hardware application acceleration that can be used to improve the simulation performance, essential for ensuring that the run time of a given simulation is acceptable. The simulation platform described here can be highly parallelized using parallelization techniques suited for adaptive optics simulation, while still offering the user complete control while the simulation is running. The results from the simulation of a ground layer adaptive optics system are provided as an example to demonstrate the flexibility of this simulation platform.

  10. Development and Application of a Structural Health Monitoring System Based on Wireless Smart Aggregates

    PubMed Central

    Ma, Haoyan; Li, Peng; Song, Gangbing; Wu, Jianxin

    2017-01-01

    Structural health monitoring (SHM) systems can improve the safety and reliability of structures, reduce maintenance costs, and extend service life. Research on concrete SHM using piezoelectric-based smart aggregates has achieved notable results. However, the newly developed techniques have not been widely applied in practical engineering, largely due to the wiring problems associated with large-scale structural health monitoring: cumbersome wiring requires substantial material and labor and, more importantly, imposes a heavy maintenance burden. Targeting practical large-scale concrete crack detection (CCD), a smart-aggregate-based wireless sensor network system is proposed. The developed CCD system uses Zigbee 802.15.4 protocols and is able to perform dynamic stress monitoring, structural impact capturing, and internal crack detection. The system has been experimentally validated, and the experimental results demonstrated the effectiveness of the proposed system. This work provides important support for practical CCD applications using wireless smart aggregates. PMID:28714927

  11. Development and Application of a Structural Health Monitoring System Based on Wireless Smart Aggregates.

    PubMed

    Yan, Shi; Ma, Haoyan; Li, Peng; Song, Gangbing; Wu, Jianxin

    2017-07-17

    Structural health monitoring (SHM) systems can improve the safety and reliability of structures, reduce maintenance costs, and extend service life. Research on concrete SHM using piezoelectric-based smart aggregates has achieved notable results. However, the newly developed techniques have not been widely applied in practical engineering, largely due to the wiring problems associated with large-scale structural health monitoring: cumbersome wiring requires substantial material and labor and, more importantly, imposes a heavy maintenance burden. Targeting practical large-scale concrete crack detection (CCD), a smart-aggregate-based wireless sensor network system is proposed. The developed CCD system uses Zigbee 802.15.4 protocols and is able to perform dynamic stress monitoring, structural impact capturing, and internal crack detection. The system has been experimentally validated, and the experimental results demonstrated the effectiveness of the proposed system. This work provides important support for practical CCD applications using wireless smart aggregates.

  12. Experimental violation of Bell inequalities for multi-dimensional systems

    PubMed Central

    Lo, Hsin-Pin; Li, Che-Ming; Yabushita, Atsushi; Chen, Yueh-Nan; Luo, Chih-Wei; Kobayashi, Takayoshi

    2016-01-01

    Quantum correlations between spatially separated parts of a d-dimensional bipartite system (d ≥ 2) have no classical analog. Such correlations, also called entanglement, are not only conceptually important, but also have a profound impact on information science. In theory, the violation of Bell inequalities based on local realistic theories for d-dimensional systems provides evidence of quantum nonlocality. Experimental verification is required to confirm whether a quantum system of extremely large dimension can possess this feature; however, it had never been performed for large dimensions. Here, we report that Bell inequalities are experimentally violated for bipartite quantum systems of dimensionality d = 16 with the usual ensembles of polarization-entangled photon pairs. We also estimate that our entanglement source violates Bell inequalities for extremely high dimensionality of d > 4000. The designed scenario offers a possible new method to investigate the entanglement of multipartite systems of large dimensionality and their application in quantum information processing. PMID:26917246

  13. Low frequency radio synthesis imaging of the galactic center region

    NASA Astrophysics Data System (ADS)

    Nord, Michael Evans

    2005-11-01

    The Very Large Array radio interferometer has been equipped with new receivers to allow observations at 330 and 74 MHz, frequencies much lower than were previously possible with this instrument. Though the VLA dishes are not optimal for working at these frequencies, the system is successful and regular observations are now taken at these frequencies. However, new data analysis techniques are required to work at these frequencies. The technique of self-calibration, used to remove small atmospheric effects at higher frequencies, has been adapted to compensate for ionospheric turbulence in much the same way that adaptive optics is used in the optical regime. Faceted imaging techniques are required to compensate for the noncoplanar image distortion that affects the system due to the wide fields of view at these frequencies (~2.3° at 330 MHz and ~11° at 74 MHz). Furthermore, radio frequency interference is a much larger problem at these frequencies than at higher frequencies, and novel approaches to its mitigation are required. These new techniques and the new system allow imaging of the radio sky at sensitivities and resolutions orders of magnitude better than were possible with the low frequency systems of decades past. In this work I discuss the advancements in low frequency data techniques required to make high-resolution, high-sensitivity, large field-of-view measurements with the new Very Large Array low frequency system, and then detail the results of turning this new system and these techniques on the center of our Milky Way Galaxy. At 330 MHz I image the Galactic center region with roughly 10 arcseconds resolution and 1.6 mJy beam^-1 sensitivity. The results include new Galactic center nonthermal filaments, new pulsar candidates, and the lowest-frequency detection to date of the radio source associated with our Galaxy's central massive black hole. At 74 MHz I image a region of the sky roughly 40° × 6° with ~10 arcminutes resolution. I use the high opacity of H II regions at 74 MHz to extract three-dimensional data on the distribution of Galactic cosmic ray emissivity, a measurement possible only at low radio frequencies.

  14. Large space telescope, phase A. Volume 3: Optical telescope assembly

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The development and characteristics of the optical telescope assembly for the Large Space Telescope are discussed. The systems considerations are based on mission-related parameters and optical equipment requirements. Information is included on: (1) structural design and analysis, (2) thermal design, (3) stabilization and control, (4) alignment, focus, and figure control, (5) electronic subsystem, and (6) scientific instrument design.

  15. Equation solvers for distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    A large number of scientific and engineering problems require the rapid solution of large systems of simultaneous equations. The performance of parallel computers in this area now dwarfs that of traditional vector computers by nearly an order of magnitude. This talk describes the major issues involved in parallel equation solvers, with particular emphasis on the Intel Paragon, IBM SP-1, and SP-2 processors.
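A minimal sketch of the kind of kernel such solvers run: conjugate gradient for a symmetric positive-definite system. This serial NumPy version is illustrative only; the point is that its dominant operations (the matrix-vector product and the dot products) are exactly what a distributed-memory solver partitions across processors.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A.

    The matvec and dot products below are the operations a
    distributed-memory solver splits across processors.
    """
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    p = r.copy()                  # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)     # optimal step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p # conjugate direction update
        rs = rs_new
    return x

# Small SPD test system (diagonal shift guarantees positive-definiteness)
rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200 * np.eye(200)
b = rng.standard_normal(200)
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b, atol=1e-6))   # True
```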

  16. Adaptive optics using a MEMS deformable mirror for a segmented mirror telescope

    NASA Astrophysics Data System (ADS)

    Miyamura, Norihide

    2017-09-01

    For small satellite remote sensing missions, a large-aperture telescope of more than 400 mm is required to realize observations with less than 1 m GSD. However, it is difficult or expensive to realize such a large-aperture telescope using a monolithic primary mirror with high surface accuracy, so segmented mirror telescopes should be studied, especially for small satellite missions. Generally, not only high accuracy of the optical surfaces but also high accuracy of the optical alignment is required for large-aperture telescopes; for segmented mirror telescopes, the alignment is more difficult and more important. In conventional systems, the optical alignment is adjusted before launch to achieve the desired imaging performance. However, it is difficult to adjust the alignment of large optics to high accuracy, and the thermal environment in orbit and vibration in the launch vehicle cause misalignments of the optics. We are developing an adaptive optics system using a MEMS deformable mirror for an Earth-observing remote sensing sensor. An image-based adaptive optics system compensates for the misalignments and wavefront aberrations of optical elements using the deformable mirror, by feedback of observed images. We propose a control algorithm for the deformable mirror of a segmented mirror telescope that uses the observed image. Numerical simulation results and experimental results show that the misalignment and wavefront aberration of the segmented mirror telescope are corrected and image quality is improved.
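Image-based (metric-driven) deformable-mirror control is often sketched with stochastic parallel gradient descent (SPGD), a common model-free approach; this toy version is an assumption for illustration, not the author's algorithm, and the quadratic metric stands in for a real image-sharpness measure.

```python
import numpy as np

# Toy sketch of image-metric-driven deformable-mirror control using
# stochastic parallel gradient descent (SPGD).  The quadratic metric is
# a stand-in; a real loop would evaluate sharpness of the observed image.
rng = np.random.default_rng(0)
n_act = 32
aberration = rng.standard_normal(n_act)          # unknown wavefront error

def metric(cmd):
    # Higher is better; maximized when the DM command cancels the aberration.
    return -np.sum((cmd + aberration) ** 2)

cmd = np.zeros(n_act)
gain, perturb = 0.3, 0.05
for _ in range(2000):
    delta = perturb * rng.choice([-1.0, 1.0], size=n_act)  # random dither
    dJ = metric(cmd + delta) - metric(cmd - delta)          # two-sided probe
    cmd += gain * dJ * delta                                # SPGD update

residual = np.linalg.norm(cmd + aberration)
print(f"residual / initial error = {residual / np.linalg.norm(aberration):.4f}")
```

The loop needs only metric evaluations, which is why this family of methods suits systems where no direct wavefront sensor model is available.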

  17. Aho-Corasick String Matching on Shared and Distributed Memory Parallel Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Villa, Oreste; Chavarría-Miranda, Daniel

    String matching is at the core of many critical applications, including network intrusion detection systems, search engines, virus scanners, spam filters, DNA and protein sequencing, and data mining. For all of these applications string matching requires a combination of (sometimes all of) the following characteristics: high and/or predictable performance, support for large data sets, and flexibility of integration and customization. Many software-based implementations targeting conventional cache-based microprocessors fail to achieve high and predictable performance, while Field-Programmable Gate Array (FPGA) implementations and dedicated hardware solutions fail to support large data sets (dictionary sizes) and are difficult to integrate and customize. The advent of multicore, multithreaded, and GPU-based systems is opening the possibility for software-based solutions to reach very high performance at a sustained rate. This paper compares several software-based implementations of the Aho-Corasick string searching algorithm for high performance systems. We discuss the implementation of the algorithm on several types of shared-memory high-performance architectures (Niagara 2, large x86 SMPs, and Cray XMT), distributed memory with homogeneous processing elements (InfiniBand cluster of x86 multicores), and heterogeneous processing elements (InfiniBand cluster of x86 multicores with NVIDIA Tesla C10 GPUs). We describe in detail how each solution achieves the objectives of supporting large dictionaries, sustaining high performance, and enabling customization and flexibility using various data sets.
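For reference, the automaton these implementations parallelize can be built in a few dozen lines. This is a minimal dictionary-based Aho-Corasick (trie plus failure links), not any of the paper's optimized versions:

```python
from collections import deque

def build_aho_corasick(patterns):
    """Build goto/fail/output tables for Aho-Corasick multi-pattern matching."""
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:                      # build the trie of patterns
        state = 0
        for ch in pat:
            if ch not in goto[state]:
                goto.append({}); fail.append(0); out.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        out[state].add(pat)
    queue = deque(goto[0].values())           # BFS to compute failure links
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            cand = goto[f].get(ch, 0)
            fail[t] = cand if cand != t else 0
            out[t] |= out[fail[t]]            # inherit matches ending here
    return goto, fail, out

def search(text, goto, fail, out):
    """Yield (end_index, pattern) for every match of every pattern in text."""
    state = 0
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        for pat in out[state]:
            yield i, pat

tables = build_aho_corasick(["he", "she", "his", "hers"])
print(sorted(p for _, p in search("ushers", *tables)))  # ['he', 'hers', 'she']
```

Each input character advances the automaton exactly once, which is what makes the algorithm's per-byte work predictable and amenable to the partitioning strategies the paper compares.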

  18. Using model based systems engineering for the development of the Large Synoptic Survey Telescope's operational plan

    NASA Astrophysics Data System (ADS)

    Selvy, Brian M.; Claver, Charles; Willman, Beth; Petravick, Don; Johnson, Margaret; Reil, Kevin; Marshall, Stuart; Thomas, Sandrine; Lotz, Paul; Schumacher, German; Lim, Kian-Tat; Jenness, Tim; Jacoby, Suzanne; Emmons, Ben; Axelrod, Tim

    2016-08-01

    We provide an overview of the Model Based Systems Engineering (MBSE) language, tool, and methodology being used in our development of the Operational Plan for Large Synoptic Survey Telescope (LSST) operations. LSST's Systems Engineering (SE) team is using a model-based approach to operational plan development to: 1) capture the top-down stakeholders' needs and functional allocations defining the scope, required tasks, and personnel needed for operations, and 2) capture the bottom-up operations and maintenance activities required to conduct the LSST survey across its distributed operations sites for the full ten-year survey duration. To accomplish these complementary goals and ensure that they produce self-consistent results, we have developed a holistic approach using the Sparx Enterprise Architect modeling tool and the Systems Modeling Language (SysML). This approach utilizes SysML Use Cases, Actors, associated relationships, and Activity Diagrams to document and refine all of the major operations and maintenance activities that will be required to successfully operate the observatory and meet stakeholder expectations. We have developed several customized extensions of the SysML language, including a custom stereotyped Use Case element with unique tagged values, as well as unique association connectors and Actor stereotypes. We demonstrate that this customized MBSE methodology enables us to define: 1) the roles each human Actor must take on to successfully carry out the activities associated with the Use Cases; 2) the skills each Actor must possess; 3) the functional allocation of all required stakeholder activities and Use Cases to the organizational entities tasked with carrying them out; and 4) the organizational structure required to successfully execute the operational survey. Our approach allows for continual refinement, utilizing the systems engineering spiral method to expose finer levels of detail as necessary. For example, the bottom-up, Use Case-driven approach will be deployed in the future to develop the detailed work procedures required to successfully execute each operational activity.

  19. Microgravity fluid management requirements of advanced solar dynamic power systems

    NASA Technical Reports Server (NTRS)

    Migra, Robert P.

    1987-01-01

    The advanced solar dynamic system (ASDS) program is aimed at developing the technology for highly efficient, lightweight space power systems. The approach is to evaluate Stirling, Brayton, and liquid metal Rankine power conversion systems (PCS) over the temperature range of 1025 to 1400 K, identify the critical technologies, and develop these technologies. Microgravity fluid management technology is required in several areas of this program, namely thermal energy storage (TES), heat pipe applications, and liquid metal, two-phase-flow Rankine systems. Utilization of the heat of fusion of phase change materials offers the potential for smaller, lighter TES systems, although the candidate TES materials exhibit large volume changes upon phase change. The heat pipe is an energy-dense heat transfer device. A high temperature application may transfer heat from the solar receiver to the PCS working fluid and/or TES; a low temperature application may transfer waste heat from the PCS to the radiator. The liquid metal Rankine PCS requires management of the boiling/condensing process typical of two-phase flow systems.

  20. Actual issues of introduction of continuous emission monitoring systems for control of negative impact of TPP to atmospheric air

    NASA Astrophysics Data System (ADS)

    Kondrateva, O. E.; Roslyakov, P. V.; Borovkova, A. M.; Loktionov, O. A.

    2017-11-01

    Over the past 3 years there have been significant changes in Russian environmental legislation related to the transition to technological regulation based on the principles of best available technologies (BAT). These changes also imply control and accounting of the harmful impact of industrial enterprises on the environment. Therefore, a mandatory requirement to install automatic continuous emission monitoring systems (ACEMS) has been established for all large TPPs. For a successful practical solution to the problem of introducing such systems across the whole country, there is an urgent need to develop a governing regulatory document for the design and operation of systems for continuous monitoring of TPP emissions into the air, allowing these systems to be unified, within reasonable limits, for their work with the state data fund of environmental monitoring, and making their implementation at operating industrial facilities easier. Based on the large amount of research in the field of ACEMS development conducted at the National Research University "MPEI", a draft guidance document was developed, which includes the following regulatory provisions: the goals and objectives of ACEMS; the stages of their introduction; rules for carrying out preliminary inspection of energy facilities; requirements for developing technical specifications; general requirements for the operation of ACEMS; requirements for the structure and elements of ACEMS; recommendations on the selection of measuring-equipment installation sites; rules for execution, commissioning, and acceptance testing; the continuous measurement method; and the method for determining current gross and specific emissions. The draft guidance document formed the basis of the Preliminary National Standard PNST 187-2017, "Automatic systems for continuous control and metering of contaminants emissions from thermal electric power stations into the atmospheric air. General requirements".

  1. Changes, disruption and innovation: An investigation of the introduction of new health information technology in a microbiology laboratory.

    PubMed

    Toouli, George; Georgiou, Andrew; Westbrook, Johanna

    2012-01-01

    It is expected that health information technology (HIT) will deliver a safer, more efficient, and more effective health care system. The aim of this study was to undertake a qualitative and video-ethnographic examination of the impact of information technologies on work processes in the reception area of a microbiology department, to ascertain what changed, how it changed, and the impact of the change. The setting for this study was the microbiology laboratory of a large tertiary hospital in Sydney. The study consisted of qualitative (interview and focus group) data and observation sessions for the period August 2005 to October 2006, along with video footage shot in three sessions covering the original system and the two stages of the Cerner implementation. Data analysis was assisted by NVivo software, and process maps were produced from the video footage. Two laboratory information systems were observed in the video footage, with computerized provider order entry introduced four months later. Process maps highlighted the large number of pre-data-entry steps with the original system, whereas the newer system incorporated many of these steps into the data entry stage. However, any time saved with the new system was offset by the requirement to complete data entry of some patient information not previously required. Other changes noted included the change of responsibilities for the reception staff and the physical changes required to accommodate the increased activity around the data entry area. Implementing a new HIT is always an exciting time for any environment, but ensuring that the implementation goes smoothly and with minimal trouble requires the administrator and their team to plan well in advance for staff training, physical layout, and possible staff resource reallocation.

  2. Purple L1 Milestone Review Panel GPFS Functionality and Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loewe, W E

    2006-12-01

    The GPFS deliverable for the Purple system requires the functionality and performance necessary for ASC I/O needs. The functionality includes POSIX and MPIIO compatibility, and multi-TB file capability across the entire machine. The bandwidth performance required is 122.15 GB/s, as necessary for productive and defensive I/O requirements, and the metadata performance requirement is 5,000 file stats per second. To determine success for this deliverable, several tools are employed. For functionality testing of POSIX, 10 TB files, and high-node-count capability, the parallel file system bandwidth performance test IOR is used. IOR is an MPI-coordinated application that can write and then read to a single shared file or to an individual file per process and check the data integrity of the file(s). The MPIIO functionality is tested with the MPIIO test suite from the MPICH library. Bandwidth performance is tested using IOR for the required 122.15 GB/s sustained write. All IOR tests are performed with data checking enabled. Metadata performance is tested after "aging" the file system with 80% data block usage and 20% inode usage. The fdtree metadata test is expected to create/remove a large directory/file structure in under 20 minutes, akin to interactive metadata usage. Multiple (10) instances of "ls -lR", each performing over 100K stats, are run concurrently in different large directories to demonstrate 5,000 stats/sec.
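
    The pass/fail arithmetic behind the two milestone criteria above can be sketched in a few lines. This is an illustrative acceptance check, not the actual milestone harness; the write size, elapsed times, and the 1 GB = 1e9 bytes convention are assumptions.

```python
REQUIRED_BW_GBPS = 122.15      # sustained write requirement (GB/s)
REQUIRED_STATS_PER_SEC = 5000  # metadata requirement

def aggregate_bandwidth(total_bytes, elapsed_s):
    """Aggregate file-system bandwidth in GB/s (assuming 1 GB = 1e9 bytes)."""
    return total_bytes / 1e9 / elapsed_s

def stat_rate(instances, stats_per_instance, elapsed_s):
    """Combined stat rate of concurrent 'ls -lR' instances."""
    return instances * stats_per_instance / elapsed_s

# Hypothetical example: a 10 TB shared-file write finishing in 75 s passes.
bw = aggregate_bandwidth(10e12, 75.0)
print(bw >= REQUIRED_BW_GBPS)                 # True (about 133 GB/s)

# Hypothetical example: 10 instances x 100K stats completing in 180 s passes.
rate = stat_rate(10, 100_000, 180.0)
print(rate >= REQUIRED_STATS_PER_SEC)         # True (about 5,555 stats/s)
```
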

  3. Design of a practical model-observer-based image quality assessment method for x-ray computed tomography imaging systems

    PubMed Central

    Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.

    2016-01-01

    The use of a channelization mechanism on model observers not only makes mimicking human visual behavior possible, but also reduces the amount of image data needed to estimate the model observer parameters. The channelized Hotelling observer (CHO) and channelized scanning linear observer (CSLO) have recently been used to assess CT image quality for detection tasks and combined detection/estimation tasks, respectively. Although the use of channels substantially reduces the amount of data required to compute image quality, the number of scans required for CT imaging is still not practical for routine use. It is our desire to further reduce the number of scans required to make CHO or CSLO an image quality tool for routine and frequent system validations and evaluations. This work explores different data-reduction schemes and designs an approach that requires only a few CT scans. Three different kinds of approaches are included in this study: a conventional CHO/CSLO technique with a large sample size, a conventional CHO/CSLO technique with fewer samples, and an approach that we will show requires fewer samples to mimic conventional performance with a large sample size. The mean value and standard deviation of areas under ROC/EROC curve were estimated using the well-validated shuffle approach. The results indicate that an 80% data reduction can be achieved without loss of accuracy. This substantial data reduction is a step toward a practical tool for routine-task-based QA/QC CT system assessment. PMID:27493982
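
    The core CHO computation the record refers to is a Hotelling template in channel space: the inverse intra-class scatter matrix applied to the class-mean difference. A minimal sketch follows, with synthetic channel outputs standing in for the channelized CT data (the channel count, sample size, and signal are placeholders).

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_train = 10, 200

# Synthetic channel outputs: the two class means differ by a small signal.
signal = 0.5 * rng.standard_normal(n_ch)
v_absent = rng.standard_normal((n_train, n_ch))
v_present = rng.standard_normal((n_train, n_ch)) + signal

# Hotelling template: w = S^{-1} (mean_present - mean_absent),
# with S the average intra-class covariance matrix.
S = 0.5 * (np.cov(v_absent.T) + np.cov(v_present.T))
w = np.linalg.solve(S, v_present.mean(0) - v_absent.mean(0))

# Scalar test statistics; larger separation means better detectability.
t_a, t_p = v_absent @ w, v_present @ w
print(t_p.mean() > t_a.mean())  # True for a detectable signal
```

    The data-reduction question studied in the paper is then how few rows of `v_absent`/`v_present` suffice to estimate `S` and `w` stably.
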

  4. Repository Planning, Design, and Engineering: Part II-Equipment and Costing.

    PubMed

    Baird, Phillip M; Gunter, Elaine W

    2016-08-01

    Part II of this article discusses and provides guidance on the equipment and systems necessary to operate a repository. The various types of storage equipment and monitoring and support systems are presented in detail. While the material focuses on the large repository, the requirements for a small-scale startup are also presented. Cost estimates and a cost model for establishing a repository are presented. The cost model presents an expected range of acquisition costs for the large capital items in developing a repository. A range of 5,000-7,000 ft² constructed has been assumed, with 50 frozen storage units, to reflect a successful operation with growth potential. No design or engineering costs, permit or regulatory costs, or smaller items such as the computers, software, furniture, phones, and barcode readers required for operations have been included.

  5. Technical needs and research opportunities provided by projected aeronautical and space systems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1992-01-01

    The overall goal of the present task is to identify the enabling and supporting technologies for projected aeronautical and space systems. A detailed examination was made of the technical needs in the structures, dynamics and materials areas required for the realization of these systems. Also, the level of integration required with other disciplines was identified. The aeronautical systems considered cover the broad spectrum of rotorcraft; subsonic, supersonic and hypersonic aircraft; extremely high-altitude aircraft; and transatmospheric vehicles. The space systems considered include space transportation systems; spacecraft for near-Earth observation; spacecraft for planetary and solar exploration; and large space systems. A monograph is being compiled which summarizes the results of this study. The different chapters of the monograph are being written by leading experts from governmental laboratories, industry and universities.

  6. Comparative Study of Neural Network Frameworks for the Next Generation of Adaptive Optics Systems.

    PubMed

    González-Gutiérrez, Carlos; Santos, Jesús Daniel; Martínez-Zarzuela, Mario; Basden, Alistair G; Osborn, James; Díaz-Pernas, Francisco Javier; De Cos Juez, Francisco Javier

    2017-06-02

    Many of the next generation of adaptive optics systems on large and extremely large telescopes require tomographic techniques in order to correct for atmospheric turbulence over a large field of view. Multi-object adaptive optics is one such technique. In this paper, different implementations of a tomographic reconstructor based on a machine learning architecture named "CARMEN" are presented. Basic concepts of adaptive optics are introduced first, with a short explanation of three different control systems used on real telescopes and the sensors utilised. The operation of the reconstructor, along with the three neural network frameworks used, and the developed CUDA code are detailed. Changes to the size of the reconstructor influence the training and execution time of the neural network. The native CUDA code turns out to be the best choice for all the systems, although some of the other frameworks offer good performance under certain circumstances.
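
    The tomographic reconstructor described above is, at its core, a learned mapping from off-axis wavefront-sensor measurements to an on-axis estimate. The sketch below is NOT the actual CARMEN architecture; it is a generic single-hidden-layer forward pass with placeholder sizes (e.g. 4 guide stars times 72 slopes), meant only to illustrate the input/output shape of such a reconstructor.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 4 * 72, 120, 72   # assumed: 4 guide stars x 72 slopes

# Randomly initialized weights stand in for a trained network.
W1 = rng.standard_normal((n_hidden, n_in)) * 0.05
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_out, n_hidden)) * 0.05
b2 = np.zeros(n_out)

def reconstruct(slopes):
    """Forward pass: off-axis slope measurements -> on-axis slope estimate."""
    h = np.tanh(W1 @ slopes + b1)
    return W2 @ h + b2

est = reconstruct(rng.standard_normal(n_in))
print(est.shape)  # (72,)
```

    Execution time of this forward pass is what the paper benchmarks across frameworks and its native CUDA implementation.
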

  7. Nonlinear compensation techniques for magnetic suspension systems. Ph.D. Thesis - MIT

    NASA Technical Reports Server (NTRS)

    Trumper, David L.

    1991-01-01

    In aerospace applications, magnetic suspension systems may be required to operate over large variations in air-gap. Thus the nonlinearities inherent in most types of suspensions have a significant effect. Specifically, large variations in operating point may make it difficult to design a linear controller which gives satisfactory stability and performance over a large range of operating points. One way to address this problem is through the use of nonlinear compensation techniques such as feedback linearization. Nonlinear compensators have received limited attention in the magnetic suspension literature. In recent years, progress has been made in the theory of nonlinear control systems, and in the sub-area of feedback linearization. The idea is demonstrated of feedback linearization using a second order suspension system. In the context of the second order suspension, sampling rate issues in the implementation of feedback linearization are examined through simulation.
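
    The idea of feedback linearization on a second-order suspension can be made concrete with the common single-axis model m·z'' = m·g − c·(i/z)², where z is the air gap and i the coil current: solving for the current that produces a desired acceleration cancels the nonlinearity exactly, leaving a double integrator for a linear PD law. All parameter values below are illustrative, not from the thesis.

```python
import math

m, g, c = 1.0, 9.81, 2.0      # mass [kg], gravity, magnet constant (assumed)
kp, kd = 400.0, 40.0          # PD gains for the linearized double integrator
z_ref = 0.01                  # desired air gap [m]

def control(z, zdot):
    """Cancel the magnet nonlinearity, then apply a PD law."""
    a_des = -kp * (z - z_ref) - kd * zdot      # desired gap acceleration
    a_des = min(a_des, g - 1e-6)               # the magnet can only pull
    return z * math.sqrt(m * (g - a_des) / c)  # current realizing a_des

# Forward-Euler simulation from a perturbed gap.
z, zdot, dt = 0.015, 0.0, 1e-4
for _ in range(20000):
    i = control(z, zdot)
    zdot += (g - c * (i / z) ** 2 / m) * dt
    z += zdot * dt
print(abs(z - z_ref) < 1e-4)  # True: the gap converges to z_ref
```

    The sampling-rate issue the thesis examines shows up here directly: the same controller with a much larger `dt` no longer cancels the nonlinearity between samples.
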

  8. Comparative Study of Neural Network Frameworks for the Next Generation of Adaptive Optics Systems

    PubMed Central

    González-Gutiérrez, Carlos; Santos, Jesús Daniel; Martínez-Zarzuela, Mario; Basden, Alistair G.; Osborn, James; Díaz-Pernas, Francisco Javier; De Cos Juez, Francisco Javier

    2017-01-01

    Many of the next generation of adaptive optics systems on large and extremely large telescopes require tomographic techniques in order to correct for atmospheric turbulence over a large field of view. Multi-object adaptive optics is one such technique. In this paper, different implementations of a tomographic reconstructor based on a machine learning architecture named “CARMEN” are presented. Basic concepts of adaptive optics are introduced first, with a short explanation of three different control systems used on real telescopes and the sensors utilised. The operation of the reconstructor, along with the three neural network frameworks used, and the developed CUDA code are detailed. Changes to the size of the reconstructor influence the training and execution time of the neural network. The native CUDA code turns out to be the best choice for all the systems, although some of the other frameworks offer good performance under certain circumstances. PMID:28574426

  9. A simple and cheap system to speed up and to control the tumescent technique procedure: the Tedde's system.

    PubMed

    Rubino, C; Marongiu, F; Manzo, M J; Tedde, G; Madonia, M; Campus, G V; Farace, F

    2014-06-01

    We have devised a low-cost system to quickly infiltrate tumescent solution: we call it the "Tedde's system". This low-cost system improves the quality and quantity of the infiltration, because the whole procedure is controlled by the operators, while also reducing the time of the infiltration and consequently of the whole surgical procedure. Moreover, this system can be applied to other surgical procedures that require large infiltration volumes.

  10. Rapid Geometry Creation for Computer-Aided Engineering Parametric Analyses: A Case Study Using ComGeom2 for Launch Abort System Design

    NASA Technical Reports Server (NTRS)

    Hawke, Veronica; Gage, Peter; Manning, Ted

    2007-01-01

    ComGeom2, a tool developed to generate Common Geometry representation for multidisciplinary analysis, has been used to create a large set of geometries for use in a design study requiring analysis by two computational codes. This paper describes the process used to generate the large number of configurations and suggests ways to further automate the process and make it more efficient for future studies. The design geometry for this study is the launch abort system of the NASA Crew Launch Vehicle.

  11. Research on computer-aided design of modern marine power systems

    NASA Astrophysics Data System (ADS)

    Ding, Dongdong; Zeng, Fanming; Chen, Guojun

    2004-03-01

    To make the MPS (Marine Power System) design process more economical and easier, a new CAD scheme is proposed which takes advantage of VR (Virtual Reality) and AI (Artificial Intelligence) technologies. This CAD system can shorten the design period and greatly reduce the demands on designers' experience. Some key issues, such as the selection of hardware and software for such a system, are also discussed.

  12. Medical education and cognitive continuum theory: an alternative perspective on medical problem solving and clinical reasoning.

    PubMed

    Custers, Eugène J F M

    2013-08-01

    Recently, human reasoning, problem solving, and decision making have been viewed as products of two separate systems: "System 1," the unconscious, intuitive, or nonanalytic system, and "System 2," the conscious, analytic, or reflective system. This view has penetrated the medical education literature, yet the idea of two independent dichotomous cognitive systems is not entirely without problems. This article outlines the difficulties of this "two-system view" and presents an alternative, developed by K.R. Hammond and colleagues, called cognitive continuum theory (CCT). CCT rests on three key assumptions. First, human reasoning, problem solving, and decision making can be arranged on a cognitive continuum, with pure intuition at one end, pure analysis at the other, and a large middle ground called "quasirationality." Second, the nature and requirements of the cognitive task, as perceived by the person performing the task, determine to a large extent whether a task will be approached more intuitively or more analytically. Third, for optimal task performance, this approach needs to match the cognitive properties and requirements of the task. Finally, the author makes a case that CCT is better able than a two-system view to describe medical problem solving and clinical reasoning and that it provides clear clues for how to organize training in clinical reasoning.

  13. Water extraction on Mars for an expanding human colony

    NASA Astrophysics Data System (ADS)

    Ralphs, M.; Franz, B.; Baker, T.; Howe, S.

    2015-11-01

    In-situ water extraction is necessary for an extended human presence on Mars. This study looks at the water requirements of an expanding human colony on Mars and the general systems needed to supply that water from the Martian atmosphere and regolith. The proposed combination of systems includes a system similar to Honeybee Robotics' Mobile In-Situ Water Extractor (MISWE) that uses convection, a system similar to MISWE but that directs microwave energy down a borehole, a greenhouse or hothouse type system, and a system similar to the Mars Atmospheric Resource Recovery System (MARRS). It is demonstrated that a large water extraction system that can take advantage of large deposits of water ice at site-specific locations is necessary to keep up with the demands of a growing colony.

  14. A study of the viability of exploiting memory content similarity to improve resilience to memory errors

    DOE PAGES

    Levy, Scott; Ferreira, Kurt B.; Bridges, Patrick G.; ...

    2014-12-09

    Building the next generation of extreme-scale distributed systems will require overcoming several challenges related to system resilience. As the number of processors in these systems grows, the failure rate increases proportionally. One of the most common sources of failure in large-scale systems is memory. In this paper, we propose a novel runtime for transparently exploiting memory content similarity to improve system resilience by reducing the rate at which memory errors lead to node failure. We evaluate the viability of this approach by examining memory snapshots collected from eight high-performance computing (HPC) applications and two important HPC operating systems. Based on the characteristics of the similarity uncovered, we conclude that our proposed approach shows promise for addressing system resilience in large-scale systems.
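
    The measurement underlying this approach, how much of a memory image consists of identical pages, can be sketched as a page-level hash scan. The 4 KiB page size is the usual convention, and the memory image below is a synthetic placeholder, not one of the paper's HPC snapshots.

```python
import hashlib

PAGE = 4096  # assumed page size in bytes

def duplicate_fraction(memory: bytes) -> float:
    """Fraction of pages whose content already appeared in an earlier page."""
    seen, dups, total = set(), 0, 0
    for off in range(0, len(memory), PAGE):
        digest = hashlib.sha256(memory[off:off + PAGE]).digest()
        if digest in seen:
            dups += 1
        else:
            seen.add(digest)
        total += 1
    return dups / total if total else 0.0

# Synthetic image: six zero-filled pages plus two distinct pages.
image = bytes(PAGE) * 6 + b"\x01" * PAGE + b"\x02" * PAGE
print(duplicate_fraction(image))  # 0.625 (5 of 8 pages repeat earlier content)
```

    A duplicated page is recoverable from its twin after an uncorrectable error, which is what lets content similarity reduce node failures.
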

  15. Eyeglass Large Aperture, Lightweight Space Optics FY2000 - FY2002 LDRD Strategic Initiative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hyde, R

    2003-02-10

    A series of studies by the Air Force, the National Reconnaissance Office and NASA have identified the critical role played by large optics in fulfilling many of the space related missions of these agencies. Whether it is the Next Generation Space Telescope for NASA, high resolution imaging systems for NRO, or beam weaponry for the Air Force, the diameter of the primary optic is central to achieving high resolution (imaging) or a small spot size on target (lethality). While the detailed requirements differ for each application (high resolution imaging over the visible and near-infrared for earth observation, high damage threshold but single-wavelength operation for directed energy), the challenges of a large, lightweight primary optic which is space compatible and operates with high efficiency are the same. The advantage of such large optics to national surveillance applications is that it permits these observations to be carried out with much greater effectiveness than with smaller optics. For laser weapons, the advantage is that it permits more tightly focused beams which can be leveraged into either greater effective range, reduced laser power, and/or smaller on-target spot sizes; weapon systems can be made either much more effective or much less expensive. This application requires only single-wavelength capability, but places an emphasis upon robust, rapidly targetable optics. The advantages of large aperture optics to astronomy are that it increases the sensitivity and resolution with which we can view the universe. This can be utilized either for general purpose astronomy, allowing us to examine greater numbers of objects in more detail and at greater range, or it can enable the direct detection and detailed examination of extra-solar planets. This application requires large apertures (for both light-gathering and resolution reasons), with broad-band spectral capability, but does not emphasize either large fields-of-view or pointing agility.
Despite differences in their requirements and implementations, the fundamental difficulty in utilizing large aperture optics is the same for all of these applications: It is extremely difficult to design large aperture space optics which are both optically precise and can meet the practical requirements for launch and deployment in space. At LLNL we have developed a new concept (Eyeglass) which uses large diffractive optics to solve both of these difficulties; greatly reducing both the mass and the tolerance requirements for large aperture optics. During previous LDRD-supported research, we developed this concept, built and tested broadband diffractive telescopes, and built 50 cm aperture diffraction-limited diffractive lenses (the largest in the world). This work is fully described in UCRL-ID-136262, Eyeglass: A Large Aperture Space Telescope. However, there is a large gap between optical proof-of-principle with sub-meter apertures, and actual 50 meter space telescopes. This gap is far too large (both in financial resources and in spacecraft expertise) to be filled internally at LLNL; implementation of large aperture diffractive space telescopes must be done externally using non-LLNL resources and expertise. While LLNL will never become the primary contractor and integrator for large space optical systems, our natural role is to enable these devices by developing the capability of producing very large diffractive optics. Accordingly, the purpose of the Large Aperture, Lightweight Space Optics Strategic Initiative was to develop the technology to fabricate large, lightweight diffractive lenses. The additional purpose of this Strategic Initiative was, of course, to demonstrate this lens-fabrication capability in a fashion compellingly enough to attract the external support necessary to continue along the path to full-scale space-based telescopes. 
During this 3 year effort (FY2000-FY2002) we have developed the capability of optically smoothing and diffractively-patterning thin meter-sized sheets of glass into lens panels. We have also developed alignment and seaming techniques which allow individual lens panels to be assembled together, forming a much larger, segmented, diffractive lens. The capabilities provided by this LDRD-supported developmental effort were then demonstrated by the fabrication and testing of a lightweight, 5 meter aperture, diffractive lens.
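
    The reason aperture diameter drives both imaging resolution and laser spot size is the diffraction limit, θ ≈ 1.22·λ/D (Rayleigh criterion). A quick back-of-the-envelope comparison, with an illustrative visible wavelength, shows the scaling from the 50 cm demonstration lens toward a 50 m aperture:

```python
def diffraction_limit_rad(wavelength_m, aperture_m):
    """Angular resolution of a circular aperture (Rayleigh criterion)."""
    return 1.22 * wavelength_m / aperture_m

wavelength = 550e-9          # visible light, illustrative
for d in (0.5, 5.0, 50.0):   # 50 cm demo lens up to a 50 m telescope
    print(f"D = {d:5.1f} m  ->  {diffraction_limit_rad(wavelength, d):.2e} rad")
```

    Since θ scales as 1/D, the 50 m aperture resolves detail one hundred times finer than the 0.5 m demonstration lens at the same wavelength.
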

  16. Design of a practical model-observer-based image quality assessment method for CT imaging systems

    NASA Astrophysics Data System (ADS)

    Tseng, Hsin-Wu; Fan, Jiahua; Cao, Guangzhi; Kupinski, Matthew A.; Sainath, Paavana

    2014-03-01

    The channelized Hotelling observer (CHO) is a powerful method for quantitative image quality evaluations of CT systems and their image reconstruction algorithms. It has recently been used to validate the dose reduction capability of iterative image-reconstruction algorithms implemented on CT imaging systems. The use of the CHO for routine and frequent system evaluations is desirable both for quality assurance evaluations as well as further system optimizations. The use of channels substantially reduces the amount of data required to achieve accurate estimates of observer performance. However, the number of scans required is still large even with the use of channels. This work explores different data reduction schemes and designs a new approach that requires only a few CT scans of a phantom. For this work, the leave-one-out likelihood (LOOL) method developed by Hoffbeck and Landgrebe is studied as an efficient method of estimating the covariance matrices needed to compute CHO performance. Three different kinds of approaches are included in the study: a conventional CHO estimation technique with a large sample size, a conventional technique with fewer samples, and the new LOOL-based approach with fewer samples. The mean value and standard deviation of the area under the ROC curve (AUC) are estimated by the shuffle method. Both simulation and real data results indicate that an 80% data reduction can be achieved without loss of accuracy. This data reduction makes the proposed approach a practical tool for routine CT system assessment.
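
    The AUC figure of merit used here has a simple nonparametric estimator: the Mann-Whitney statistic, i.e. the probability that a signal-present test statistic exceeds a signal-absent one (ties counted as one half). A minimal sketch, with made-up test-statistic values:

```python
import numpy as np

def auc(t_absent, t_present):
    """Nonparametric AUC estimate from two sets of observer test statistics."""
    t_a, t_p = np.asarray(t_absent), np.asarray(t_present)
    greater = (t_p[:, None] > t_a[None, :]).sum()
    ties = (t_p[:, None] == t_a[None, :]).sum()
    return (greater + 0.5 * ties) / (t_p.size * t_a.size)

print(auc([0.10, 0.40, 0.35], [0.80, 0.90, 0.70]))  # 1.0 (perfect separation)
print(auc([0, 1], [0, 1]))                          # 0.5 (chance performance)
```

    The shuffle method mentioned in the abstract then repeats such an estimate over resampled splits of the data to obtain the mean and standard deviation of AUC.
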

  17. Novel Liquid Sorbent CO2 Removal System for Microgravity Applications

    NASA Technical Reports Server (NTRS)

    Rogers, Tanya; Westover, Shayne; Graf, John

    2017-01-01

    Removing carbon dioxide (CO2) from a spacecraft environment for deep space exploration requires a robust system that is low in weight, power, and volume. Current state-of-the-art microgravity-compatible CO2 removal systems, such as the carbon dioxide removal assembly (CDRA), utilize solid sorbents that demand high power usage due to high desorption temperatures and a large volume to accommodate their comparatively low capacity for CO2. Additionally, solid sorbent systems contain several mechanical components that significantly reduce reliability and contribute to a large overall mass. A liquid sorbent based system has been evaluated as an alternative and is projected to consume 65% less power, weight, and volume than solid sorbent CO2 scrubbers. This paper presents the design of a liquid sorbent CO2 removal system for microgravity applications.

  18. Internet based ECG medical information system.

    PubMed

    James, D A; Rowlands, D; Mahnovetski, R; Channells, J; Cutmore, T

    2003-03-01

    Physiological monitoring of humans for medical applications is well established and ready to be adapted to the Internet. This paper describes the implementation of a Medical Information System (MIS-ECG system) incorporating an Internet based ECG acquisition device. Traditionally clinical monitoring of ECG is largely a labour intensive process with data being typically stored on paper. Until recently, ECG monitoring applications have also been constrained somewhat by the size of the equipment required. Today's technology enables large and fixed hospital monitoring systems to be replaced by small portable devices. With an increasing emphasis on health management a truly integrated information system for the acquisition, analysis, patient particulars and archiving is now a realistic possibility. This paper describes recent Internet and technological advances and presents the design and testing of the MIS-ECG system that utilises those advances.

  19. Hazardous Materials Pharmacies - A Vital Component of a Robust P2 Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCarter, S.

    2006-07-01

    Integrating pollution prevention (P2) into the Department of Energy Integrated Safety Management (ISM) - Environmental Management System (EMS) approach, required by DOE Order 450.1, leads to an enhanced ISM program at large and complex installations and facilities. One of the building blocks to integrating P2 into a comprehensive environmental and safety program is the control and tracking of the amounts, types, and flow of hazardous materials used on a facility. Hazardous materials pharmacies (typically called HazMarts) provide a solid approach to resolving this issue through business practice changes that reduce use, avoid excess, and redistribute surplus. If understood from concept to implementation, the HazMart is a powerful tool for reducing pollution at the source, tracking inventory storage, controlling usage and flow, and summarizing data for reporting requirements. Pharmacy options can range from a strict, single control point for all hazardous materials to a virtual system, where the inventory is user controlled and reported over a common system. Designing and implementing HazMarts on large, diverse installations or facilities present a unique set of issues. This is especially true of research and development (R and D) facilities where the chemical use requirements are extensive and often classified. There are often multiple sources of supply; a wide variety of chemical requirements; a mix of containers ranging from small ampoules to large bulk storage tanks; and a wide range of tools used to track hazardous materials, ranging from simple purchase inventories to sophisticated tracking software. Computer systems are often not uniform in capacity, capability, or operating systems, making it difficult to use a server-based unified tracking system software. Each of these issues has a solution or set of solutions tied to fundamental business practices.
Each requires an understanding of the problem at hand, which, in turn, requires good communication among all potential users. A key attribute to a successful HazMart is that everybody must use the same program. That requirement often runs directly into the biggest issue of all... institutional resistance to change. To be successful, the program has to be both a top-down and bottom-up driven process. The installation or facility must set the policy and the requirement, but all of the players have to buy in and participate in building and implementing the program. Dynamac's years of experience assessing hazardous materials programs, providing business case analyses, and recommending and implementing pharmacy approaches for federal agencies has provided us with key insights into the issues, problems, and the array of solutions available. This paper presents the key steps required to implement a HazMart, explores the advantages and pitfalls associated with a HazMart, and presents some options for implementing a pharmacy or HazMart on complex installations and R and D facilities. (authors)

  20. Reversible Parallel Discrete-Event Execution of Large-scale Epidemic Outbreak Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Seal, Sudip K

    2010-01-01

    The spatial scale, runtime speed and behavioral detail of epidemic outbreak simulations together require the use of large-scale parallel processing. In this paper, an optimistic parallel discrete event execution of a reaction-diffusion simulation model of epidemic outbreaks is presented, with an implementation over the μsik simulator. Rollback support is achieved with the development of a novel reversible model that combines reverse computation with a small amount of incremental state saving. Parallel speedup and other runtime performance metrics of the simulation are tested on a small (8,192-core) Blue Gene/P system, while scalability is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes (up to several hundred million individuals in the largest case) are exercised.
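
    The reverse-computation idea can be illustrated with a toy event handler: each forward event has an inverse that undoes its state change on rollback, and incremental state saving is used only for the small piece (here, a saturating clamp) that is not invertible from the event alone. The SIR-style counts are illustrative, not the paper's reaction-diffusion model.

```python
class Cell:
    def __init__(self, susceptible, infected):
        self.s, self.i = susceptible, infected
        self._saved = []            # incremental state saving for the clamp

    def infect_event(self, n):
        """Forward: move up to n individuals from susceptible to infected."""
        actual = min(n, self.s)     # the clamp is not reversible from n alone
        self._saved.append(actual)  # save only the small irreversible bit
        self.s -= actual
        self.i += actual

    def reverse_infect_event(self):
        """Rollback: exactly undo the most recent infect_event."""
        actual = self._saved.pop()
        self.s += actual
        self.i -= actual

cell = Cell(susceptible=100, infected=1)
cell.infect_event(5)
cell.infect_event(200)              # clamped to the 95 remaining
cell.reverse_infect_event()         # optimistic rollback, most recent first
cell.reverse_infect_event()
print((cell.s, cell.i))             # (100, 1): the initial state is restored
```

    In an optimistic simulator, such reverse handlers replace full state checkpoints, which is what keeps rollback memory overhead small at scale.
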

  1. Graph theory approach to the eigenvalue problem of large space structures

    NASA Technical Reports Server (NTRS)

    Reddy, A. S. S. R.; Bainum, P. M.

    1981-01-01

    Graph theory is used to obtain numerical solutions to eigenvalue problems of large space structures (LSS) characterized by a state vector of large dimensions. The LSS are considered as large, flexible systems requiring both orientation and surface shape control. Graphic interpretation of the determinant of a matrix is employed to reduce a higher dimensional matrix into combinations of smaller dimensional sub-matrices. The reduction is implemented by means of a Boolean equivalent of the original matrices, formulated to obtain smaller dimensional equivalents of the original numerical matrix. Computation time is reduced and more accurate solutions become possible. An example is provided in the form of a free-free square plate. Linearized system equations and numerical values of a stiffness matrix are presented, featuring a state vector with 16 components.
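
    One way to see the Boolean-equivalent idea: the zero/non-zero pattern of a matrix defines a graph, and when that graph splits into disconnected components the eigenvalue problem decouples into independent sub-matrices. The sketch below is a simplified illustration of that principle (not the paper's determinant-based reduction), using a made-up 4x4 stiffness-like matrix with two decoupled 2x2 blocks.

```python
import numpy as np

def components(boolean_pattern):
    """Connected components of the undirected graph of a zero/non-zero pattern."""
    n = len(boolean_pattern)
    seen, comps = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            comp.append(u)
            stack.extend(v for v in range(n)
                         if boolean_pattern[u][v] or boolean_pattern[v][u])
        comps.append(sorted(comp))
    return comps

# Two decoupled 2x2 blocks inside a 4x4 stiffness-like matrix.
K = np.array([[2., 1, 0, 0], [1, 2, 0, 0], [0, 0, 3, 1], [0, 0, 1, 3]])
eigs = []
for comp in components((K != 0).tolist()):
    eigs.extend(np.linalg.eigvalsh(K[np.ix_(comp, comp)]))  # solve per block
print(np.allclose(sorted(eigs), [1, 2, 3, 4]))  # True: matches the full K
```

    Solving two 2x2 problems instead of one 4x4 problem is exactly the computation-time saving the abstract describes, at a much smaller scale.
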

  2. Hierarchical Time-Lagged Independent Component Analysis: Computing Slow Modes and Reaction Coordinates for Large Molecular Systems.

    PubMed

    Pérez-Hernández, Guillermo; Noé, Frank

    2016-12-13

    Analysis of molecular dynamics, for example using Markov models, often requires the identification of order parameters that are good indicators of the rare events, i.e. good reaction coordinates. Recently, it has been shown that the time-lagged independent component analysis (TICA) finds the linear combinations of input coordinates that optimally represent the slow kinetic modes and may serve to define reaction coordinates between the metastable states of the molecular system. A limitation of the method is that both computing time and memory requirements scale with the square of the number of input features. For large protein systems, this hinders the use of extensive feature sets such as the distances between all pairs of residues or even heavy atoms. Here we derive a hierarchical TICA (hTICA) method that approximates the full TICA solution by a hierarchical, divide-and-conquer calculation. By using hTICA on distances between heavy atoms we identify previously unknown relaxation processes in the bovine pancreatic trypsin inhibitor.
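
    The full TICA solution that hTICA approximates is the generalized eigenproblem C(τ)·w = λ·C(0)·w, where C(0) is the instantaneous and C(τ) the symmetrized time-lagged covariance of the (mean-free) input features. A minimal two-feature sketch with a synthetic slow/fast signal pair:

```python
import numpy as np

def tica(X, tau):
    """Plain TICA: slowest linear modes of a multivariate time series X."""
    X = X - X.mean(axis=0)
    C0 = X.T @ X / len(X)                       # instantaneous covariance
    A, B = X[:-tau], X[tau:]
    Ct = (A.T @ B + B.T @ A) / (2 * len(A))     # symmetrized lagged covariance
    lams, W = np.linalg.eig(np.linalg.solve(C0, Ct))
    order = np.argsort(-lams.real)              # slowest (largest lambda) first
    return lams.real[order], W.real[:, order]

rng = np.random.default_rng(2)
slow = np.cumsum(rng.standard_normal(5000)) * 0.01  # slowly varying signal
fast = rng.standard_normal(5000)                    # fast white noise
X = np.column_stack([slow + 0.1 * fast, fast])      # mixed observables

lams, W = tica(X, tau=10)
print(lams[0] > 0.8)  # True: the leading mode recovers the slow process
```

    The quadratic cost the abstract mentions is visible here: `C0` and `Ct` are n×n in the number of features, which is what the hierarchical divide-and-conquer scheme avoids building all at once.
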

  3. Localization of multiple defects using the compact phased array (CPA) method

    NASA Astrophysics Data System (ADS)

    Senyurek, Volkan Y.; Baghalian, Amin; Tashakori, Shervin; McDaniel, Dwayne; Tansel, Ibrahim N.

    2018-01-01

    Array systems of transducers have found numerous applications in detection and localization of defects in structural health monitoring (SHM) of plate-like structures. Different types of array configurations and analysis algorithms have been used to improve the process of localization of defects. For accurate and reliable monitoring of large structures by array systems, a high number of actuator and sensor elements are often required. In this study, a compact phased array system consisting of only three piezoelectric elements is used in conjunction with an updated total focusing method (TFM) for localization of single and multiple defects in an aluminum plate. The accuracy of the localization process was greatly improved by including wave propagation information in TFM. Results indicated that the proposed CPA approach can locate single and multiple defects with high accuracy while decreasing the processing costs and the number of required transducers. This method can be utilized in critical applications such as aerospace structures where the use of a large number of transducers is not desirable.
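
    The total focusing method underlying the localization above is a delay-and-sum: for each image point, every transmit-receive A-scan is sampled at the time of flight from transmitter to point to receiver, and the samples are summed. The 3-element geometry, wave speed, and spike-like synthetic echoes below are illustrative placeholders, not the paper's updated TFM.

```python
import numpy as np

c = 5000.0                 # assumed wave speed [m/s]
fs = 5e6                   # assumed sampling rate [Hz]
elems = np.array([[0.00, 0.0], [0.05, 0.0], [0.10, 0.0]])  # 3 transducers [m]
defect = np.array([0.06, 0.04])

# Synthesize one A-scan per transmit/receive pair: a unit spike at the
# defect's round-trip time of flight.
n_t = 600
scans = np.zeros((3, 3, n_t))
for tx in range(3):
    for rx in range(3):
        tof = (np.linalg.norm(defect - elems[tx]) +
               np.linalg.norm(defect - elems[rx])) / c
        scans[tx, rx, int(round(tof * fs))] = 1.0

def tfm(pixel):
    """Focused amplitude at one image point by delay-and-sum over all pairs."""
    total = 0.0
    for tx in range(3):
        for rx in range(3):
            tof = (np.linalg.norm(pixel - elems[tx]) +
                   np.linalg.norm(pixel - elems[rx])) / c
            total += scans[tx, rx, int(round(tof * fs))]
    return total

print(tfm(defect) > tfm(np.array([0.02, 0.07])))  # True: focuses at the defect
```

    Evaluating `tfm` over a grid of points yields the image; all nine A-scans add coherently only at the true defect location, which is why even a compact three-element array can localize flaws.
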

  4. San Juan National Forest Land Management Planning Support System (LMPSS) requirements definition

    NASA Technical Reports Server (NTRS)

    Werth, L. F. (Principal Investigator)

    1981-01-01

    The role of remote sensing data as it relates to a three-component land management planning system (geographic information, data base management, and planning model) can be understood only when user requirements are known. Personnel at the San Juan National Forest in southwestern Colorado were interviewed to determine data needs for managing and monitoring timber, rangelands, wildlife, fisheries, soils, water, geology and recreation facilities. While all the information required for land management planning cannot be obtained using remote sensing techniques, valuable information can be provided for the geographic information system. A wide range of sensors such as small and large format cameras, synthetic aperture radar, and LANDSAT data should be utilized. Because of the detail and accuracy required, high altitude color infrared photography should serve as the baseline data base and be supplemented and updated with data from the other sensors.

  5. Economics of ion propulsion for large space systems

    NASA Technical Reports Server (NTRS)

    Masek, T. D.; Ward, J. W.; Rawlin, V. K.

    1978-01-01

    This study of advanced electrostatic ion thrusters for space propulsion was initiated to determine the suitability of the baseline 30-cm thruster for future missions and to identify other thruster concepts that would better satisfy mission requirements. The general scope of the study was to review mission requirements, select thruster designs to meet these requirements, assess the associated thruster technology requirements, and recommend short- and long-term technology directions that would support future thruster needs. Preliminary design concepts for several advanced thrusters were developed to assess the potential practical difficulties of a new design. This study produced useful general methodologies for assessing both planetary and earth orbit missions. For planetary missions, the assessment is in terms of payload performance as a function of propulsion system technology level. For earth orbit missions, the assessment is made on the basis of cost (cost sensitivity to propulsion system technology level).

  6. Wind tunnel investigation of a high lift system with pneumatic flow control

    NASA Astrophysics Data System (ADS)

    Victor, Pricop Mihai; Mircea, Boscoianu; Daniel-Eugeniu, Crunteanu

    2016-06-01

    Next-generation passenger aircraft require more efficient high lift systems under size and mass constraints to achieve greater fuel efficiency. This can be obtained in various ways: by simplifying the mechanical design of the high lift system to a single slotted flap while maintaining or improving aerodynamic performance, or by retaining the current complexity and improving the aerodynamics even further. Laminar wings have less efficient leading edge high lift systems, if any, requiring more performance from the trailing edge flap. Pulsed blowing active flow control (AFC) in the gap of a single element flap is investigated for a relatively large model. The wind tunnel model, test campaign, results, and conclusions are presented.

  7. Optically controlled phased-array antenna technology for space communication systems

    NASA Technical Reports Server (NTRS)

    Kunath, Richard R.; Bhasin, Kul B.

    1988-01-01

    Using MMICs in phased-array applications above 20 GHz requires complex RF and control signal distribution systems. Conventional waveguide, coaxial cable, and microstrip methods are undesirable due to their high weight, high loss, limited mechanical flexibility and large volume. An attractive alternative to these transmission media, for RF and control signal distribution in MMIC phased-array antennas, is optical fiber. Presented are potential system architectures and their associated characteristics. The status of high frequency opto-electronic components needed to realize the potential system architectures is also discussed. It is concluded that an optical fiber network will reduce weight and complexity, and increase reliability and performance, but may require higher power.

  8. Attitude control challenges for earth orbiters of the 1980's

    NASA Technical Reports Server (NTRS)

    Hibbard, W.

    1980-01-01

    Experience gained in designing attitude control systems for orbiting spacecraft of the late 1980's is related. Implications for satellite attitude control design of the guidance capabilities, rendezvous and recovery requirements, use of multiple-use spacecraft and the development of large spacecraft associated with the advent of the Space Shuttle are considered. Attention is then given to satellite attitude control requirements posed by the Tracking and Data Relay Satellite System, the Global Positioning System, the NASA End-to-End Data System, and Shuttle-associated subsatellites. The anticipated completion and launch of the Space Telescope, which will provide one of the first experiences with the new generation of attitude control, is also pointed out.

  9. Optimize Resources and Help Reduce Cost of Ownership with Dell[TM] Systems Management

    ERIC Educational Resources Information Center

    Technology & Learning, 2008

    2008-01-01

    Maintaining secure, convenient administration of the PC system environment can be a significant drain on resources. Deskside visits can greatly increase the cost of supporting a large number of computers. Even simple tasks, such as tracking inventory or updating software, quickly become expensive when they require physically visiting every…

  10. Online Class Review: Using Streaming-Media Technology

    ERIC Educational Resources Information Center

    Loudon, Marc; Sharp, Mark

    2006-01-01

    We present an automated system that allows students to replay both audio and video from a large nonmajors' organic chemistry class as streaming RealMedia. Once established, this system requires no technical intervention and is virtually transparent to the instructor. This gives students access to online class review at any time. Assessment has…

  11. A Study of Students' Reasoning about Probabilistic Causality: Implications for Understanding Complex Systems and for Instructional Design

    ERIC Educational Resources Information Center

    Grotzer, Tina A.; Solis, S. Lynneth; Tutwiler, M. Shane; Cuzzolino, Megan Powell

    2017-01-01

    Understanding complex systems requires reasoning about causal relationships that behave or appear to behave probabilistically. Features such as distributed agency, large spatial scales, and time delays obscure co-variation relationships and complex interactions can result in non-deterministic relationships between causes and effects that are best…

  12. Keeping trees as assets

    Treesearch

    Kevin T. Smith

    2009-01-01

    Landscape trees have real value and contribute to making livable communities. Making the most of that value requires providing trees with the proper care and attention. As potentially large and long-lived organisms, trees benefit from commitment to regular care that respects the natural tree system. This system captures, transforms, and uses energy to survive, grow,...

  13. 7 CFR 4280.103 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... feet. Large wind system. A wind energy project for which the rated power of the individual wind turbine... system for which the rated power of the wind turbine is 100kW or smaller and with a generator hub height... applicable law and land management plans and the requirements for old-growth maintenance, restoration, and...

  14. 7 CFR 4280.103 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... feet. Large wind system. A wind energy project for which the rated power of the individual wind turbine... system for which the rated power of the wind turbine is 100kW or smaller and with a generator hub height... applicable law and land management plans and the requirements for old-growth maintenance, restoration, and...

  15. 7 CFR 4280.103 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... feet. Large wind system. A wind energy project for which the rated power of the individual wind turbine... system for which the rated power of the wind turbine is 100kW or smaller and with a generator hub height... applicable law and land management plans and the requirements for old-growth maintenance, restoration, and...

  16. Site Management and Productivity in Tropical Forest Plantations

    Treesearch

    A. Tiarks; E.K.S. Nambiar; C. Cossalter

    1998-01-01

    Tropical countries are expanding plantation forestry to develop sustainable wood production systems. Much of this is based on short rotations of exotic species. These systems require large capital investments, represent intensive land use, and increase the demands on the soil. To develop options for maintaining or increasing productivity a partner-project was initiated...

  17. Knowledge Production within the Innovation System: A Case Study from the United Kingdom

    ERIC Educational Resources Information Center

    Wilson-Medhurst, Sarah

    2010-01-01

    This paper focuses on a key issue for university managers, educational developers and teaching practitioners: that of producing new operational knowledge in the innovation system. More specifically, it explores the knowledge required to guide individual and institutional styles of teaching and learning in a large multi-disciplinary faculty. The…

  18. Species-Specific Elements in the Large T-Antigen J Domain Are Required for Cellular Transformation and DNA Replication by Simian Virus 40

    PubMed Central

    Sullivan, Christopher S.; Tremblay, James D.; Fewell, Sheara W.; Lewis, John A.; Brodsky, Jeffrey L.; Pipas, James M.

    2000-01-01

    The J domain of simian virus 40 (SV40) large T antigen is required for efficient DNA replication and transformation. Despite previous reports demonstrating the promiscuity of J domains in heterologous systems, results presented here show the requirement for specific J-domain sequences in SV40 large-T-antigen-mediated activities. In particular, chimeric-T-antigen constructs in which the SV40 T-antigen J domain was replaced with that from the yeast Ydj1p or Escherichia coli DnaJ proteins failed to replicate in BSC40 cells and did not transform REF52 cells. However, T antigen containing the JC virus J domain was functional in these assays, although it was less efficient than the wild type. The inability of some large-T-antigen chimeras to promote DNA replication and elicit cellular transformation was not due to a failure to interact with hsc70, since a nonfunctional chimera, containing the DnaJ J domain, bound hsc70. However, this nonfunctional chimeric T antigen was reduced in its ability to stimulate hsc70 ATPase activity and unable to liberate E2F from p130, indicating that transcriptional activation of factors required for cell growth and DNA replication may be compromised. Our data suggest that the T-antigen J domain harbors species-specific elements required for viral activities in vivo. PMID:10891510

  19. The adaption and use of research codes for performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebetrau, A.M.

    1987-05-01

    Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.

  20. Random access in large-scale DNA data storage.

    PubMed

    Organick, Lee; Ang, Siena Dumas; Chen, Yuan-Jyue; Lopez, Randolph; Yekhanin, Sergey; Makarychev, Konstantin; Racz, Miklos Z; Kamath, Govinda; Gopalan, Parikshit; Nguyen, Bichlien; Takahashi, Christopher N; Newman, Sharon; Parker, Hsing-Yeh; Rashtchian, Cyrus; Stewart, Kendall; Gupta, Gagan; Carlson, Robert; Mulligan, John; Carmean, Douglas; Seelig, Georg; Ceze, Luis; Strauss, Karin

    2018-03-01

    Synthetic DNA is durable and can encode digital data with high density, making it an attractive medium for data storage. However, recovering stored data on a large scale currently requires all the DNA in a pool to be sequenced, even if only a subset of the information needs to be extracted. Here, we encode and store 35 distinct files (over 200 MB of data), in more than 13 million DNA oligonucleotides, and show that we can recover each file individually and with no errors, using a random access approach. We design and validate a large library of primers that enable individual recovery of all files stored within the DNA. We also develop an algorithm that greatly reduces the sequencing read coverage required for error-free decoding by maximizing information from all sequence reads. These advances demonstrate a viable, large-scale system for DNA data storage and retrieval.

  1. Large-Scale Cryogenic Testing of Launch Vehicle Ground Systems at the Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Ernst, E. W.; Sass, J. P.; Lobemeyer, D. A.; Sojourner, S. J.; Hatfield, W. H.; Rewinkel, D. A.

    2007-01-01

    The development of a new launch vehicle to support NASA's future exploration plans requires significant redesign and upgrade of Kennedy Space Center's (KSC) launch pad and ground support equipment systems. In many cases, specialized test equipment and systems will be required to certify the function of the new system designs under simulated operational conditions, including propellant loading. This paper provides an overview of the cryogenic test infrastructure that is in place at KSC to conduct development and qualification testing that ranges from the component level to the integrated-system level. An overview of the major cryogenic test facilities is provided, along with a detailed explanation of the technology focus area for each facility.

  2. Picture Archiving And Communication Systems (PACS): Introductory Systems Analysis Considerations

    NASA Astrophysics Data System (ADS)

    Hughes, Simon H. C.

    1983-05-01

    Two fundamental problems face any hospital or radiology department that is thinking about installing a Picture Archiving and Communications System (PACS). First, though the need for PACS already exists, much of the relevant technology is just beginning to be developed. Second, the requirements of each hospital are different, so that any attempts to market a single PACS design for use in large numbers of hospitals are likely to meet with the same problems as were experienced with general-purpose Hospital Information Systems. This paper outlines some of the decision processes involved in arriving at specifications for each module of a PACS and indicates design principles which should be followed in order to meet individual hospital requirements, while avoiding the danger of short-term systems obsolescence.

  3. Solar hot water systems application to the solar building test facility and the Tech House

    NASA Technical Reports Server (NTRS)

    Goble, R. L.; Jensen, R. N.; Basford, R. C.

    1976-01-01

    Projects which relate to the current national thrust toward demonstrating applied solar energy are discussed. The first project has as its primary objective the application of a system comprised of a flat plate collector field, an absorption air conditioning system, and a hot water heating system to satisfy most of the annual cooling and heating requirements of a large commercial office building. The other project addresses the application of solar collector technology to the heating and hot water requirements of a domestic residence. In this case, however, the solar system represents only one of several important technology items, the primary objective for the project being the application of space technology to the American home.

  4. Automated screening of propulsion system test data by neural networks, phase 1

    NASA Technical Reports Server (NTRS)

    Hoyt, W. Andes; Whitehead, Bruce A.

    1992-01-01

    The evaluation of propulsion system test and flight performance data involves reviewing an extremely large volume of sensor data generated by each test. An automated system that screens large volumes of data and identifies propulsion system parameters which appear unusual or anomalous will increase the productivity of data analysis. Data analysts may then focus on a smaller subset of anomalous data for further evaluation of propulsion system tests. Such an automated data screening system would give NASA the benefit of a reduction in the manpower and time required to complete a propulsion system data evaluation. A phase 1 effort to develop a prototype data screening system is reported. Neural networks will detect anomalies based on nominal propulsion system data only. It appears that a reasonable goal for an operational system would be to screen out 95 pct. of the nominal data, leaving less than 5 pct. needing further analysis by human experts.
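    The screening workflow described, learning from nominal data only and then flagging anomalous parameters, can be shown with a deliberately simplified stand-in. The study used neural networks; the sketch below substitutes a per-parameter z-score test against the nominal mean and spread, which captures the same train-on-nominal, flag-the-rest idea without being the authors' method.

```python
import numpy as np

def fit_nominal(nominal):
    """Learn per-parameter statistics from nominal runs.
    nominal: array of shape (runs, parameters)."""
    return nominal.mean(axis=0), nominal.std(axis=0, ddof=1)

def screen(reading, mean, std, k=3.0):
    """Return indices of parameters deviating more than k sigma from nominal,
    so analysts only review the flagged subset."""
    z = np.abs((reading - mean) / std)
    return np.flatnonzero(z > k)
```

    A usage pattern would be to fit on archived nominal test data, then run `screen` over each new test's sensor vector and route only flagged parameters to human experts.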

  5. Mini-review: high rate algal ponds, flexible systems for sustainable wastewater treatment.

    PubMed

    Young, P; Taylor, M; Fallowfield, H J

    2017-06-01

    Over the last 20 years, there has been a growing requirement by governments around the world for organisations to adopt more sustainable practices. Wastewater treatment is no exception, with many currently used systems requiring large capital investment, land area and power consumption. High rate algal ponds (HRAPs) offer a sustainable, efficient and lower cost alternative to the systems currently in use. They are shallow, mixed, lagoon-based systems which aim to maximise wastewater treatment by creating optimal conditions for algal growth and oxygen production, the key processes which remove nitrogen and organic waste in HRAP systems. This design means they can treat wastewater to an acceptable quality in a fifth of the time of other lagoon systems while using 50% less surface area. This smaller land requirement decreases both construction costs and evaporative water losses, making larger volumes of treated water available for beneficial reuse. They are ideal for rural, peri-urban and remote communities as they require minimal power and little on-site management. This review will address the history of and current trends in high rate algal pond development and application; compare their performance with other systems when treating various wastewaters; and discuss their potential for production of added-value products. Finally, the review will consider areas requiring further research.

  6. Requirements Development Issues for Advanced Life Support Systems: Solid Waste Management

    NASA Technical Reports Server (NTRS)

    Levri, Julie A.; Fisher, John W.; Alazraki, Michael P.; Hogan, John A.

    2002-01-01

    Long duration missions pose substantial new challenges for solid waste management in Advanced Life Support (ALS) systems. These possibly include storing large volumes of waste material in a safe manner, rendering wastes stable or sterilized for extended periods of time, and/or processing wastes for recovery of vital resources. This is further complicated because future missions remain ill-defined with respect to waste stream quantity, composition and generation schedule. Without definitive knowledge of this information, development of requirements is hampered. Additionally, even if waste streams were well characterized, other operational and processing needs require clarification (e.g. resource recovery requirements, planetary protection constraints). Therefore, the development of solid waste management (SWM) subsystem requirements for long duration space missions is an inherently uncertain, complex and iterative process. The intent of this paper is to address some of the difficulties in writing requirements for missions that are not completely defined. This paper discusses an approach and motivation for ALS SWM requirements development, the characteristics of effective requirements, and the presence of those characteristics in requirements that are developed for uncertain missions. Associated drivers for life support system technological capability are also presented. A general means of requirements forecasting is discussed, including successive modification of requirements and the need to consider requirements integration among subsystems.

  7. Feasibility study of launch vehicle ground cloud neutralization

    NASA Technical Reports Server (NTRS)

    Vanderarend, P. C.; Stoy, S. T.; Kranyecz, T. E.

    1976-01-01

    The distribution of hydrogen chloride in the cloud was analyzed as a function of launch pad geometry and the rate of rise of the vehicle during the first 24 sec of burn in order to define neutralization requirements. Delivery systems of various types were developed in order to bring the proposed chemical agents into close contact with the hydrogen chloride. Approximately one-third of the total neutralizing agent required can be delivered from a ground-installed system at the launch pad; concentrated sodium carbonate solution is the preferred agent for this launch pad system. Two-thirds of the neutralization requirement appears to need delivery by aircraft. Only one chemical agent (ammonia) may reasonably be considered for delivery by aircraft, because the weight and bulk of all other agents are too large.

  8. Automated Design of Complex Dynamic Systems

    PubMed Central

    Hermans, Michiel; Schrauwen, Benjamin; Bienstman, Peter; Dambre, Joni

    2014-01-01

    Several fields of study are concerned with uniting the concept of computation with that of the design of physical systems. For example, a recent trend in robotics is to design robots in such a way that they require a minimal control effort. Another example is found in the domain of photonics, where recent efforts try to benefit directly from complex nonlinear dynamics to achieve more efficient signal processing. The underlying goal of these and similar research efforts is to internalize a large part of the necessary computations within the physical system itself by exploiting its inherent non-linear dynamics. This, however, often requires the optimization of large numbers of system parameters, related to both the system's structure as well as its material properties. In addition, many of these parameters are subject to fabrication variability or to variations through time. In this paper we apply a machine learning algorithm to optimize physical dynamic systems. We show that such algorithms, which are normally applied on abstract computational entities, can be extended to the field of differential equations and used to optimize an associated set of parameters which determine their behavior. We show that machine learning training methodologies are highly useful in designing robust systems, and we provide a set of both simple and complex examples using models of physical dynamical systems. Interestingly, the derived optimization method is intimately related to direct collocation, a method known in the field of optimal control. Our work suggests that the application domains of both machine learning and optimal control have a largely unexplored overlapping area which envelops a novel design methodology of smart and highly complex physical systems. PMID:24497969
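    The core idea, treating the parameters of a differential equation as trainable quantities and minimizing a loss over the simulated trajectory, can be sketched in miniature. The toy below fits the decay rate of dx/dt = -theta*x by finite-difference gradient descent on an Euler-simulated trajectory; the paper's models and training machinery are far richer, so this is an illustrative assumption-laden sketch, not the authors' algorithm.

```python
def simulate(theta, x0=1.0, dt=0.01, steps=200):
    """Euler integration of the toy dynamical system dx/dt = -theta * x."""
    traj, x = [], x0
    for _ in range(steps):
        x += dt * (-theta * x)
        traj.append(x)
    return traj

def loss(theta, target):
    """Mean squared error between the simulated and target trajectories."""
    traj = simulate(theta)
    return sum((a - b) ** 2 for a, b in zip(traj, target)) / len(target)

def fit(target, theta=0.1, lr=0.5, eps=1e-4, iters=200):
    """Gradient descent on the trajectory loss, with a central finite
    difference standing in for backpropagation through the simulator."""
    for _ in range(iters):
        grad = (loss(theta + eps, target) - loss(theta - eps, target)) / (2 * eps)
        theta -= lr * grad
    return theta
```

    With a differentiable simulator the finite difference would be replaced by automatic differentiation, which is what makes the approach scale to the large parameter counts the abstract mentions.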

  9. Empirical testing of an analytical model predicting electrical isolation of photovoltaic models

    NASA Astrophysics Data System (ADS)

    Garcia, A., III; Minning, C. P.; Cuddihy, E. F.

    A major design requirement for photovoltaic modules is that the encapsulation system be capable of withstanding large DC potentials without electrical breakdown. Presented is a simple analytical model which can be used to estimate material thickness to meet this requirement for a candidate encapsulation system or to predict the breakdown voltage of an existing module design. A series of electrical tests to verify the model are described in detail. The results of these verification tests confirmed the utility of the analytical model for preliminary design of photovoltaic modules.
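    The abstract does not reproduce the model itself, but the standard electrostatics behind such a design calculation is straightforward: in a stack of series dielectrics the displacement field is continuous, so the field in each layer follows from the layer permittivities, and the thickness needed to hold off a given DC potential follows from the material's dielectric strength. The sketch below is a hedged reconstruction on those assumptions (function names and the safety factor are illustrative, not from the paper).

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def layer_fields(voltage, layers):
    """Electric field (V/m) in each layer of a series dielectric stack.
    layers: list of (thickness_m, relative_permittivity).
    D = V / sum(t_i / (EPS0 * er_i)) is continuous, and E_i = D / (EPS0 * er_i)."""
    displacement = voltage / sum(t / (EPS0 * er) for t, er in layers)
    return [displacement / (EPS0 * er) for t, er in layers]

def required_thickness(voltage, dielectric_strength, safety=2.0):
    """Minimum single-layer thickness (m) keeping the field below
    dielectric_strength / safety."""
    return voltage * safety / dielectric_strength
```

    Note that the lower-permittivity layer carries the higher field, so it is usually the layer that sets the breakdown-limited thickness.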

  10. Digital Holographic Demonstration Systems by Stanford University and Siros Technologies

    NASA Astrophysics Data System (ADS)

    Hesselink, L.

    The performance of a holographic data storage system (HDSS) is measured by its useful capacity, transfer rate, and access time. Data should never be lost, requiring a corrected bit error rate (BER) of 10^-12 to 10^-15. To compete successfully in the large storage marketplace, an HDSS drive should be cost-competitive with improved performance over other drives. The exception could be certain niche markets where unique HDSS attributes, such as an all-solid-state implementation with extremely short access times or associative retrieval, are attractive or required.

  11. Application of importance sampling to the computation of large deviations in nonequilibrium processes.

    PubMed

    Kundu, Anupam; Sabhapandit, Sanjib; Dhar, Abhishek

    2011-03-01

    We present an algorithm for finding the probabilities of rare events in nonequilibrium processes. The algorithm consists of evolving the system with a modified dynamics for which the required event occurs more frequently. By keeping track of the relative weight of phase-space trajectories generated by the modified and the original dynamics one can obtain the required probabilities. The algorithm is tested on two model systems of steady-state particle and heat transport where we find a huge improvement from direct simulation methods.

  12. The Design and Development of a Management Information System for the Monterey Navy Flying Club.

    DTIC Science & Technology

    1986-03-27

    Management Information System for the Monterey Navy Flying Club. It supplies the tools necessary to enable the club manager to maintain all club records and generate required administrative and financial reports. The Monterey Navy Flying Club has one of the largest memberships of the Navy-sponsored flying clubs. As a result of this large membership and the amount of manual paperwork required to properly maintain club records, the manager's ability to provide necessary services and reports is severely hampered. The implementation of an efficient

  13. A light-weight compact proton gantry design with a novel dose delivery system for broad-energetic laser-accelerated beams

    NASA Astrophysics Data System (ADS)

    Masood, U.; Cowan, T. E.; Enghardt, W.; Hofmann, K. M.; Karsch, L.; Kroll, F.; Schramm, U.; Wilkens, J. J.; Pawelke, J.

    2017-07-01

    Proton beams may provide superior dose conformity in radiation therapy. However, large facility sizes and costs limit the widespread use of proton therapy (PT). Recent progress in proton acceleration via high-power laser systems has made it a compelling alternative to conventional accelerators, as it could potentially reduce the overall size and cost of PT facilities. However, laser-accelerated beams exhibit different characteristics than conventionally accelerated beams, i.e. very intense proton bunches with large divergences and broad energy spectra. Applying laser-driven beams in PT therefore requires new solutions for beam transport, such as beam capture, integrated energy selection, and beam shaping and delivery systems. The generation of these beams is limited by the low repetition rate of high-power lasers, so tumour irradiation must efficiently use the high proton fluence and broad energy spectrum of each bunch to keep treatment times short. This demands new dose delivery systems and irradiation field formation schemes. In this paper, we present a multi-functional, light-weight and compact proton gantry design for laser-driven sources based on iron-less pulsed high-field magnets. This achromatic design includes improved beam capture and energy selection systems, together with a novel beam shaping and dose delivery system called ELPIS. The ELPIS system utilizes magnetic fields, instead of physical scatterers, to broaden the spot size of broad-energy beams while simultaneously scanning them laterally. To investigate the clinical feasibility of this gantry design, we conducted a treatment planning study with a 3D treatment planning system augmented for pulsed beams with optimizable energy widths and selectable beam spot sizes. High-quality treatment plans could be achieved with these unconventional beam parameters, deliverable via the presented gantry and the ELPIS dose delivery system. Conventional PT gantries are huge and require large spaces to rotate the beam around the patient; the presented pulse-powered gantry system could reduce this space by up to a factor of 4. Further developments in next-generation petawatt laser systems and laser targets are crucial to reach higher proton energies. However, if the proton energies required for therapy applications are reached, it could become possible to reduce the footprint of PT facilities without compromising clinical standards.

  14. A light-weight compact proton gantry design with a novel dose delivery system for broad-energetic laser-accelerated beams.

    PubMed

    Masood, U; Cowan, T E; Enghardt, W; Hofmann, K M; Karsch, L; Kroll, F; Schramm, U; Wilkens, J J; Pawelke, J

    2017-07-07

    Proton beams may provide superior dose conformity in radiation therapy. However, large facility sizes and costs limit the widespread use of proton therapy (PT). Recent progress in proton acceleration via high-power laser systems has made it a compelling alternative to conventional accelerators, as it could potentially reduce the overall size and cost of PT facilities. However, laser-accelerated beams exhibit different characteristics than conventionally accelerated beams, i.e. very intense proton bunches with large divergences and broad energy spectra. Applying laser-driven beams in PT therefore requires new solutions for beam transport, such as beam capture, integrated energy selection, and beam shaping and delivery systems. The generation of these beams is limited by the low repetition rate of high-power lasers, so tumour irradiation must efficiently use the high proton fluence and broad energy spectrum of each bunch to keep treatment times short. This demands new dose delivery systems and irradiation field formation schemes. In this paper, we present a multi-functional, light-weight and compact proton gantry design for laser-driven sources based on iron-less pulsed high-field magnets. This achromatic design includes improved beam capture and energy selection systems, together with a novel beam shaping and dose delivery system called ELPIS. The ELPIS system utilizes magnetic fields, instead of physical scatterers, to broaden the spot size of broad-energy beams while simultaneously scanning them laterally. To investigate the clinical feasibility of this gantry design, we conducted a treatment planning study with a 3D treatment planning system augmented for pulsed beams with optimizable energy widths and selectable beam spot sizes. High-quality treatment plans could be achieved with these unconventional beam parameters, deliverable via the presented gantry and the ELPIS dose delivery system. Conventional PT gantries are huge and require large spaces to rotate the beam around the patient; the presented pulse-powered gantry system could reduce this space by up to a factor of 4. Further developments in next-generation petawatt laser systems and laser targets are crucial to reach higher proton energies. However, if the proton energies required for therapy applications are reached, it could become possible to reduce the footprint of PT facilities without compromising clinical standards.

  15. Stop Thief!

    ERIC Educational Resources Information Center

    American School and University, 1984

    1984-01-01

    A large urban school board has solved the security problem for electronically stored student records by refusing to file them on a dialup system and by requiring a notarized affidavit of custody before release to a parent or guardian. (TE)

  16. Control and applications of cooperating disparate robotic manipulators relevant to nuclear waste management

    NASA Technical Reports Server (NTRS)

    Lew, Jae Young; Book, Wayne J.

    1991-01-01

    Remote handling in nuclear waste management requires a robotic system with precise motion as well as a large workspace. The concept of a small arm mounted on the end of a large arm may satisfy such needs. However, such a serial configuration lacks the payload capacity that is crucial for handling massive objects, and it introduces additional structural flexibility. To overcome these problems, this paper proposes the topology of bracing the tip of the small arm (not the large arm) and placing an end effector in the middle of the chain. Control of these cooperating disparate manipulators is demonstrated in computer simulations. Thus, the robotic system can have the accuracy of the small arm while retaining the payload capacity and large workspace of the large arm.
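The small-arm-on-large-arm concept is a serial kinematic chain, so the end-effector position is just the running sum of link vectors under accumulated joint angles. A minimal planar sketch (link lengths and the 2+2 link split are illustrative assumptions, not values from the paper):

```python
import math

# Illustrative planar macro/micro chain: a 2-link "large" arm carrying a
# 2-link "small" arm at its tip. Forward kinematics accumulates relative
# joint angles and sums the link vectors.

def forward_kinematics(links, angles):
    """links: link lengths; angles: relative joint angles (radians)."""
    x = y = theta = 0.0
    for length, q in zip(links, angles):
        theta += q                    # absolute orientation of this link
        x += length * math.cos(theta)
        y += length * math.sin(theta)
    return x, y

# Large arm (2.0 m, 1.5 m) provides reach; small arm (0.3 m, 0.2 m) precision.
links = [2.0, 1.5, 0.3, 0.2]
tip = forward_kinematics(links, [0.0, 0.0, 0.0, 0.0])  # fully extended chain
```

Fine positioning then comes from small joint motions of the short distal links, whose position error scales with their short lengths, which is the accuracy argument the abstract makes.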

  17. Arctic Boreal Vulnerability Experiment (ABoVE) Science Cloud

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Schnase, J. L.; McInerney, M.; Webster, W. P.; Sinno, S.; Thompson, J. H.; Griffith, P. C.; Hoy, E.; Carroll, M.

    2014-12-01

    The effects of climate change are being revealed at alarming rates in the Arctic and Boreal regions of the planet. NASA's Terrestrial Ecology Program has launched a major field campaign to study these effects over the next 5 to 8 years. The Arctic Boreal Vulnerability Experiment (ABoVE) will challenge scientists to take measurements in the field, study remote observations, and run models to better understand the impacts of a rapidly changing climate in areas of Alaska and western Canada. The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center (GSFC) has partnered with the Terrestrial Ecology Program to create a science cloud designed for this field campaign - the ABoVE Science Cloud. The cloud combines traditional high-performance computing with emerging technologies to create an environment specifically designed for large-scale climate analytics. The ABoVE Science Cloud utilizes (1) virtualized high-speed InfiniBand networks, (2) a combination of high-performance file systems and object storage, and (3) virtual system environments tailored for data-intensive science applications. At the center of the architecture is a large object storage environment, much like a traditional high-performance file system, that supports data-proximal processing using technologies like MapReduce on a Hadoop Distributed File System (HDFS). Surrounding the storage is a cloud of high-performance compute resources with many processing cores and large memory, coupled to the storage through an InfiniBand network. Virtual systems can be tailored to a specific scientist and provisioned on the compute resources with extremely high-speed network connectivity to the storage and to other virtual systems. In this talk, we will present the architectural components of the science cloud and examples of how it is being used to meet the needs of the ABoVE campaign.
In our experience, the science cloud approach significantly lowers the barriers and risks to organizations that require high performance computing solutions and provides the NCCS with the agility required to meet our customers' rapidly increasing and evolving requirements.
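The data-proximal MapReduce processing mentioned above follows a simple map/shuffle/reduce contract, which Hadoop runs in parallel on the nodes holding the HDFS data blocks. A single-process sketch of that contract, on made-up station observations rather than ABoVE data:

```python
from collections import defaultdict

# Minimal MapReduce pattern: map records to (key, value) pairs, shuffle
# (group) by key, then reduce each group. Station names and temperatures
# below are invented examples.

def map_phase(records):
    """Emit (site, temperature) pairs from raw records."""
    for site, temp in records:
        yield site, temp

def shuffle(pairs):
    """Group values by key, as the framework does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce each group to a per-site mean temperature."""
    return {site: sum(vals) / len(vals) for site, vals in groups.items()}

records = [("fairbanks", -8.0), ("yellowknife", -12.0), ("fairbanks", -6.0)]
means = reduce_phase(shuffle(map_phase(records)))
```

On the real system the map tasks run next to the storage holding each data partition, so only the small shuffled intermediates cross the network, which is the point of data-proximal processing.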

  18. The implementation and use of Ada on distributed systems with high reliability requirements

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1986-01-01

    The general inadequacy of Ada for programming systems that must survive processor loss was shown. A solution to the problem was proposed in which there are no syntactic changes to Ada. The approach was evaluated using a full-scale, realistic application. The application used was the Advanced Transport Operating System (ATOPS), an experimental computer control system developed for a modified Boeing 737 aircraft. The ATOPS system is a full authority, real-time avionics system providing a large variety of advanced features. Methods of building fault tolerance into concurrent systems were explored. A set of criteria by which the proposed method will be judged was examined. Extensive interaction with personnel from Computer Sciences Corporation and NASA Langley occurred to determine the requirements of the ATOPS software. Backward error recovery in concurrent systems was assessed.

  19. A database management capability for Ada

    NASA Technical Reports Server (NTRS)

    Chan, Arvola; Danberg, SY; Fox, Stephen; Landers, Terry; Nori, Anil; Smith, John M.

    1986-01-01

    The data requirements of mission critical defense systems have been increasing dramatically. Command and control, intelligence, logistics, and even weapons systems are being required to integrate, process, and share ever increasing volumes of information. To meet this need, systems are now being specified that incorporate data base management subsystems for handling storage and retrieval of information. It is expected that a large number of the next generation of mission critical systems will contain embedded data base management systems. Since the use of Ada has been mandated for most of these systems, it is important to address the issues of providing data base management capabilities that can be closely coupled with Ada. A comprehensive distributed data base management project has been investigated. The key deliverables of this project are three closely related prototype systems implemented in Ada. These three systems are discussed.

  20. Concave Surround Optics for Rapid Multi-View Imaging

    DTIC Science & Technology

    2006-11-01

    thus is amenable to capturing dynamic events, avoiding the need to construct and calibrate an array of cameras. We demonstrate the system with a high...hard to assemble and calibrate. In this paper we present an optical system capable of rapidly moving the viewpoint around a scene. Our system...flexibility, large camera arrays are typically expensive and require significant effort to calibrate temporally, geometrically and chromatically
