Sample records for hardware evolving proven

  1. Hardware Evolution of Analog Speed Controllers for a DC Motor

    NASA Technical Reports Server (NTRS)

    Gwaltney, David A.; Ferguson, Michael I.

    2003-01-01

    Evolvable hardware provides the capability to evolve analog circuits to produce amplifier and filter functions. Conventional analog controller designs employ these same functions. Analog controllers for the control of the shaft speed of a DC motor are evolved on an evolvable hardware platform utilizing a Field Programmable Transistor Array (FPTA). The performance of these evolved controllers is compared to that of a conventional proportional-integral (PI) controller.

  2. Intrinsic Hardware Evolution for the Design and Reconfiguration of Analog Speed Controllers for a DC Motor

    NASA Technical Reports Server (NTRS)

    Gwaltney, David A.; Ferguson, Michael I.

    2003-01-01

    Evolvable hardware provides the capability to evolve analog circuits to produce amplifier and filter functions. Conventional analog controller designs employ these same functions. Analog controllers for the control of the shaft speed of a DC motor are evolved on an evolvable hardware platform utilizing a second generation Field Programmable Transistor Array (FPTA2). The performance of an evolved controller is compared to that of a conventional proportional-integral (PI) controller. It is shown that hardware evolution is able to create a compact design that provides good performance, while using considerably fewer functional electronic components than the conventional design. Additionally, the use of hardware evolution to provide fault tolerance by reconfiguring the design is explored. Experimental results are presented showing that significant recovery of capability can be made in the face of damaging induced faults.
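
    The comparison above hinges on a scalar fitness measure that scores a candidate controller against a command profile. The sketch below is our own illustration, not the authors' code: the motor constants and the `pi_controller` helper are assumed, and a toy first-order DC-motor model stands in for the real plant, with a conventional PI controller as the baseline.

```python
# Illustrative sketch (not the authors' code): scoring a speed controller
# against a step command, the same role an evolved FPTA circuit would play.
import numpy as np

def simulate(controller, t_end=2.0, dt=1e-3, setpoint=100.0):
    """Integrate a toy first-order DC-motor speed model under `controller`
    and return the integral of squared speed error (lower is better)."""
    speed, state, cost = 0.0, 0.0, 0.0
    tau, gain = 0.5, 20.0                    # assumed motor constants
    for _ in range(int(t_end / dt)):
        err = setpoint - speed
        u, state = controller(err, state)    # controller returns (volts, new state)
        u = float(np.clip(u, -24.0, 24.0))   # supply-rail saturation
        speed += dt * (gain * u - speed) / tau
        cost += err * err * dt
    return cost

def pi_controller(kp, ki, dt=1e-3):
    """Conventional PI baseline; `state` carries the error integral."""
    def ctrl(err, integral):
        integral += err * dt
        return kp * err + ki * integral, integral
    return ctrl

# An evolutionary search over FPTA configurations would minimise the same
# cost that these two PI gain sets are scored with here.
print(simulate(pi_controller(0.5, 2.0)), simulate(pi_controller(2.0, 8.0)))
```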

  3. PyEvolve: a toolkit for statistical modelling of molecular evolution.

    PubMed

    Butterfield, Andrew; Vedagiri, Vivek; Lang, Edward; Lawrence, Cath; Wakefield, Matthew J; Isaev, Alexander; Huttley, Gavin A

    2004-01-05

    Examining the distribution of variation has proven an extremely profitable technique in the effort to identify sequences of biological significance. Most approaches in the field, however, evaluate only the conserved portions of sequences, ignoring the biological significance of sequence differences. A suite of sophisticated likelihood-based statistical models from the field of molecular evolution provides the basis for extracting the information from the full distribution of sequence variation. The number of different problems to which phylogeny-based maximum likelihood calculations can be applied is extensive. Available software packages that can perform likelihood calculations suffer from a lack of flexibility and scalability, or employ error-prone approaches to model parameterisation. Here we describe the implementation of PyEvolve, a toolkit for the application of existing, and development of new, statistical methods for molecular evolution. We present the object architecture and design schema of PyEvolve, which includes an adaptable multi-level parallelisation schema. The approach for defining new methods is illustrated by implementing a novel dinucleotide model of substitution that includes a parameter for mutation of methylated CpGs, which required 8 lines of standard Python code to define. Benchmarking was performed using either a dinucleotide or codon substitution model applied to an alignment of BRCA1 sequences from 20 mammals, or a 10-species subset. Up to five-fold parallel performance gains over serial were recorded. Compared to leading alternative software, PyEvolve exhibited significantly better real-world performance for parameter-rich models with a large data set, reducing the time required for optimisation from approximately 10 days to approximately 6 hours. PyEvolve provides flexible functionality that can be used either for statistical modelling of molecular evolution, or the development of new methods in the field. The toolkit can be used interactively or by writing and executing scripts. The toolkit uses efficient processes for specifying the parameterisation of statistical models, and implements numerous optimisations that make highly parameter-rich likelihood functions solvable within hours on multi-CPU hardware. PyEvolve can be readily adapted in response to changing computational demands and hardware configurations to maximise performance. PyEvolve is released under the GPL and can be downloaded from http://cbis.anu.edu.au/software.
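
    As a point of reference for what these likelihood models compute, the sketch below fits the simplest of them, the Jukes-Cantor substitution model, to a pair of aligned sequences. It is a generic SciPy illustration, not PyEvolve's API; the sequences and function names are invented for the example.

```python
# Generic illustration (not PyEvolve's API): maximum-likelihood estimation of
# a Jukes-Cantor branch length, the simplest instance of the substitution
# models the toolkit generalises to trees and richer parameterisations.
import math
from scipy.optimize import minimize_scalar

def jc69_neg_log_likelihood(d, n_same, n_diff):
    e = math.exp(-4.0 * d / 3.0)
    p_same = 0.25 + 0.75 * e          # P(identical base after distance d)
    p_diff = 0.25 - 0.25 * e          # P(one specific different base)
    return -(n_same * math.log(p_same) + n_diff * math.log(p_diff))

seq_a = "ACGTACGTACGTACGTACGT"          # toy aligned sequences
seq_b = "ACGTACGAACGTTCGTACGA"
n_diff = sum(a != b for a, b in zip(seq_a, seq_b))
n_same = len(seq_a) - n_diff

fit = minimize_scalar(jc69_neg_log_likelihood, bounds=(1e-6, 5.0),
                      args=(n_same, n_diff), method="bounded")
# Closed-form JC69 distance, to check the numeric optimum:
d_closed = -0.75 * math.log(1.0 - (4.0 / 3.0) * n_diff / len(seq_a))
print(fit.x, d_closed)
```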

  4. Toward Evolvable Hardware Chips: Experiments with a Programmable Transistor Array

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian

    1998-01-01

    Evolvable Hardware is reconfigurable hardware that self-configures under the control of an evolutionary algorithm. The search for a hardware configuration can be performed using software models or, faster and more accurately, directly in reconfigurable hardware. Several experiments have demonstrated the possibility of automatically synthesizing both digital and analog circuits. The paper introduces an approach to automated synthesis of CMOS circuits, based on evolution on a Programmable Transistor Array (PTA). The approach is illustrated with a software experiment showing evolutionary synthesis of a circuit with a desired DC characteristic. A hardware implementation of a test PTA chip is then described, and the same evolutionary experiment is performed on the chip, demonstrating circuit synthesis/self-configuration directly in hardware.
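
    A minimal sketch of this style of evolutionary synthesis is given below. It is our own illustration, not JPL's code: the chromosome is one bit per programmable switch, the switch count is assumed, and `measure_dc_response` is a deterministic stub standing in for a SPICE run or an on-chip measurement of the DC transfer characteristic.

```python
# Sketch of PTA-style evolution: fitness is the error between the measured
# and the desired DC transfer characteristic of a switch configuration.
import random

N_SWITCHES = 24                                 # assumed switch count
TARGET = [min(1.0, 0.2 * i) for i in range(11)] # desired 11-point DC curve

def measure_dc_response(chromosome):
    """Stand-in for a SPICE run / chip measurement: maps a switch
    configuration to an 11-point DC sweep, deterministically."""
    rng = random.Random(sum(b << i for i, b in enumerate(chromosome)))
    return [rng.random() for _ in range(11)]

def fitness(chromosome):
    resp = measure_dc_response(chromosome)
    return -sum((r - t) ** 2 for r, t in zip(resp, TARGET))

def evolve(pop_size=50, generations=200, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_SWITCHES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 5]
        pop = elite + [[b ^ (random.random() < p_mut)      # bit-flip mutation
                        for b in random.choice(elite)]
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

print(fitness(evolve()))
```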

  5. Transistor Level Circuit Experiments using Evolvable Hardware

    NASA Technical Reports Server (NTRS)

    Stoica, A.; Zebulum, R. S.; Keymeulen, D.; Ferguson, M. I.; Daud, Taher; Thakoor, A.

    2005-01-01

    The Jet Propulsion Laboratory (JPL) performs research in fault-tolerant, long-life, and space-survivable electronics for the National Aeronautics and Space Administration (NASA). With that focus, JPL has been involved in Evolvable Hardware (EHW) technology research for the past several years. We have advanced the technology not only by simulation and evolution experiments, but also by designing, fabricating, and evolving a variety of transistor-based analog and digital circuits at the chip level. EHW refers to self-configuration of electronic hardware by evolutionary/genetic search mechanisms, thereby maintaining existing functionality in the presence of degradations due to aging, temperature, and radiation. In addition, EHW has the capability to reconfigure itself for new functionality when required for mission changes or encountered opportunities. Evolution experiments are performed using a genetic algorithm running on a DSP as the reconfiguration mechanism and controlling the evolvable hardware mounted on a self-contained circuit board. Rapid reconfiguration allows convergence to circuit solutions on the order of seconds. The paper illustrates hardware evolution results of electronic circuits and their ability to perform at temperatures as high as 230 C as well as under radiation doses of up to 250 kRad.

  6. On two new trends in evolvable hardware: employment of HDL-based structuring, and design of multi-functional circuits

    NASA Technical Reports Server (NTRS)

    Stoica, A.; Keymeulen, D.; Zebulum, R. S.; Ferguson, M. I.; Guo, X.

    2002-01-01

    This paper comments on some directions of growth for evolvable hardware, proposes research directions that address the scalability problem and gives examples of results in novel areas approached by EHW.

  7. J-2X Upper Stage Engine: Hardware and Testing 2009

    NASA Technical Reports Server (NTRS)

    Buzzell, James C.

    2009-01-01

    Mission: Common upper stage engine for Ares I and Ares V. Challenge: Use proven technology from Saturn, X-33, and RS-68 to develop the highest-Isp GG-cycle engine in history for two missions in record time. Key Features: LOX/LH2 GG cycle, series turbines (2), HIP-bonded MCC, pneumatic ball-sector valves, on-board engine controller, tube-wall regen nozzle/large passively-cooled nozzle extension, TEG boost/cooling. Development Philosophy: proven hardware, aggressive schedule, early risk reduction, requirements-driven.

  8. Exercise Countermeasure Hardware Evolution on ISS: The First Decade.

    PubMed

    Korth, Deborah W

    2015-12-01

    The hardware systems necessary to support exercise countermeasures to the deconditioning associated with microgravity exposure have evolved and improved significantly during the first decade of the International Space Station (ISS), resulting in both new types of hardware and enhanced performance capabilities for initial hardware items. The original suite of countermeasure hardware supported the first crews to arrive on the ISS and the improved countermeasure system delivered in later missions continues to serve the astronauts today with increased efficacy. Due to aggressive hardware development schedules and constrained budgets, the initial approach was to identify existing spaceflight-certified exercise countermeasure equipment, when available, and modify it for use on the ISS. Program management encouraged the use of commercial-off-the-shelf (COTS) hardware, or hardware previously developed (heritage hardware) for the Space Shuttle Program. However, in many cases the resultant hardware did not meet the additional requirements necessary to support crew health maintenance during long-duration missions (3 to 12 mo) and anticipated future utilization activities in support of biomedical research. Hardware development was further complicated by performance requirements that were not fully defined at the outset and tended to evolve over the course of design and fabrication. Modifications, ranging from simple to extensive, were necessary to meet these evolving requirements in each case where heritage hardware was proposed. Heritage hardware was anticipated to be inherently reliable without the need for extensive ground testing, due to its prior positive history during operational spaceflight utilization. As a result, developmental budgets were typically insufficient and schedules were too constrained to permit long-term evaluation of dedicated ground-test units ("fleet leader" type testing) to identify reliability issues when applied to long-duration use. In most cases, the exercise unit with the most operational history was the unit installed on the ISS.

  9. Development of the ISS EMU Dashboard Software

    NASA Technical Reports Server (NTRS)

    Bernard, Craig; Hill, Terry R.

    2011-01-01

    The EMU (Extra-Vehicular Mobility Unit) Dashboard was developed at NASA's Johnson Space Center to aid in real-time mission support for the ISS (International Space Station) and Shuttle EMU space suit by time-synchronizing down-linked video, space suit data, and audio from the mission control audio loops. Once the input streams are synchronized and recorded, the data can be replayed almost instantly; this capability has proven invaluable in understanding in-flight hardware anomalies and in playing back information conveyed by the crew to mission control and back-room support. This paper walks through the tool's development, from an engineer's idea brought to life by an intern to real-time mission support, and describes how the tool is evolving today and the challenges it faces in supporting EVAs (Extra-Vehicular Activities) and human exploration in the 21st century.

  10. Towards Evolving Electronic Circuits for Autonomous Space Applications

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Haith, Gary L.; Colombano, Silvano P.; Stassinopoulos, Dimitris

    2000-01-01

    The relatively new field of Evolvable Hardware studies how simulated evolution can reconfigure, adapt, and design hardware structures in an automated manner. Space applications, especially those requiring autonomy, are potential beneficiaries of evolvable hardware. For example, robotic drilling from a mobile platform requires high-bandwidth controller circuits that are difficult to design. In this paper, we present automated design techniques based on evolutionary search that could potentially be used in such applications. First, we present a method of automatically generating analog circuit designs using evolutionary search and a circuit construction language. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. Using a parallel genetic algorithm, we present experimental results for five design tasks. Second, we investigate the use of coevolution in automated circuit design. We examine fitness evaluation by comparing the effectiveness of four fitness schedules. The results indicate that solution quality is highest with static and co-evolving fitness schedules as compared to the other two dynamic schedules. We discuss these results and offer two possible explanations for the observed behavior: retention of useful information, and alignment of problem difficulty with circuit proficiency.
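
    The notion of a fitness schedule can be made concrete as a function that chooses which test cases score the population each generation. The sketch below uses our own naming and structure, assumed rather than taken from the paper: it contrasts a static schedule with a coevolving one that keeps the cases the current candidates handle worst.

```python
# Hedged sketch of the "fitness schedule" idea: a schedule picks the test
# cases used for evaluation each generation.
import random

def static_schedule(all_cases):
    """Every generation scores candidates on the same fixed case set."""
    def select(generation, scores_by_case):
        return all_cases
    return select

def coevolving_schedule(all_cases, k=5):
    """Keep the k cases the population scores worst on, so the test set
    adapts to (coevolves with) the candidates."""
    cases = random.sample(all_cases, k)
    def select(generation, scores_by_case):
        nonlocal cases
        if scores_by_case:
            cases = sorted(all_cases,
                           key=lambda c: scores_by_case.get(c, 0.0))[:k]
        return cases
    return select

sched = coevolving_schedule(list(range(20)))
print(sched(0, {}))                               # initial random cases
print(sched(1, {c: 0.1 * c for c in range(20)}))  # hardest (lowest-scoring) cases
```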

  11. From Paper to Production: An Update on NASA's Upper Stage Engine for Exploration

    NASA Technical Reports Server (NTRS)

    Kynard, Mike

    2010-01-01

    In 2006, NASA selected an evolved variant of the proven Saturn/Apollo J-2 upper stage engine to power the Ares I crew launch vehicle upper stage and the Ares V cargo launch vehicle Earth departure stage (EDS) for the Constellation Program. Any design changes needed by the new engine would be based where possible on proven hardware from the Space Shuttle, commercial launchers, and other programs. In addition to the thrust and efficiency requirements needed for the Constellation reference missions, it would be an order of magnitude safer than past engines. It required the J-2X government/industry team to develop the highest performance engine of its type in history and develop it for use in two vehicles for two different missions. In the attempt to achieve these goals in the past five years, the Upper Stage Engine team has made significant progress, successfully passing System Requirements Review (SRR), System Design Review (SDR), Preliminary Design Review (PDR), and Critical Design Review (CDR). As of spring 2010, more than 100,000 experimental and development engine parts have been completed or are in various stages of manufacture. Approximately 1,300 of more than 1,600 engine drawings have been released for manufacturing. This progress has been due to a combination of factors: the heritage hardware starting point, advanced computer analysis, and early heritage and development component testing to understand performance, validate computer modeling, and inform design trades. This work will increase the odds of success as engine team prepares for powerpack and development engine hot fire testing in calendar 2011. This paper will provide an overview of the engine development program and progress to date.

  12. EHW Approach to Temperature Compensation of Electronics

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian

    2004-01-01

    Efforts are under way to apply the concept of evolvable hardware (EHW) to compensate for variations, with temperature, in the operational characteristics of electronic circuits. To maintain the required functionality of a given circuit at a temperature above or below the nominal operating temperature for which the circuit was originally designed, a new circuit would be evolved; moreover, to obtain the required functionality over a very wide temperature range, a number of circuits would be evolved, each of which would satisfy the performance requirements over a small part of the total temperature range. The basic concepts and some specific implementations of EHW were described in a number of previous NASA Tech Briefs articles, namely, "Reconfigurable Arrays of Transistors for Evolvable Hardware" (NPO-20078), Vol. 25, No. 2 (February 2001), page 36; "Evolutionary Automated Synthesis of Electronic Circuits" (NPO-20535), Vol. 26, No. 7 (July 2002), page 37; "Designing Reconfigurable Antennas Through Hardware Evolution" (NPO-20666), Vol. 26, No. 7 (July 2002), page 38; "Morphing in Evolutionary Synthesis of Electronic Circuits" (NPO-20837), Vol. 26, No. 8 (August 2002), page 31; "Mixtrinsic Evolutionary Synthesis of Electronic Circuits" (NPO-20773), Vol. 26, No. 8 (August 2002), page 32; and "Synthesis of Fuzzy-Logic Circuits in Evolvable Hardware" (NPO-21095), Vol. 26, No. 11 (November 2002), page 38. To recapitulate from the cited prior articles: EHW is characterized as evolutionary in a quasi-genetic sense. The essence of EHW is to construct and test a sequence of populations of circuits that function as incrementally better solutions of a given design problem through the selective, repetitive connection and/or disconnection of capacitors, transistors, amplifiers, inverters, and/or other circuit building blocks. The connection and disconnection can be effected by use of field-programmable transistor arrays (FPTAs). The evolution is guided by a search-and-optimization algorithm (in particular, a genetic algorithm) that operates in the space of possible circuits to find a circuit that exhibits an acceptably close approximation of the desired functionality. The evolved circuits can be tested by mathematical modeling (that is, computational simulation) only, tested in real hardware, or tested in combinations of computational simulation and real hardware.
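
    The per-sub-range scheme lends itself to a simple runtime selector: one evolved configuration per temperature band, chosen from a temperature-sensor reading. The sketch below is our own illustration, with invented band boundaries and bitstrings.

```python
# Minimal sketch of the compensation scheme described above (illustrative):
# each evolved FPTA configuration covers one temperature band.
EVOLVED_CONFIGS = {          # band (deg C) -> FPTA switch bitstring (made up)
    (-60, -20): "1011001110",
    (-20,  40): "1100101011",
    ( 40, 120): "0111010010",
}

def select_config(temp_c):
    """Pick the configuration evolved for the band containing temp_c."""
    for (lo, hi), cfg in EVOLVED_CONFIGS.items():
        if lo <= temp_c < hi:
            return cfg
    raise ValueError(f"no evolved configuration covers {temp_c} deg C")

print(select_config(25.0))   # -> configuration evolved for the -20..40 band
```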

  13. Application Of Interferometry To Optical Components And Systems Evaluation

    NASA Astrophysics Data System (ADS)

    Houston, Joseph B., Jr.

    1982-05-01

    Interferometry provides opticians and lens designers with the ability to evaluate optical components and systems quantitatively. A variety of interferometers and interferometric test procedures have evolved over the past several decades. This evolution has stimulated an ever-increasing amount of interest in using a new generation of instrumentation and computer software for solving cost and schedule problems both in the shop and at field test sites. Optical engineers and their customers continue to gain confidence in their abilities to perform several operations such as assuring component quality, analyzing and optimizing lens assemblies, and accurately predicting end-item performance. In this paper, a set of typical test situations is addressed and some standard instrumentation is described, as a means of illustrating the special advantages of interferometric testing. Emphasis will be placed on the proper application of currently available hardware and some of the latest proven techniques.

  14. Hardware Evolution of Control Electronics

    NASA Technical Reports Server (NTRS)

    Gwaltney, David; Steincamp, Jim; Corder, Eric; King, Ken; Ferguson, M. I.; Dutton, Ken

    2003-01-01

    The evolution of closed-loop motor speed controllers implemented on the JPL FPTA2 is presented. The response of evolved controllers to sinusoidal commands, controller reconfiguration for fault tolerance, and hardware evolution are described.

  15. Speed challenge: a case for hardware implementation in soft-computing

    NASA Technical Reports Server (NTRS)

    Daud, T.; Stoica, A.; Duong, T.; Keymeulen, D.; Zebulum, R.; Thomas, T.; Thakoor, A.

    2000-01-01

    For over a decade, JPL has been actively involved in soft computing research on theory, architecture, applications, and electronics hardware. The driving force in all our research activities, in addition to the potential enabling technology promise, has been the creation of a niche that imparts an orders-of-magnitude speed advantage through implementation in parallel-processing hardware, with algorithms made especially suitable for hardware implementation. We review our work on neural networks, fuzzy logic, and evolvable hardware, with selected application examples requiring real-time response capabilities.

  16. Standardization and program effect analysis (Study 2.4). Volume 2: Equipment commonality analysis. [cost savings of using flight-proven components in designing spacecraft]

    NASA Technical Reports Server (NTRS)

    Shiokari, T.

    1975-01-01

    The feasibility and cost savings of using flight-proven components in designing spacecraft were investigated. The components analyzed were (1) large space telescope, (2) stratospheric aerosol and gas equipment, (3) mapping mission, (4) solar maximum mission, and (5) Tiros-N. It is concluded that flight-proven hardware can be used with not-too-extensive modification, and significant savings can be realized. The cost savings for each component are presented.

  17. EHWPACK: An evolvable hardware environment using the SPICE simulator and the Field Programmable Transistor Array

    NASA Technical Reports Server (NTRS)

    Keymeulen, D.; Klimeck, G.; Zebulum, R.; Stoica, A.; Jin, Y.; Lazaro, C.

    2000-01-01

    This paper describes the EHW development system, a tool that performs the evolutionary synthesis of electronic circuits, using the SPICE simulator and the Field Programmable Transistor Array hardware (FPTA) developed at JPL.

  18. Using Innovative Technologies for Manufacturing and Evaluating Rocket Engine Hardware

    NASA Technical Reports Server (NTRS)

    Betts, Erin M.; Hardin, Andy

    2011-01-01

    Many of the manufacturing and evaluation techniques that are currently used for rocket engine component production are traditional methods that have been proven through years of experience and historical precedence. As we enter into a new space age where new launch vehicles are being designed and propulsion systems are being improved upon, it is sometimes necessary to adopt new and innovative techniques for manufacturing and evaluating hardware. With a heavy emphasis on cost reduction and improvements in manufacturing time, manufacturing techniques such as Direct Metal Laser Sintering (DMLS) and white light scanning are being adopted and evaluated for their use on J-2X, with hopes of employing both technologies on a wide variety of future projects. DMLS has the potential to significantly reduce the processing time and cost of engine hardware, while achieving desirable material properties by using a layered powdered metal manufacturing process in order to produce complex part geometries. The white light technique is a non-invasive method that can be used to inspect for geometric feature alignment. Both the DMLS manufacturing method and the white light scanning technique have proven to be viable options for manufacturing and evaluating rocket engine hardware, and further development and use of these techniques is recommended.

  19. ACES: An Enabling Technology for Next Generation Space Transportation

    NASA Astrophysics Data System (ADS)

    Crocker, Andrew M.; Wuerl, Adam M.; Andrews, Jason E.; Andrews, Dana G.

    2004-02-01

    Andrews Space has developed the "Alchemist" Air Collection and Enrichment System (ACES), a dual-mode propulsion system that enables safe, economical launch systems that take off and land horizontally. Alchemist generates liquid oxygen through separation of atmospheric air using the refrigeration capacity of liquid hydrogen. The key benefit of Alchemist is that it minimizes vehicle takeoff weight. All internal and NASA-funded activities have shown that ACES, previously proposed for hypersonic combined-cycle RLVs, is a higher-payoff, lower-risk technology if LOX generation is performed while the vehicle cruises subsonically. Andrews Space has developed the Alchemist concept from a small system study to viable Next Generation launch system technology, conducting not only feasibility studies but also related hardware tests, and it has planned a detailed risk reduction program which employs an experienced, proven contractor team. Andrews also has participated in preliminary studies of an evolvable Next Generation vehicle architecture, enabled by Alchemist ACES, which could meet civil, military, and commercial space requirements within two decades.

  20. Hardware Evolution of Closed-Loop Controller Designs

    NASA Technical Reports Server (NTRS)

    Gwaltney, David; Ferguson, Ian

    2002-01-01

    Poster presentation will outline ongoing efforts at NASA MSFC to employ various Evolvable Hardware experimental platforms in the evolution of digital and analog circuitry for application to automatic control. Included will be information concerning the application of commercially available hardware and software, along with the use of the JPL-developed FPTA2 integrated circuit and supporting JPL-developed software. Results to date will be presented.

  1. Waste Collector System Technology Comparisons for Constellation Applications

    NASA Technical Reports Server (NTRS)

    Broyan, James Lee, Jr.

    2006-01-01

    The Waste Collection Systems (WCS) for space vehicles have utilized a variety of hardware for collecting human metabolic wastes. It has typically required multiple missions to resolve crew usability and hardware performance issues that are difficult to duplicate on the ground. New space vehicles should leverage past WCS designs. Past WCS hardware designs are substantially different and unique for each vehicle. However, each WCS can be analyzed and compared as a subset of technologies which encompass fecal collection, urine collection, air systems, and pretreatment systems. Technology components from the WCS of various vehicles can then be combined to reduce hardware mass and volume while maximizing use of previous technology and proven human-equipment interfaces. Analyses of past US and Russian WCS are compared and extrapolated to Constellation missions.

  2. Development of a speech autocuer

    NASA Astrophysics Data System (ADS)

    Bedles, R. L.; Kizakvich, P. N.; Lawson, D. T.; McCartney, M. L.

    1980-12-01

    A wearable, visually based prosthesis for the deaf based upon the proven method for removing lipreading ambiguity known as cued speech was fabricated and tested. Both software and hardware developments are described, including a microcomputer, display, and speech preprocessor.

  3. Development of a speech autocuer

    NASA Technical Reports Server (NTRS)

    Bedles, R. L.; Kizakvich, P. N.; Lawson, D. T.; Mccartney, M. L.

    1980-01-01

    A wearable, visually based prosthesis for the deaf based upon the proven method for removing lipreading ambiguity known as cued speech was fabricated and tested. Both software and hardware developments are described, including a microcomputer, display, and speech preprocessor.

  4. Preliminary Structural Design Using Topology Optimization with a Comparison of Results from Gradient and Genetic Algorithm Methods

    NASA Technical Reports Server (NTRS)

    Burt, Adam O.; Tinker, Michael L.

    2014-01-01

    In this paper, genetic-algorithm-based and gradient-based topology optimization are presented in application to a real hardware design problem. Preliminary design of a planetary lander mockup structure is accomplished using these methods, which provide major weight savings by addressing structural efficiency during the design cycle. This paper presents two alternative formulations of the topology optimization problem. The first is the widely used gradient-based implementation using commercially available algorithms. The second is formulated using genetic algorithms and internally developed capabilities. These two approaches are applied to a practical design problem for hardware that has been built, tested, and proven to be functional. Both formulations converged on similar solutions and were therefore shown to be equally valid implementations of the process. This paper discusses both of these formulations at a high level.
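
    The two formulations can be contrasted on a toy problem. The sketch below is our own construction, not the paper's model: a SIMP-style compliance proxy over ten elements with a volume limit, solved once with SciPy's SLSQP (the gradient-based formulation) and once with a simple genetic algorithm over binary element presence.

```python
# Toy gradient-vs-GA topology optimization: distribute material over 10
# elements to minimise a compliance proxy under a volume constraint.
import random
import numpy as np
from scipy.optimize import minimize

LOADS = np.linspace(1.0, 2.0, 10)      # per-element load weighting (assumed)
VOL_FRAC, PENAL, EPS = 0.5, 3.0, 1e-3  # volume limit, SIMP penalty, void stiffness

def compliance(x):
    """Toy compliance proxy: soft elements carrying load cost heavily."""
    return float(np.sum(LOADS / (EPS + np.asarray(x, dtype=float) ** PENAL)))

# Gradient-based formulation: continuous densities in [0, 1] under a
# linear volume constraint, handled directly by SLSQP.
res = minimize(compliance, x0=np.full(10, VOL_FRAC), method="SLSQP",
               bounds=[(0.0, 1.0)] * 10,
               constraints=[{"type": "ineq",
                             "fun": lambda x: VOL_FRAC * 10 - np.sum(x)}])

# Genetic-algorithm formulation: binary element presence with a volume penalty.
def ga_fitness(bits):
    over = max(0, sum(bits) - VOL_FRAC * 10)
    return compliance(bits) + 1e4 * over   # stiff penalty keeps designs feasible

pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(40)]
for _ in range(100):
    pop.sort(key=ga_fitness)
    elite = pop[:10]
    pop = elite + [[b ^ (random.random() < 0.1) for b in random.choice(elite)]
                   for _ in range(30)]

print("gradient:", compliance(res.x), "GA:", ga_fitness(min(pop, key=ga_fitness)))
```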

  5. CMS Analysis School Model

    NASA Astrophysics Data System (ADS)

    Malik, S.; Shipsey, I.; Cavanaugh, R.; Bloom, K.; Chan, Kai-Feng; D'Hondt, J.; Klima, B.; Narain, M.; Palla, F.; Rolandi, G.; Schörner-Sadenius, T.

    2014-06-01

    To impart hands-on training in physics analysis, the CMS experiment initiated the concept of the CMS Data Analysis School (CMSDAS). It was born over three years ago at the LPC (LHC Physics Centre) at Fermilab and is based on earlier workshops held at the LPC and the CLEO experiment. As CMS transitioned from construction to data-taking mode, the nature of the earlier training also evolved to include more analysis tools, software tutorials, and physics analysis. This effort, epitomized by CMSDAS, has proven key in enabling new and young physicists to jump-start their work and contribute to the physics goals of CMS by looking for new physics with the collision data. With over 400 physicists trained in six CMSDAS events around the globe, CMS is trying to engage the collaboration in its discovery potential and maximize physics output. As a larger goal, CMS is striving to nurture and increase engagement of its myriad talents in the development of physics, service, upgrades, the education of those new to CMS, and the career development of younger members. An extension of the concept to dedicated software and hardware schools is also planned, keeping in mind the ensuing upgrade phase.

  6. Transformation of OODT CAS to Perform Larger Tasks

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris; Freeborn, Dana; Crichton, Daniel; Hughes, John; Ramirez, Paul; Hardman, Sean; Woollard, David; Kelly, Sean

    2008-01-01

    A computer program denoted OODT CAS has been transformed to enable performance of larger tasks that involve greatly increased data volumes and increasingly intensive processing of data on heterogeneous, geographically dispersed computers. Prior to the transformation, OODT CAS (also denoted, simply, 'CAS') [wherein 'OODT' signifies 'Object-Oriented Data Technology' and 'CAS' signifies 'Catalog and Archive Service'] was a proven software component used to manage scientific data from spaceflight missions. In the transformation, CAS was split into two separate components representing its canonical capabilities: file management and workflow management. In addition, CAS was augmented with a resource-management component. This third component enables CAS to manage heterogeneous computing by use of diverse resources, including high-performance clusters of computers, commodity computing hardware, and grid computing infrastructures. CAS is now more easily maintainable, evolvable, and reusable. These components can be used separately or, taking advantage of synergies, together. Other elements of the transformation included the addition of a separate Web presentation layer that supports distribution of data products via Really Simple Syndication (RSS) feeds, and provision for full Resource Description Framework (RDF) exports of metadata.
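
    The component split can be pictured as three small, separable services. The classes below are illustrative only, not the actual OODT CAS API: the workflow manager asks the resource manager where each task should run and hands the outputs to the file manager for cataloguing.

```python
# Illustrative sketch of the post-split architecture (not OODT CAS code):
# three components usable separately or composed.
class FileManager:
    def ingest(self, product, metadata):
        print(f"catalogued {product} with {len(metadata)} metadata fields")

class ResourceManager:
    def __init__(self, nodes):
        self.nodes = nodes
    def assign(self, task):
        return min(self.nodes, key=lambda n: n["load"])   # least-loaded node

class WorkflowManager:
    def __init__(self, files, resources):
        self.files, self.resources = files, resources
    def run(self, tasks):
        for task in tasks:
            node = self.resources.assign(task)
            print(f"running {task} on {node['name']}")
            self.files.ingest(f"{task}.out", {"task": task, "node": node["name"]})

wm = WorkflowManager(FileManager(),
                     ResourceManager([{"name": "cluster-a", "load": 0.2},
                                      {"name": "grid-b", "load": 0.7}]))
wm.run(["calibrate", "mosaic"])
```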

  7. DAME: planetary-prototype drilling automation.

    PubMed

    Glass, B; Cannon, H; Branson, M; Hanagud, S; Paulsen, G

    2008-06-01

    We describe results from the Drilling Automation for Mars Exploration (DAME) project, including those of the summer 2006 tests from an Arctic analog site. The drill hardware is a hardened, evolved version of the Advanced Deep Drill by Honeybee Robotics. DAME has developed diagnostic and executive software for hands-off surface operations of the evolved version of this drill. The DAME drill automation tested from 2004 through 2006 included adaptively controlled drilling operations and the downhole diagnosis of drilling faults. It also included dynamic recovery capabilities when unexpected failures or drilling conditions were discovered. DAME has developed and tested drill automation software and hardware under stressful operating conditions during its Arctic field testing campaigns at a Mars analog site.

  8. DAME: Planetary-Prototype Drilling Automation

    NASA Astrophysics Data System (ADS)

    Glass, B.; Cannon, H.; Branson, M.; Hanagud, S.; Paulsen, G.

    2008-06-01

    We describe results from the Drilling Automation for Mars Exploration (DAME) project, including those of the summer 2006 tests from an Arctic analog site. The drill hardware is a hardened, evolved version of the Advanced Deep Drill by Honeybee Robotics. DAME has developed diagnostic and executive software for hands-off surface operations of the evolved version of this drill. The DAME drill automation tested from 2004 through 2006 included adaptively controlled drilling operations and the downhole diagnosis of drilling faults. It also included dynamic recovery capabilities when unexpected failures or drilling conditions were discovered. DAME has developed and tested drill automation software and hardware under stressful operating conditions during its Arctic field testing campaigns at a Mars analog site.

  9. The Extravehicular Mobility Unit (EMU): Proven hardware for Satellite Servicing

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A general technical description of the extravehicular mobility unit (EMU) is given. The description provides a basis for understanding EMU mobility capabilities and the environments a payload is exposed to in the vicinity of an EMU.

  10. Evolutionary online behaviour learning and adaptation in real robots.

    PubMed

    Silva, Fernando; Correia, Luís; Christensen, Anders Lyhne

    2017-07-01

    Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component to the performance of the underlying evolutionary algorithm.
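
    In its simplest form, the online-evolution loop described here reduces to a (1+1) evolution strategy over controller parameters, with each evaluation performed on the robot itself. The sketch below is our own condensation, not the authors' algorithm; `episode_fitness` is a stub standing in for a real-hardware evaluation period.

```python
# Condensed (1+1) evolution-strategy sketch of online behaviour evolution.
import numpy as np

rng = np.random.default_rng(0)

def episode_fitness(weights):
    """Stand-in for one on-robot evaluation period: run the controller
    encoded by `weights` and return task performance (higher is better)."""
    target = np.linspace(-1.0, 1.0, weights.size)   # toy objective
    return -float(np.sum((weights - target) ** 2))

weights = rng.normal(0.0, 0.1, size=20)             # small controller genome
best = episode_fitness(weights)
for _ in range(500):                                # online: episode after episode
    candidate = weights + rng.normal(0.0, 0.05, size=weights.size)
    score = episode_fitness(candidate)
    if score >= best:                               # keep improvements and ties
        weights, best = candidate, score

print(best)   # approaches 0 as the genome matches the toy objective
```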

  11. Stretched Lens Array (SLA) Photovoltaic Concentrator Hardware Development and Testing

    NASA Technical Reports Server (NTRS)

    Piszczor, Michael; O'Neill, Mark J.; Eskenazi, Michael

    2003-01-01

    Over the past two years, the Stretched Lens Array (SLA) photovoltaic concentrator has evolved, under a NASA contract, from a concept with small component demonstrators to operational array hardware that is ready for space validation testing. A fully functional four-panel SLA solar array has been designed, built, and tested. This paper will summarize the focus of the hardware development effort, discuss the results of recent testing conducted under this program, and present the expected performance of a full-size 7 kW array designed to meet the requirements of future space missions.

  12. Mechanically verified hardware implementing an 8-bit parallel IO Byzantine agreement processor

    NASA Technical Reports Server (NTRS)

    Moore, J. Strother

    1992-01-01

    Consider a network of four processors that use the Oral Messages (Byzantine Generals) Algorithm of Pease, Shostak, and Lamport to achieve agreement in the presence of faults. Bevier and Young have published a functional description of a single processor that, when interconnected appropriately with three identical others, implements this network under the assumption that the four processors step in synchrony. By formalizing the original Pease et al. work, Bevier and Young mechanically proved that such a network achieves fault tolerance. We develop, formalize, and discuss a hardware design that has been mechanically proven to implement their processor. In particular, we formally define mapping functions from the abstract state space of the Bevier-Young processor to a concrete state space of a hardware module and state a theorem that expresses the claim that the hardware correctly implements the processor. We briefly discuss the Brock-Hunt Formal Hardware Description Language, which permits designs both to be proved correct with the Boyer-Moore theorem prover and to be expressed in a commercially supported hardware description language for additional electrical analysis and layout. We briefly describe our implementation.
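
    For context, the Oral Messages algorithm OM(m) is short enough to simulate directly. The sketch below is our own simulation (not the verified hardware or its formal model) of OM(1) with n = 4 and one traitor, the configuration the processor implements: each lieutenant relays the value it received and decides by majority, so the loyal lieutenants agree on the commander's value.

```python
# Executable sketch of the Oral Messages algorithm OM(1) for 4 processors.
from collections import Counter

TRAITOR = 2                      # processor 2 lies; the others are loyal

def send(src, dst, value):
    """A traitor may send an arbitrary value; loyal nodes relay honestly."""
    return (value + src + dst) % 2 if src == TRAITOR else value

def om(m, commander, value, lieutenants):
    """Return {lieutenant: decided value} for OM(m)."""
    received = {lt: send(commander, lt, value) for lt in lieutenants}
    if m == 0:
        return received
    decided = {}
    for lt in lieutenants:
        others = [o for o in lieutenants if o != lt]
        # Each other lieutenant o relays its received value via OM(m-1) ...
        relayed = [om(m - 1, o, received[o],
                      [x for x in lieutenants if x != o])[lt]
                   for o in others]
        # ... and lt decides by majority over its own value and the relays.
        decided[lt] = Counter([received[lt]] + relayed).most_common(1)[0][0]
    return decided

print(om(1, 0, 1, [1, 2, 3]))    # loyal lieutenants 1 and 3 both decide 1
```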

  13. Power Hardware-in-the-Loop (PHIL) Testing Facility for Distributed Energy Storage (Poster)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neubauer, J.; Lundstrom, B.; Simpson, M.

    2014-06-01

    The growing deployment of distributed, variable generation and evolving end-user load profiles presents a unique set of challenges to grid operators responsible for providing reliable and high-quality electrical service. Mass deployment of distributed energy storage systems (DESS) has the potential to solve many of the associated integration issues while offering reliability and energy security benefits other solutions cannot. However, tools to develop, optimize, and validate DESS control strategies and hardware are in short supply. To fill this gap, NREL has constructed a power hardware-in-the-loop (PHIL) test facility that connects DESS, grid simulator, and load bank hardware to a distribution feeder simulation.

  14. A comparison of hardware description languages. [describing digital systems structure and behavior to a computer

    NASA Technical Reports Server (NTRS)

    Shiva, S. G.

    1978-01-01

    Several high-level languages that have evolved over the past few years for describing and simulating the structure and behavior of digital systems on digital computers are assessed. The characteristics of the four prominent languages (CDL, DDL, AHPL, ISP) are summarized. A criterion for selecting a suitable hardware description language for use in an automatic integrated circuit design environment is provided.

  15. Department of Defense Computer Technology. A Report to Congress.

    DTIC Science & Technology

    1983-08-01

    system and evolves his employment tactics. (8) Lack of adequate competition. Conclusions: Based on both software and hardware arguments, it is ... environments, Services should be encouraged to use either common-commercial, ruggedized-commercial, or "off-the-shelf" militarized computers based upon ... the performance requirements of the specific application. Full consideration should be given to Ada-based systems where there is no strict hardware

  16. Rapid evolution of analog circuits configured on a field programmable transistor array

    NASA Technical Reports Server (NTRS)

    Stoica, A.; Ferguson, M. I.; Zebulum, R. S.; Keymeulen, D.; Duong, V.; Daud, T.

    2002-01-01

    The purpose of this paper is to illustrate evolution of analog circuits on a stand-alone board-level evolvable system (SABLES). SABLES is part of an effort to achieve integrated evolvable systems. SABLES provides autonomous, fast (tens to hundreds of seconds), on-chip circuit evolution involving about 100,000 circuit evaluations. Its main components are a JPL Field Programmable Transistor Array (FPTA) chip used as transistor-level reconfigurable hardware, and a TI DSP that implements the evolutionary algorithm controlling the FPTA reconfiguration. The paper details an example of evolution on SABLES and points out certain transient and memory effects that affect the stability of solutions obtained when reusing the same piece of hardware for rapid testing of individuals during evolution.

  17. Innovations in dynamic test restraint systems

    NASA Technical Reports Server (NTRS)

    Fuld, Christopher J.

    1990-01-01

    Recent launch system development programs have led to a new generation of large-scale dynamic tests. The variety of test scenarios share one common requirement: restrain and capture massive high-velocity flight hardware with no structural damage. The Space Systems Lab of McDonnell Douglas developed a remarkably simple and cost-effective approach to such testing using ripstitch energy absorbers adapted from the sport of technical rock climbing. The proven reliability of the capture system concept has led to a wide variety of applications in test system design and in aerospace hardware design.

  18. Dynamically allocating sets of fine-grained processors to running computations

    NASA Technical Reports Server (NTRS)

    Middleton, David

    1988-01-01

    Researchers explore an approach to using general-purpose parallel computers which involves mapping hardware resources onto computations instead of mapping computations onto hardware. Problems such as processor allocation, task scheduling, and load balancing, which have traditionally proven to be challenging, change significantly under this approach and may become amenable to new attacks. Researchers describe the implementation of this approach used by the FFP Machine, whose computation and communication resources are repeatedly partitioned into disjoint groups that match the needs of available tasks from moment to moment. Several consequences of this system are examined.

  19. Flight-Proven Nano-Satellite Architecture for Hands-On Academic Training at the US Air Force Academy

    NASA Astrophysics Data System (ADS)

    Underwood, Craig I.; Sellers, Lt. Col. Jerry; Sweeting, Sir Martin

    2002-01-01

    This paper describes the use of "commercial-off-the-shelf" open-architecture satellite sub-systems, based on the flight-proven "SNAP" nanosatellite platform, to provide "hands-on" education and training at the United States Air Force Academy. The UK's first nanosatellite, SNAP-1, designed and built by Surrey Satellite Technology Ltd. (SSTL) and Surrey Space Centre staff in less than a year, was launched in June 2000. The 6.5 kg spacecraft carries advanced UK-developed GPS navigation, computing, propulsion, and attitude control technologies, which have been used to demonstrate orbital manoeuvring and full three-axis controlled body stabilisation. SNAP-1's primary payload is a machine vision system which has been used to image the in-orbit deployment of another SSTL-built spacecraft: Tsinghua-1. The highly successful SNAP-1 mission has also demonstrated how the concept of using a standardised, modular nanosatellite bus can provide the core support units (power system, on-board data-handling and communications systems, and standardised payload interface) for a practical nanosatellite to be constructed and flown in a remarkably short time-frame. Surrey's undergraduate and post-graduate students have made a major input to the SNAP concept over the last six years in the context of project work within the Space Centre. Currently, students at the USAF Academy are benefiting from this technology in the context of designing their own nanosatellite, FalconSAT-2. For the FalconSAT-2 project, the approach has been to focus on building up infrastructure, including design and development tools that can serve as a firm foundation to allow the satellite design to evolve steadily over the course of several missions. Specific to this new approach has been a major effort to bound the problem faced by the students. To do this, the program has leveraged the research carried out at the Surrey Space Centre by "buying into" the SNAP architecture. Through this, the Academy program has achieved an "out of the box" solution for several critical subsystems, including power, communications, and, most important, data handling. Using one set of SNAP hardware, the FalconSAT Avionics Simulation Testbed (FAST) was established in Fall 2000. FAST provides a long-term facility for cadets to gain hands-on experience with spacecraft hardware and software, as well as overall program risk reduction through a facility for subsystem, software, and operational procedures development and testing. In addition, over the last two years, USAF cadets have been seconded to Surrey to help develop a MATLAB-based spacecraft simulator for SNAP, which itself is becoming a useful educational tool. While the use of the SNAP hardware has eased the spacecraft design problem in many respects, considerable effort still remains in the areas of payload design and development, structures, attitude control, thermal control, solar panels, testing, and operations, more than enough to challenge even the most ambitious undergraduate students. This paper reviews our experience, both in the UK and in the US, in using a flight-proven nanosatellite in an educational context.

  20. Design and Evolution of a Modular Tensegrity Robot Platform

    NASA Technical Reports Server (NTRS)

    Bruce, Jonathan; Caluwaerts, Ken; Iscen, Atil; Sabelhaus, Andrew P.; SunSpiral, Vytas

    2014-01-01

    NASA Ames Research Center is developing a compliant modular tensegrity robotic platform for planetary exploration. In this paper we present the design and evolution of the platform's main hardware component, an untethered, robust tensegrity strut, with rich sensor feedback and cable actuation. Each strut is a complete robot, and multiple struts can be combined together to form a wide range of complex tensegrity robots. Our current goal for the tensegrity robotic platform is the development of SUPERball, a 6-strut icosahedron underactuated tensegrity robot aimed at dynamic locomotion for planetary exploration rovers and landers, but the aim is for the modular strut to enable a wide range of tensegrity morphologies. SUPERball is a second generation prototype, evolving from the tensegrity robot ReCTeR, which is also a modular, lightweight, highly compliant 6-strut tensegrity robot that was used to validate our physics based NASA Tensegrity Robot Toolkit (NTRT) simulator. Many hardware design parameters of the SUPERball were driven by locomotion results obtained in our validated simulator. These evolutionary explorations helped constrain motor torque and speed parameters, along with strut and string stress. As construction of the hardware has finalized, we have also used the same evolutionary framework to evolve controllers that respect the built hardware parameters.

  1. Evolutionary online behaviour learning and adaptation in real robots

    PubMed Central

    Correia, Luís; Christensen, Anders Lyhne

    2017-01-01

    Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component to the performance of the underlying evolutionary algorithm. PMID:28791130

  2. Using Innovative Technologies for Manufacturing Rocket Engine Hardware

    NASA Technical Reports Server (NTRS)

    Betts, E. M.; Eddleman, D. E.; Reynolds, D. C.; Hardin, N. A.

    2011-01-01

    Many of the manufacturing techniques that are currently used for rocket engine component production are traditional methods that have been proven through years of experience and historical precedence. As the United States enters into the next space age where new launch vehicles are being designed and propulsion systems are being improved upon, it is sometimes necessary to adopt innovative techniques for manufacturing hardware. With a heavy emphasis on cost reduction and improvements in manufacturing time, rapid manufacturing techniques such as Direct Metal Laser Sintering (DMLS) are being adopted and evaluated for their use on NASA's Space Launch System (SLS) upper stage engine, J-2X, with hopes of employing this technology on a wide variety of future projects. DMLS has the potential to significantly reduce the processing time and cost of engine hardware, while achieving desirable material properties by using a layered powder metal manufacturing process in order to produce complex part geometries. Marshall Space Flight Center (MSFC) has recently hot-fire tested a J-2X gas generator (GG) discharge duct that was manufactured using DMLS. The duct was inspected and proof tested prior to the hot-fire test. Using a workhorse gas generator (WHGG) test fixture at MSFC's East Test Area, the duct was subjected to extreme J-2X hot gas environments during 7 tests for a total of 537 seconds of hot-fire time. The duct underwent extensive post-test evaluation and showed no signs of degradation. DMLS manufacturing has proven to be a viable option for manufacturing rocket engine hardware, and further development and use of this manufacturing method is recommended.

  3. Using Innovative Techniques for Manufacturing Rocket Engine Hardware

    NASA Technical Reports Server (NTRS)

    Betts, Erin M.; Reynolds, David C.; Eddleman, David E.; Hardin, Andy

    2011-01-01

    Many of the manufacturing techniques that are currently used for rocket engine component production are traditional methods that have been proven through years of experience and historical precedence. As we enter into a new space age where new launch vehicles are being designed and propulsion systems are being improved upon, it is sometimes necessary to adopt new and innovative techniques for manufacturing hardware. With a heavy emphasis on cost reduction and improvements in manufacturing time, manufacturing techniques such as Direct Metal Laser Sintering (DMLS) are being adopted and evaluated for their use on J-2X, with hopes of employing this technology on a wide variety of future projects. DMLS has the potential to significantly reduce the processing time and cost of engine hardware, while achieving desirable material properties by using a layered powder metal manufacturing process in order to produce complex part geometries. Marshall Space Flight Center (MSFC) has recently hot-fire tested a J-2X gas generator discharge duct that was manufactured using DMLS. The duct was inspected and proof tested prior to the hot-fire test. Using the Workhorse Gas Generator (WHGG) test setup at MSFC's East Test Area test stand 116, the duct was subjected to extreme J-2X gas generator environments and endured a total of 538 seconds of hot-fire time. The duct survived the testing and was inspected after the test. DMLS manufacturing has proven to be a viable option for manufacturing rocket engine hardware, and further development and use of this manufacturing method is recommended.

  4. Energy Efficient Engine combustor test hardware detailed design report

    NASA Technical Reports Server (NTRS)

    Burrus, D. L.; Chahrour, C. A.; Foltz, H. L.; Sabla, P. E.; Seto, S. P.; Taylor, J. R.

    1984-01-01

    The Energy Efficient Engine (E3) Combustor Development effort was conducted as part of the overall NASA/GE E3 Program. This effort included the selection of an advanced double-annular combustion system design. The primary intent was to evolve a design which meets the stringent emissions and life goals of the E3 as well as all of the usual performance requirements of combustion systems for modern turbofan engines. Numerous detailed design studies were conducted to define the features of the combustion system design. Development test hardware was fabricated, and an extensive testing effort was undertaken to evaluate the combustion system subcomponents in order to verify and refine the design. Technology derived from this development effort will be incorporated into the engine combustion system hardware design. This advanced engine combustion system will then be evaluated in component testing to verify the design intent. What is evolving from this development effort is an advanced combustion system capable of satisfying all of the combustion system design objectives and requirements of the E3. Fuel nozzle, diffuser, starting, and emissions design studies are discussed.

  5. Evolvable Smartphone-Based Platforms for Point-of-Care In-Vitro Diagnostics Applications.

    PubMed

    Patou, François; AlZahra'a Alatraktchi, Fatima; Kjægaard, Claus; Dimaki, Maria; Madsen, Jan; Svendsen, Winnie E

    2016-09-03

    The association of smart mobile devices and lab-on-chip technologies offers unprecedented opportunities for the emergence of direct-to-consumer in vitro medical diagnostics applications. Despite their clear transformative potential, obstacles remain to the large-scale disruption and long-lasting success of these systems in the consumer market. For instance, the increasing level of complexity of instrumented lab-on-chip devices, coupled to the sporadic nature of point-of-care testing, threatens the viability of a business model mainly relying on disposable/consumable lab-on-chips. We argued recently that system evolvability, defined as the design characteristic that facilitates more manageable transitions between system generations via the modification of an inherited design, can help remedy these limitations. In this paper, we discuss how platform-based design can constitute a formal entry point to the design and implementation of evolvable smart device/lab-on-chip systems. We present both a hardware/software design framework and the implementation details of a platform prototype enabling at this stage the interfacing of several lab-on-chip variants relying on current- or impedance-based biosensors. Our findings suggest that several change-enabling mechanisms implemented in the higher abstraction software layers of the system can promote evolvability, together with the design of change-absorbing hardware/software interfaces. Our platform architecture is based on a mobile software application programming interface coupled to a modular hardware accessory. It allows the specification of lab-on-chip operation and post-analytic functions at the mobile software layer. We demonstrate its potential by operating a simple lab-on-chip to carry out the detection of dopamine using various electroanalytical methods.
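
    The change-absorbing interface idea can be illustrated with a small driver abstraction: lab-on-chip variants plug in behind one API, so a new biosensor generation replaces a driver module rather than the application. The classes below are our own sketch, not the paper's code, with stubbed readings.

```python
# Illustrative sketch of a change-absorbing hardware/software interface:
# the app depends on one abstract driver, not on a sensor generation.
from abc import ABC, abstractmethod

class BiosensorDriver(ABC):
    @abstractmethod
    def acquire(self) -> float:
        """Return one raw reading from the lab-on-chip."""

class AmperometricDriver(BiosensorDriver):      # current-based variant
    def acquire(self) -> float:
        return 1.2e-6                           # stubbed current, amperes

class ImpedanceDriver(BiosensorDriver):         # impedance-based variant
    def acquire(self) -> float:
        return 5.4e3                            # stubbed impedance, ohms

def run_assay(driver: BiosensorDriver, samples: int = 3):
    readings = [driver.acquire() for _ in range(samples)]
    return sum(readings) / samples              # post-analytics live up here

print(run_assay(AmperometricDriver()), run_assay(ImpedanceDriver()))
```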

  6. Evolvable Smartphone-Based Platforms for Point-of-Care In-Vitro Diagnostics Applications

    PubMed Central

    Patou, François; AlZahra’a Alatraktchi, Fatima; Kjægaard, Claus; Dimaki, Maria; Madsen, Jan; Svendsen, Winnie E.

    2016-01-01

    The association of smart mobile devices and lab-on-chip technologies offers unprecedented opportunities for the emergence of direct-to-consumer in vitro medical diagnostics applications. Despite their clear transformative potential, obstacles remain to the large-scale disruption and long-lasting success of these systems in the consumer market. For instance, the increasing level of complexity of instrumented lab-on-chip devices, coupled to the sporadic nature of point-of-care testing, threatens the viability of a business model mainly relying on disposable/consumable lab-on-chips. We argued recently that system evolvability, defined as the design characteristic that facilitates more manageable transitions between system generations via the modification of an inherited design, can help remedy these limitations. In this paper, we discuss how platform-based design can constitute a formal entry point to the design and implementation of evolvable smart device/lab-on-chip systems. We present both a hardware/software design framework and the implementation details of a platform prototype enabling at this stage the interfacing of several lab-on-chip variants relying on current- or impedance-based biosensors. Our findings suggest that several change-enabling mechanisms implemented in the higher abstraction software layers of the system can promote evolvability, together with the design of change-absorbing hardware/software interfaces. Our platform architecture is based on a mobile software application programming interface coupled to a modular hardware accessory. It allows the specification of lab-on-chip operation and post-analytic functions at the mobile software layer. We demonstrate its potential by operating a simple lab-on-chip to carry out the detection of dopamine using various electroanalytical methods. PMID:27598208

  7. Fracture of fusion mass after hardware removal in patients with high sagittal imbalance.

    PubMed

    Sedney, Cara L; Daffner, Scott D; Stefanko, Jared J; Abdelfattah, Hesham; Emery, Sanford E; France, John C

    2016-04-01

    As spinal fusions become more common and more complex, so do the sequelae of these procedures, some of which remain poorly understood. The authors report on a series of patients who underwent removal of hardware after CT-proven solid fusion, confirmed by intraoperative findings. These patients later developed a spontaneous fracture of the fusion mass that was not associated with trauma. A series of such patients has not previously been described in the literature. An unfunded, retrospective review of the surgical logs of 3 fellowship-trained spine surgeons yielded 7 patients who suffered a fracture of a fusion mass after hardware removal. Adult patients from the West Virginia University Department of Orthopaedics who underwent hardware removal in the setting of adjacent-segment disease (ASD), and subsequently experienced fracture of the fusion mass through the uninstrumented segment, were studied. The medical records and radiological studies of these patients were examined for patient demographics and comorbidities, initial indication for surgery, total number of surgeries, timeline of fracture occurrence, risk factors for fracture, as well as sagittal imbalance. All 7 patients underwent hardware removal in conjunction with an extension of fusion for ASD. All had CT-proven solid fusion of their previously fused segments, which was confirmed intraoperatively. All patients had previously undergone multiple operations for a variety of indications, 4 patients were smokers, and 3 patients had osteoporosis. Spontaneous fracture of the fusion mass occurred in all patients and was not due to trauma. These fractures occurred 4 months to 4 years after hardware removal. All patients had significant sagittal imbalance of 13-15 cm. The fracture level was L-5 in 6 of the 7 patients, which was the first uninstrumented level caudal to the newly placed hardware in all 6 of these patients. Six patients underwent surgery due to this fracture. The authors present a case series of 7 patients who underwent surgery for ASD after a remote fusion. These patients later developed a fracture of the fusion mass after hardware removal from their previously successfully fused segment. All patients had a high sagittal imbalance and had previously undergone multiple spinal operations. The development of a spontaneous fracture of the fusion mass may be related to sagittal imbalance. Consideration should be given to reimplanting hardware for these patients, even across good fusions, to prevent spontaneous fracture of these areas if the sagittal imbalance is not corrected.

  8. CMS Analysis School Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malik, S.; Shipsey, I.; Cavanaugh, R.

    To impart hands-on training in physics analysis, the CMS experiment initiated the concept of the CMS Data Analysis School (CMSDAS). It was born over three years ago at the LPC (LHC Physics Centre) at Fermilab and is based on earlier workshops held at the LPC and at the CLEO experiment. As CMS transitioned from construction to data-taking mode, the nature of the training also evolved to include more analysis tools, software tutorials, and physics analysis. This effort, epitomized as CMSDAS, has proven to be key for new and young physicists to jump-start and contribute to the physics goals of CMS by looking for new physics with the collision data. With over 400 physicists trained in six CMSDAS events around the globe, CMS is trying to engage the collaboration in its discovery potential and maximize physics output. As a bigger goal, CMS is striving to nurture and increase engagement of its myriad talents in the development of physics, service, upgrades, education of those new to CMS, and the career development of younger members. An extension of the concept to dedicated software and hardware schools is also planned, keeping in mind the ensuing upgrade phase.

  9. Object-Oriented Dynamic Bayesian Network-Templates for Modelling Mechatronic Systems

    DTIC Science & Technology

    2002-05-04

    The object-oriented paradigm is a new but proven technology. For modelling mechanical systems, tools such as ADAMS [3] are widespread; the approach also covers hardware (sub-)systems and domains such as thermal flow or hydraulics (see Figure 1). On the software side, the object-oriented paradigm is by now well established.

  10. From Paper to Production to Test: An Update on NASA's J-2X Engine for Exploration

    NASA Technical Reports Server (NTRS)

    Kynard, Michael

    2011-01-01

    The NASA/industry team responsible for developing the J-2X upper stage engine for the Space Launch System (SLS) Program has made significant progress toward moving beyond the design phase and into production, assembly, and test of development hardware. The J-2X engine exemplifies the SLS Program goal of using proven technology and more than 50 years of United States spaceflight experience combined with modern manufacturing processes and approaches. It will power the second stage of the fully evolved SLS Program launch vehicle that will enable a return to human exploration of space beyond low Earth orbit. Pratt & Whitney Rocketdyne (PWR) is under contract to develop and produce the engine, leveraging its flight-proven LH2/LOX, gas generator cycle J-2 and RS-68 engine capabilities, recent experience with the X-33 aerospike XRS-2200 engine, and development knowledge of the J-2S tap-off cycle engine. The J-2X employs a gas generator operating cycle designed to produce 294,000 pounds of vacuum thrust in primary operating mode with its full nozzle extension. With a truncated nozzle extension suitable to support engine clustering on the stage, the nominal vacuum thrust level in primary mode is 285,000 pounds. It also has a secondary mode, during which it operates at 80 percent thrust by altering its mixture ratio. The J-2X development philosophy is based on proven hardware, an aggressive development schedule, and early risk reduction. NASA Marshall Space Flight Center (MSFC) and PWR began development of the J-2X in June 2006. The government/industry team of more than 600 people within NASA and PWR successfully completed the Critical Design Review (CDR) in November 2008, following extensive risk mitigation testing. Assembly of the first development engine was completed in May 2011, and the first engine test was conducted at the NASA Stennis Space Center (SSC), test stand A2, on 14 July 2011. Testing of the first development engine will continue through the autumn of 2011, be paused for test stand modifications to the passive diffuser, and then restart in the spring of 2012. This testing will be followed by specialized powerpack testing intended to examine the design and operating margins of the engine turbomachinery. The development plan beyond this point leads through more system-level engine testing of several samples, analytical model validation activities, functional and performance verification, and then ultimate certification to support human spaceflight. This paper will discuss the J-2X development background, provide top-level information on design and development planning, and explore some of the development challenges and mitigation activities pursued to date.

  11. Scalability, Timing, and System Design Issues for Intrinsic Evolvable Hardware

    NASA Technical Reports Server (NTRS)

    Hereford, James; Gwaltney, David

    2004-01-01

    In this paper we address several issues pertinent to intrinsic evolvable hardware (EHW). The first issue is scalability; namely, how the design space scales as the programming string for the programmable device gets longer. We develop a model for population size and the number of generations as a function of the programming string length, L, and show that the number of circuit evaluations is an O(L^2) process. We compare our model to several successful intrinsic EHW experiments and discuss the many implications of our model. The second issue that we address is the timing of intrinsic EHW experiments. We show that the processing time is a small part of the overall time to derive or evolve a circuit and that major improvements in processor speed alone will have only a minimal impact on improving the scalability of intrinsic EHW. The third issue we consider is the system-level design of intrinsic EHW experiments. We review what other researchers have done to break the scalability barrier and contend that the type of reconfigurable platform and the evolutionary algorithm are tied together and impose limits on each other.
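
    For readers who want a feel for the scaling claim, the following sketch shows how such a model behaves if, as a rough assumption, both population size and generation count grow linearly with the programming-string length L; the linear coefficients are placeholders, not values from the paper.

      def circuit_evaluations(L: int, pop_per_bit: float = 0.5,
                              gens_per_bit: float = 2.0) -> int:
          """Total evaluations = population size x generations: O(L^2)."""
          population = max(2, round(pop_per_bit * L))
          generations = max(1, round(gens_per_bit * L))
          return population * generations

      for L in (64, 128, 256, 512):
          # Doubling L roughly quadruples the number of circuit evaluations.
          print(L, circuit_evaluations(L))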

  12. Evolvable Hardware for Space Applications

    NASA Technical Reports Server (NTRS)

    Lohn, Jason; Globus, Al; Hornby, Gregory; Larchev, Gregory; Kraus, William

    2004-01-01

    This article surveys the research of the Evolvable Systems Group at NASA Ames Research Center. Over the past few years, our group has developed the ability to use evolutionary algorithms in a variety of NASA applications, ranging from spacecraft antenna design and fault tolerance for programmable logic chips to atomic force field parameter fitting, analog circuit design, and Earth-observing satellite scheduling. In some of these applications, evolutionary algorithms match or improve on human performance.

  13. Evolution of Analog Circuits on Field Programmable Transistor Arrays

    NASA Technical Reports Server (NTRS)

    Stoica, A.; Keymeulen, D.; Zebulum, R.; Thakoor, A.; Daud, T.; Klimeck, G.; Jin, Y.; Tawel, R.; Duong, V.

    2000-01-01

    Evolvable Hardware (EHW) refers to hardware design and self-reconfiguration using evolutionary/genetic mechanisms. The paper presents an overview of some key concepts of EHW and also describes a set of selected applications.
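
    A minimal sketch of the intrinsic-EHW loop implied here, in which fitness is measured on the physical device rather than in simulation; program_device and measure_response are hypothetical stand-ins for platform-specific calls, and the dummy scoring exists only to make the sketch runnable.

      import random

      def program_device(cfg):
          pass                 # placeholder: would download cfg to the FPTA/FPGA

      def measure_response(cfg):
          return sum(cfg)      # placeholder: would score the analog circuit in situ

      def fitness(cfg):
          program_device(cfg)
          return measure_response(cfg)

      def mutate(cfg, rate=0.02):
          return [bit ^ (random.random() < rate) for bit in cfg]

      def evolve(bits=64, pop_size=20, generations=50):
          pop = [[random.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
          for _ in range(generations):
              elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
              pop = elite + [mutate(random.choice(elite)) for _ in elite]
          return max(pop, key=fitness)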

  14. Solid-State Lighting Module (SSLM)

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The project's goal was to build a light-emitting-diode (LED)-based light fixture that is identical in fit, form, and function to the existing International Space Station (ISS) General Luminaire Assembly (GLA) light fixture and fly it on the ISS in early FY 2008 as a Station Detailed Test Objective (SDTO). Our design offers the following strengths: proven component hardware (the design uses components flown in other KSC-developed hardware); a heat-path thermal pad (LED array heat is transferred from the circuit board by a silicone pad, negating the need for a cooling fan); and variable colorimetry (the output light color can be changed by inserting different LED combinations).

  15. A Piloted Flight to a Near-Earth Object: A Feasibility Study

    NASA Technical Reports Server (NTRS)

    Landis, Rob; Korsmeyer, Dave; Abell, Paul; Adamo, Dan; Morrison, Dave; Lu, Ed; Lemke, Larry; Gonzales, Andy; Jones, Tom; Gershman, Bob

    2007-01-01

    This viewgraph presentation examines flight hardware elements of the Constellation Program (CxP) and the utilization of the Crew Exploration Vehicle (CEV), Evolvable Expendable Launch Vehicles (EELVs) and Ares launch vehicles for NEO missions.

  16. Space Flight Operations Center local area network

    NASA Technical Reports Server (NTRS)

    Goodman, Ross V.

    1988-01-01

    The existing Mission Control and Computer Center at JPL will be replaced by the Space Flight Operations Center (SFOC). One part of the SFOC is the LAN-based distribution system. The purpose of the LAN is to distribute the processed data among the various elements of the SFOC. The SFOC LAN will provide a robust subsystem that will support the Magellan launch configuration and future project adaptation. Its capabilities include (1) a proven cable medium as the backbone for the entire network; (2) hardware components that are reliable, varied, and follow OSI standards; (3) accurate and detailed documentation for fault isolation and future expansion; and (4) proven monitoring and maintenance tools.

  17. Knowledge Provenance in Semantic Wikis

    NASA Astrophysics Data System (ADS)

    Ding, L.; Bao, J.; McGuinness, D. L.

    2008-12-01

    Collaborative online environments with a technical Wiki infrastructure are becoming more widespread. One of the strengths of a Wiki environment is that it is relatively easy for numerous users to contribute original content and modify existing content (potentially originally generated by others). As more users begin to depend on informational content that is evolved by Wiki communities, it becomes more important to track the provenance of the information. Semantic Wikis expand upon traditional Wiki environments by adding computationally understandable encodings of some of the terms and relationships in Wikis. We have developed a semantic Wiki environment that is extended with provenance markup. Provenance of original contributions as well as modifications is encoded using the provenance markup component of the Proof Markup Language. The Wiki environment provides the provenance markup automatically, so users are not required to make specific encodings of author, contribution date, and modification trail. Further, our Wiki environment includes a search component that understands the provenance primitives and thus can be used to provide a provenance-aware search facility. We will describe the knowledge provenance infrastructure of our Semantic Wiki and show how it is being used as the foundation of our group web site as well as a number of project web sites.

  18. Markets and Models for Large-Scale Courseware Development.

    ERIC Educational Resources Information Center

    Bunderson, C. Victor

    Computer-assisted instruction (CAI) is not making an important, visible impact on the educational system of this country. Though its instructional value has been proven time after time, the high cost of the hardware and the lack of quality courseware are preventing CAI from becoming a market success. In order for CAI to reach its market potential…

  19. Tracking Provenance of Earth Science Data

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt; Yesha, Yelena; Halem, Milton

    2010-01-01

    Tremendous volumes of data have been captured, archived and analyzed. Sensors, algorithms and processing systems for transforming and analyzing the data are evolving over time. Web Portals and Services can create transient data sets on-demand. Data are transferred from organization to organization with additional transformations at every stage. Provenance in this context refers to the source of data and a record of the process that led to its current state. It encompasses the documentation of a variety of artifacts related to particular data. Provenance is important for understanding and using scientific datasets, and critical for independent confirmation of scientific results. Managing provenance throughout scientific data processing has gained interest lately and there are a variety of approaches. Large scale scientific datasets consisting of thousands to millions of individual data files and processes offer particular challenges. This paper uses the analogy of art history provenance to explore some of the concerns of applying provenance tracking to earth science data. It also illustrates some of the provenance issues with examples drawn from the Ozone Monitoring Instrument (OMI) Data Processing System (OMIDAPS) run at NASA's Goddard Space Flight Center by the first author.

  20. Space Industrialization. Volume 2: Opportunities, Markets and Programs

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The nature of space industrialization and the reasons for its promotion are examined. Increases in space industry activities to be anticipated from 1980 to 2010 are assessed. A variety of future scenarios against which space industrialization could evolve were developed, and the various industrial opportunities that might constitute that evolution were defined. The needs and markets of industry activities were quantitatively and qualitatively assessed. The various hardware requirements vs. time (space industry programs) as space industrialization evolves are derived and analyzed.

  1. Hardware Evolution of Analog Speed Controllers for a DC Motor

    NASA Technical Reports Server (NTRS)

    Gwaltney, David A.; Ferguson, Michael I.

    2003-01-01

    This viewgraph presentation provides information on the design of analog speed controllers for DC motors on aerospace systems. The presentation includes an overview of controller evolution, evolvable controller configuration, an emphasis on proportional-integral (PI) controllers, schematic diagrams, and experimental results.
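
    Since the PI controller is the conventional benchmark throughout this line of work, a minimal discrete-time version is sketched below; the gains, sample time, and drive limit are illustrative assumptions rather than values from the presentation.

      class PISpeedController:
          def __init__(self, kp: float, ki: float, dt: float, v_limit: float):
              self.kp, self.ki, self.dt, self.v_limit = kp, ki, dt, v_limit
              self.integral = 0.0

          def update(self, setpoint_rpm: float, measured_rpm: float) -> float:
              error = setpoint_rpm - measured_rpm
              self.integral += error * self.dt
              drive = self.kp * error + self.ki * self.integral
              # Clamp to the available motor drive voltage.
              return max(-self.v_limit, min(self.v_limit, drive))

      pi = PISpeedController(kp=0.01, ki=0.5, dt=0.001, v_limit=12.0)
      print(pi.update(setpoint_rpm=1500.0, measured_rpm=1350.0))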

  2. Update on Risk Reduction Activities for a Liquid Advanced Booster for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Crocker, Andrew M.; Doering, Kimberly B.; Meadows, Robert G.; Lariviere, Brian W.; Graham, Jerry B.

    2015-01-01

    The stated goals of NASA's Research Announcement for the Space Launch System (SLS) Advanced Booster Engineering Demonstration and/or Risk Reduction (ABEDRR) are to reduce risks leading to an affordable Advanced Booster that meets the evolved capabilities of SLS; and enable competition by mitigating targeted Advanced Booster risks to enhance SLS affordability. Dynetics, Inc. and Aerojet Rocketdyne (AR) formed a team to offer a wide-ranging set of risk reduction activities and full-scale, system-level demonstrations that support NASA's ABEDRR goals. For NASA's SLS ABEDRR procurement, Dynetics and AR formed a team to offer a series of full-scale risk mitigation hardware demonstrations for an affordable booster approach that meets the evolved capabilities of the SLS. To establish a basis for the risk reduction activities, the Dynetics Team developed a booster design that takes advantage of the flight-proven Apollo-Saturn F-1. Using NASA's vehicle assumptions for the SLS Block 2, a two-engine, F-1-based booster design delivers 150 mT (331 klbm) payload to LEO, 20 mT (44 klbm) above NASA's requirements. This enables a low-cost, robust approach to structural design. During the ABEDRR effort, the Dynetics Team has modified proven Apollo-Saturn components and subsystems to improve affordability and reliability (e.g., reduce parts counts, touch labor, or use lower cost manufacturing processes and materials). The team has built hardware to validate production costs and completed tests to demonstrate it can meet performance requirements. State-of-the-art manufacturing and processing techniques have been applied to the heritage F-1, resulting in a low recurring cost engine while retaining the benefits of Apollo-era experience. NASA test facilities have been used to perform low-cost risk-reduction engine testing. In early 2014, NASA and the Dynetics Team agreed to move additional large liquid oxygen/kerosene engine work under Dynetics' ABEDRR contract. Also led by AR, the objectives of this work are to demonstrate combustion stability and measure performance of a 500,000 lbf class Oxidizer-Rich Staged Combustion (ORSC) cycle main injector. A trade study was completed to investigate the feasibility, cost effectiveness, and technical maturity of a domestically produced Atlas V engine that could also potentially satisfy NASA SLS payload-to-orbit requirements via an advanced booster application. Engine physical dimensions and performance parameters resulting from this study provide the system level requirements for the ORSC risk reduction test article. The test article is scheduled to complete critical design review this fall and begin testing in 2017. Dynetics has also designed, developed, and built innovative tank and structure assemblies using friction stir welding to leverage recent NASA investments in manufacturing tools, facilities, and processes, significantly reducing development and recurring costs. The full-scale cryotank assembly was used to verify the structural design and prove affordable processes. Dynetics performed hydrostatic and cryothermal proof tests on the assembly to verify the assembly meets performance requirements. This paper will discuss the ABEDRR engine task and structures task achievements to date and the remaining effort through the end of the contract.

  3. Evolutionary Design of an X-Band Antenna for NASA's Space Technology 5 Mission

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Hornby, Gregory S.; Rodriguez-Arroyo, Adan; Linden, Derek S.; Kraus, William F.; Seufert, Stephen E.

    2003-01-01

    We present an evolved X-band antenna design and flight prototype currently on schedule to be deployed on NASA's Space Technology 5 spacecraft in 2004. The mission consists of three small satellites that will take science measurements in Earth's magnetosphere. The antenna was evolved to meet a challenging set of mission requirements, most notably the combination of wide beamwidth for a circularly-polarized wave and wide bandwidth. Two genetic algorithms were used: one allowed branching in the antenna arms and the other did not. The highest performance antennas from both algorithms were fabricated and tested. A hand-designed antenna was produced by the contractor responsible for the design and build of the mission antennas. The hand-designed antenna is a quadrifilar helix, and we present performance data for comparison to the evolved antennas. As of this writing, one of our evolved antenna prototypes is undergoing flight qualification testing. If successful, the resulting antenna would represent the first evolved hardware in space, and the first deployed evolved antenna.

  4. Energy Efficient Engine (E3) combustion system component technology performance report

    NASA Technical Reports Server (NTRS)

    Burrus, D. L.; Chahrour, C. A.; Foltz, H. L.; Sabla, P. E.; Seto, S. P.; Taylor, J. R.

    1984-01-01

    The Energy Efficient Engine (E3) combustor effort was conducted as part of the overall NASA/GE E3 Program. This effort included the selection of an advanced double-annular combustion system design. The primary intent of this effort was to evolve a design that meets the stringent emissions and life goals of the E3, as well as all of the usual performance requirements of combustion systems for modern turbofan engines. Numerous detailed design studies were conducted to define the features of the combustion system design. Development test hardware was fabricated, and an extensive testing effort was undertaken to evaluate the combustion system subcomponents in order to verify and refine the design. Technology derived from this effort was incorporated into the engine combustion hardware design. The advanced engine combustion system was then evaluated in component testing to verify the design intent. What evolved from this effort was an advanced combustion system capable of satisfying all of the combustion system design objectives and requirements of the E3.

  5. Computational provenance in hydrologic science: a snow mapping example.

    PubMed

    Dozier, Jeff; Frew, James

    2009-03-13

    Computational provenance--a record of the antecedents and processing history of digital information--is key to properly documenting computer-based scientific research. To support investigations in hydrologic science, we produce the daily fractional snow-covered area from NASA's moderate-resolution imaging spectroradiometer (MODIS). From the MODIS reflectance data in seven wavelengths, we estimate the fraction of each 500 m pixel that snow covers. The daily products have data gaps and errors because of cloud cover and sensor viewing geometry, so we interpolate and smooth to produce our best estimate of the daily snow cover. To manage the data, we have developed the Earth System Science Server (ES3), a software environment for data-intensive Earth science, with unique capabilities for automatically and transparently capturing and managing the provenance of arbitrary computations. Transparent acquisition avoids the scientists having to express their computations in specific languages or schemas in order for provenance to be acquired and maintained. ES3 models provenance as relationships between processes and their input and output files. It is particularly suited to capturing the provenance of an evolving algorithm whose components span multiple languages and execution environments.
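
    The process/file relationship model described for ES3 can be pictured with a toy lineage walk like the one below; the data structures and file names are illustrative, not the ES3 API.

      from collections import defaultdict

      reads = defaultdict(list)   # process name -> input files
      writes = {}                 # output file -> producing process

      def record(process, inputs, outputs):
          reads[process].extend(inputs)
          for f in outputs:
              writes[f] = process

      def lineage(f):
          """Return file f plus every upstream process and file."""
          if f not in writes:
              return [f]                      # primary input: no antecedents
          process = writes[f]
          out = [f, process]
          for parent in reads[process]:
              out += lineage(parent)
          return out

      record("estimate_fsca", ["mod09_reflectance.hdf"], ["fsca_daily.tif"])
      record("interp_smooth", ["fsca_daily.tif"], ["fsca_best_estimate.tif"])
      print(lineage("fsca_best_estimate.tif"))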

  6. Density matters: Approaches to settling ballast water discharge concentrations

    EPA Science Inventory

    A consensus has evolved that invasion risk increases with propagule pressure. However, translating this general principle into ecologically “acceptable” concentrations of organisms in ballast water has proven challenging. The treaty being promulgated by the International Maritime...

  7. Shuttle/Agena study. Annex A: Ascent agena configuration

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Details are presented on the Agena rocket vehicle description, vehicle interfaces, environmental constraints and test requirements, software programs, and ground support equipment. The basic design concept for the Ascent Agena is identified as optimization of reliability, flexibility, performance capabilities, and economy through the use of tested and flight-proven hardware. The development history of the Agenas A, B, and D is outlined and space applications are described.

  8. Applications technology satellite F&G /ATS F&G/ mobile terminal.

    NASA Technical Reports Server (NTRS)

    Greenbaum, L. A.; Baker, J. L.

    1971-01-01

    The mobile terminal is a flexible, easily transportable system. The terminal design incorporates a combination of unique and proven hardware to provide maximum utility consistent with reliability. The flexibility built into the system will make it possible to satisfy the requirements of the Applications Technology Satellite program concerned with conducting various spacecraft technology experiments. The terminal includes two parabolic antennas.

  9. A New Look at NASA: Strategic Research In Information Technology

    NASA Technical Reports Server (NTRS)

    Alfano, David; Tu, Eugene (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on research undertaken by NASA to facilitate the development of information technologies. Specific ideas covered here include: 1) Bio/nano technologies: biomolecular and nanoscale systems and tools for assembly and computing; 2) Evolvable hardware: autonomous self-improving, self-repairing hardware and software for survivable space systems in extreme environments; 3) High Confidence Software Technologies: formal methods, high-assurance software design, and program synthesis; 4) Intelligent Controls and Diagnostics: Next generation machine learning, adaptive control, and health management technologies; 5) Revolutionary computing: New computational models to increase capability and robustness to enable future NASA space missions.

  10. Earth to Moon Transfer: Direct vs Via Libration Points (L1, L2)

    NASA Technical Reports Server (NTRS)

    Condon, Gerald L.; Wilson, Samuel W.

    2004-01-01

    For some three decades, the Apollo-style mission has served as a proven baseline technique for transporting flight crews to the Moon and back with expendable hardware. This approach provides an optimal design for expeditionary missions, emphasizing operational flexibility in terms of safely returning the crew in the event of a hardware failure. However, its application is limited essentially to low-latitude lunar sites, and it leaves much to be desired as a model for exploratory and evolutionary programs that employ reusable space-based hardware. This study compares the performance requirements for a lunar orbit rendezvous mission type with one using the cislunar libration point (L1) as a stopover and staging point for access to arbitrary sites on the lunar surface. For selected constraints and mission objectives, it contrasts the relative uniformity of performance cost when the L1 staging point is used with the wide variation of cost for the Apollo-style lunar orbit rendezvous.

  11. Development and verification testing of automation and robotics for assembly of space structures

    NASA Technical Reports Server (NTRS)

    Rhodes, Marvin D.; Will, Ralph W.; Quach, Cuong C.

    1993-01-01

    A program was initiated within the past several years to develop operational procedures for automated assembly of truss structures suitable for large-aperture antennas. The assembly operations require the use of a robotic manipulator and are based on the principle of supervised autonomy to minimize crew resources. A hardware testbed was established to support development and evaluation testing. A brute-force automation approach was used to develop the baseline assembly hardware and software techniques. As the system matured and an operation was proven, upgrades were incorporated and assessed against the baseline test results. This paper summarizes the developmental phases of the program, the results of several assembly tests, the current status, and a series of proposed developments for additional hardware and software control capability. No problems that would preclude automated in-space assembly of truss structures have been encountered. The current system was developed at a breadboard level, and continued development at an enhanced level is warranted.

  12. Environmental Conditions for Space Flight Hardware: A Survey

    NASA Technical Reports Server (NTRS)

    Plante, Jeannette; Lee, Brandon

    2005-01-01

    Interest in generalizing the physical environments experienced by NASA hardware, spanning the natural Earth environment (on the launch pad), the man-made environment on Earth (storage, acceptance, and qualification testing), the launch environment, and the space environment, is motivated by an effort to find commonality among our hardware and thereby reduce cost and complexity. NASA is entering a period of increase in its number of planetary missions, and it is important to understand how our qualification requirements will evolve with and track these new environments. Environmental conditions are described for NASA projects in several ways for the different periods of the mission life cycle. At the beginning, the mission manager defines survivability requirements based on the mission length, orbit, launch date, launch vehicle, and other factors, such as the use of reactor engines. Margins are then applied to these values (temperature extremes, vibration extremes, radiation tolerances, etc.), and a new set of conditions is generalized for design requirements. Mission assurance documents will then assign an additional margin for reliability, and a third set of values is provided for use during testing. A fourth set of environmental condition values may evolve intermittently from heritage hardware that has been tested to a level beyond the actual mission requirement. These various sets of environment figures can make it quite confusing and difficult to capture common hardware environmental requirements. Environmental requirement information can be found in a wide variety of places. The most obvious is with the individual projects. We can easily get answers to questions about the temperature extremes being used and radiation tolerance goals, but it is more difficult to map the answers to the process that created these requirements: for design, for qualification, and for the actual environment with no margin applied. Not everyone assigned to a NASA project may have that kind of insight, as many have only the environmental requirement numbers needed to do their jobs but do not necessarily have a programmatic-level understanding of how all of the environmental requirements fit together.

  13. Software Defined Radios - Architectures, Systems and Functions

    NASA Technical Reports Server (NTRS)

    Sims, Herb

    2017-01-01

    Software Defined Radio (SDR) technology has been proven in the commercial sector since the early 1990s. Today's rapid advancement in mobile telephone reliability and power management capabilities exemplifies the effectiveness of SDR technology for the modern communications market. SDR technology offers the potential to revolutionize satellite transponder technology by increasing science data throughput capability by at least an order of magnitude. While the SDR is adaptive in nature and is "one-size-fits-all" by design, conventional transponders are built to a specific platform and must be redesigned for every new bus. The SDR uses a minimum amount of analog/Radio Frequency (RF) components to up/down-convert the RF signal to/from a digital format. Once analog data is digitized, all processing is performed using hardware logic. Typical SDR processes include filtering, modulation, up/down conversion, and demodulation. These innovations have reduced the cost of transceivers, decreased power requirements, and brought a commensurate reduction in volume. An additional payoff is the increased flexibility of the SDR: the same hardware can implement multiple transponder types by altering the hardware logic, with no change of analog hardware required, all of which can ultimately be accomplished in orbit.
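
    The digital chain the abstract names (filtering, up/down-conversion, demodulation) reduces, at its core, to a few array operations; a NumPy sketch under invented parameters:

      import numpy as np

      fs, f_rf = 1.0e6, 100.0e3                 # sample rate and carrier, Hz
      t = np.arange(2048) / fs
      rf = np.cos(2 * np.pi * f_rf * t)         # digitized RF input

      lo = np.exp(-2j * np.pi * f_rf * t)       # numeric local oscillator
      baseband = rf * lo                        # mix down to 0 Hz

      taps = np.ones(32) / 32                   # crude moving-average low-pass
      filtered = np.convolve(baseband, taps, mode="same")
      decimated = filtered[::16]                # reduce the output sample rate
      print(abs(decimated[:3]))                 # magnitude near 0.5 (carrier amplitude / 2)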

  14. Density matters: Approaches to settling ballast water discharge concentrations - 10/22

    EPA Science Inventory

    A consensus has evolved that invasion risk increases with propagule pressure. However, translating this general principle into ecologically “acceptable” concentrations of organisms in ballast water has proven challenging. The treaty being promulgated by the International Maritime...

  15. Polarization and studies of evolved star mass loss

    NASA Astrophysics Data System (ADS)

    Sargent, Benjamin; Srinivasan, Sundar; Riebel, David; Meixner, Margaret

    2012-05-01

    Polarization studies of astronomical dust have proven very useful in constraining its properties. Such studies are used to constrain the spatial arrangement, shape, composition, and optical properties of astronomical dust grains. Here we explore possible connections between astronomical polarization observations and our studies of mass loss from evolved stars. We are studying evolved star mass loss in the Large Magellanic Cloud (LMC) by using photometry from the Surveying the Agents of a Galaxy's Evolution (SAGE; PI: M. Meixner) Spitzer Space Telescope Legacy program. We use the radiative transfer program 2Dust to create our Grid of Red supergiant and Asymptotic giant branch ModelS (GRAMS), in order to model this mass loss. To model the emission of polarized light from evolved stars, however, we appeal to other radiative transfer codes. We probe how polarization observations might be used to constrain the dust shell and dust grain properties of the samples of evolved stars we are studying.

  16. Discovery & Interaction in Astro 101 Laboratory Experiments

    NASA Astrophysics Data System (ADS)

    Maloney, Frank Patrick; Maurone, Philip; DeWarf, Laurence E.

    2016-01-01

    The availability of low-cost, high-performance computing hardware and software has transformed the manner by which astronomical concepts can be re-discovered and explored in a laboratory that accompanies an astronomy course for arts students. We report on a strategy, begun in 1992, for allowing each student to understand fundamental scientific principles by interactively confronting astronomical and physical phenomena, through direct observation and by computer simulation. These experiments have evolved as: (a) the quality and speed of the hardware has greatly increased; (b) the corresponding hardware costs have decreased; (c) the students have become computer and Internet literate; and (d) the importance of computationally and scientifically literate arts graduates in the workplace has increased. We present the current suite of laboratory experiments, and describe the nature, procedures, and goals in this two-semester laboratory for liberal arts majors at the Astro 101 university level.

  17. Software system safety

    NASA Technical Reports Server (NTRS)

    Uber, James G.

    1988-01-01

    Software itself is not hazardous, but since software and hardware share common interfaces there is an opportunity for software to create hazards. Further, these software systems are complex, and proven methods for the design, analysis, and measurement of software safety are not yet available. Some past software failures, future NASA software trends, software engineering methods, and tools and techniques for various software safety analyses are reviewed. Recommendations to NASA are made based on this review.

  18. Independent Space Operators: Gaining a Voice in Design for Operability

    NASA Technical Reports Server (NTRS)

    McCleskey, Carey M.; Claybaugh, William R., II

    2006-01-01

    Affordable and sustainable space exploration remains an elusive goal. We explore the competitive advantages of evolving towards independent operators for space transportation in our economy. We consider the pros and cons of evolving business organizations that operate and maintain space transportation system assets independently from flight system manufacturers and from host spaceports. The case is made that a more competitive business climate for creating inherently operable, dependable, and supportable space transportation systems can evolve out of today's traditional vertical business model-a model within which the voice of the operator is often heard, but rarely acted upon during crucial design commitments and critical design processes. Thus new business models may be required, driven less by hardware consumption and more by space system utilization.

  19. An affordable RBCC-powered 2-stage small orbital payload transportation systems concept based on test-proven hardware

    NASA Astrophysics Data System (ADS)

    Escher, William J. D.

    1998-01-01

    Deriving from initial planning activity in early 1965, which led to NASA's Advanced Space Transportation Program (ASTP), an early-available, airbreathing/rocket combined-propulsion-powered "ultralight payload" launcher was defined at the conceptual design level. This system, named the "W Vehicle," was targeted to be a "second generation" successor to the original Bantam Lifter class, all-rocket powered systems presently being pursued by NASA and a selected set of its contractors. While the all-rocket vehicle is predicated on a fully expendable approach, the W-Vehicle system was to be a fully reusable 2-stage vehicle. The general (original) goal of the Bantam class of launchers was to orbit a 100 kg payload for a recurring per-launch cost of less than one million dollars. Reusability, as is the case for larger vehicles focusing on single-stage-to-orbit (SSTO) configurations, is considered the principal key to affordability. In the general context of a range of space transports covering 0.1 to 10 metric ton payloads, the W-Vehicle concept, predicated mainly on ground- and flight-test-proven hardware, is described in this paper, along with a nominal development schedule and budgetary estimate (recurring costs were not estimated).

  20. Testing for the J-2X Upper Stage Engine

    NASA Technical Reports Server (NTRS)

    Buzzell, James C.

    2010-01-01

    NASA selected the J-2X Upper Stage Engine in 2006 to power the upper stages of the Ares I crew launch vehicle and the Ares V cargo launch vehicle. Based on the proven Saturn J-2 engine, this new engine will provide 294,000 pounds of thrust and a specific impulse of 448 seconds, making it the most efficient gas generator cycle engine in history. The engine's guiding philosophy emerged from the Exploration Systems Architecture Study (ESAS) in 2005. Goals established then called for vehicles and components based, where feasible, on proven hardware from the Space Shuttle, commercial, and other programs, to perform the mission and provide an order of magnitude greater safety. Since that time, the team has made unprecedented progress. Ahead of the other elements of the Constellation Program architecture, the team has progressed through System Requirements Review (SRR), System Design Review (SDR), Preliminary Design Review (PDR), and Critical Design Review (CDR). As of February 2010, more than 100,000 development engine parts have been ordered and more than 18,000 delivered. Approximately 1,300 of more than 1,600 engine drawings were released for manufacturing. A major factor in the J-2X development approach to this point is testing operations of heritage J-2 engine hardware and new J-2X components to understand heritage performance, validate computer modeling of development components, mitigate risk early in development, and inform design trades. This testing has been performed both by NASA and its J-2X prime contractor, Pratt & Whitney Rocketdyne (PWR). This body of work increases the likelihood of success as the team prepares for testing the J-2X powerpack and first development engine in calendar 2011. This paper will provide highlights of J-2X testing operations, engine test facilities, development hardware, and plans.

  1. Towards a Standard for Provenance and Context for Preservation of Data for Earth System Science

    NASA Technical Reports Server (NTRS)

    Ramaprian, Hampapuram K.; Moses, John F.

    2011-01-01

    Long-term data sets with data from many missions are needed to study trends and validate model results, as is typical in Earth System Science research. Data and derived products originate from multiple missions (spaceborne, airborne and/or in situ) and from multiple organizations. During the missions, as well as past their termination, it is essential to preserve the data and products to support future studies. Key aspects of preservation are: preserving bits and ensuring data are uncorrupted, preserving understandability with appropriate documentation, and preserving reproducibility of science with appropriate documentation and other artifacts. Computer technology provides adequate standards to ensure that, with proper engineering, bits are preserved as hardware evolves. However, to ensure understandability and reproducibility, it is essential to plan ahead to preserve all the relevant data and information. There are currently no standards to identify the content that needs to be preserved, leading to non-uniformity in content and users not being sure whether preserved content is comprehensive. Each project, program, or agency can specify the items to be preserved as a part of its data management requirements. However, broader community consensus that cuts across organizational or national boundaries would be needed to ensure comprehensiveness, uniformity, and long-term utility of archived data. The Federation of Earth Science Information Partners (ESIP), a diverse network of scientists, data stewards, and technology developers, has a forum for ESIP members to collaborate on data preservation issues. During early 2011, members discussed the importance of developing a Provenance and Context Content Standard (PCCS) and developed an initial list of content items. This list is based on the outcome of a NASA and NOAA meeting held in 1998 under the auspices of the USGCRP, documentation requirements from NOAA, and our experience with some of the NASA Earth science missions. The items are categorized into the following eight high-level categories: Preflight/Pre-Operations, Products (Data), Product Documentation, Mission Calibration, Product Software, Algorithm Input, Validation, Software Tools.
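
    One way to make the eight content categories operational is to treat them as a checklist against which an archived product's manifest is validated; the sketch below assumes hypothetical key names and file names.

      PCCS_CATEGORIES = [
          "preflight_preoperations", "products", "product_documentation",
          "mission_calibration", "product_software", "algorithm_input",
          "validation", "software_tools",
      ]

      def missing_categories(manifest: dict) -> list:
          """Categories with no preserved artifacts yet."""
          return [c for c in PCCS_CATEGORIES if not manifest.get(c)]

      manifest = {"products": ["L2_ozone_v8.h5"], "validation": ["val_report.pdf"]}
      print(missing_categories(manifest))  # categories still to be preserved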

  2. Launch Vehicles

    NASA Image and Video Library

    2007-09-09

    Under the goals of the Vision for Space Exploration, Ares I is a chief component of the cost-effective space transportation infrastructure being developed by NASA's Constellation Program. This transportation system will safely and reliably carry human explorers back to the Moon, and then onward to Mars and other destinations in the solar system. The Ares I effort includes multiple project element teams at NASA centers and contract organizations around the nation, and is managed by the Exploration Launch Projects Office at NASA's Marshall Space Flight Center (MSFC). ATK Launch Systems near Brigham City, Utah, is the prime contractor for the first stage booster. ATK's subcontractor, United Space Alliance of Houston, is designing, developing and testing the parachutes at its facilities at NASA's Kennedy Space Center in Florida. NASA's Johnson Space Center in Houston hosts the Constellation Program and Orion Crew Capsule Project Office and provides test instrumentation and support personnel. Together, these teams are developing vehicle hardware, evolving proven technologies, and testing components and systems. Their work builds on powerful, reliable space shuttle propulsion elements and nearly a half-century of NASA space flight experience and technological advances. Ares I is an inline, two-stage rocket configuration topped by the Crew Exploration Vehicle, its service module, and a launch abort system. In this HD video image, the first stage reentry 1/2%-scale model is undergoing pressure measurements inside the wind tunnel testing facility at MSFC. (Highest resolution available)

  3. Developing an Intelligent Computer-Aided Trainer

    NASA Technical Reports Server (NTRS)

    Hua, Grace

    1990-01-01

    The Payload-assist module Deploys/Intelligent Computer-Aided Training (PD/ICAT) system was developed as a prototype for intelligent tutoring systems with the intention of seeing PD/ICAT evolve and produce a general ICAT architecture and development environment that can be adapted to a wide variety of training tasks. The proposed architecture is composed of a user interface, a domain expert, a training session manager, a trainee model, and a training scenario generator. The PD/ICAT prototype was developed in the LISP environment. Although it has been well received by its peers and users, it could not be delivered to its end users for practical use because of specific hardware and software constraints. To facilitate delivery of PD/ICAT to its users and to prepare for a more widely accepted development and delivery environment for future ICAT applications, we have ported this training system to a UNIX workstation and adopted use of a conventional language, C, and a C-based rule-based language, CLIPS. A rapid conversion of the PD/ICAT expert system to CLIPS was possible because the knowledge was basically represented as a forward chaining rule base. The resulting CLIPS rule base has been tested successfully in other ICATs as well. Therefore, the porting effort has proven to be a positive step toward our ultimate goal of building a general-purpose ICAT development environment.

  4. Bioreactors for plant cells: hardware configuration and internal environment optimization as tools for wider commercialization.

    PubMed

    Georgiev, Milen I; Weber, Jost

    2014-07-01

    Mass production of value-added molecules (including native and heterologous therapeutic proteins and enzymes) by plant cell culture has been demonstrated as an efficient alternative to classical technologies [i.e. natural harvest and chemical (semi)synthesis]. Numerous proof-of-concept studies have demonstrated the feasibility of scaling up plant cell culture-based processes (most notably to produce paclitaxel) and several commercial processes have been established so far. The choice of a suitable bioreactor design (or modification of an existing commercially available reactor) and the optimization of its internal environment have been proven as powerful tools toward successful mass production of desired molecules. This review highlights recent progress (mostly in the last 5 years) in hardware configuration and optimization of bioreactor culture conditions for suspended plant cells.

  5. Learning Computer Hardware by Doing: Are Tablets Better than Desktops?

    ERIC Educational Resources Information Center

    Raven, John; Qalawee, Mohamed; Atroshi, Hanar

    2016-01-01

    In this world of rapidly evolving technologies, educational institutions often struggle to keep up with change. Change often requires a state of readiness at both the micro and macro levels. This paper looks at a tertiary institution that undertook a significant technology change initiative by introducing tablet based components for teaching a…

  6. Identifying Predictors of Achievement in the Newly Defined Information Literacy: A Neural Network Analysis

    ERIC Educational Resources Information Center

    Sexton, Randall; Hignite, Michael; Margavio, Thomas M.; Margavio, Geanie W.

    2009-01-01

    Information Literacy is a concept that evolved as a result of efforts to move technology-based instructional and research efforts beyond the concepts previously associated with "computer literacy." While computer literacy was largely a topic devoted to knowledge of hardware and software, information literacy is concerned with students' abilities…

  7. The "Intelligent Classroom": Changing Teaching and Learning with an Evolving Technological Environment.

    ERIC Educational Resources Information Center

    Winer, Laura R.; Cooperstock, Jeremy

    2002-01-01

    Describes the development and use of the Intelligent Classroom collaborative project at McGill University that explored technology use to improve teaching and learning. Explains the hardware and software installation that allows for the automated capture of audio, video, slides, and handwritten annotations during a live lecture, with subsequent…

  8. Low-level rf control of Spallation Neutron Source: System and characterization

    NASA Astrophysics Data System (ADS)

    Ma, Hengjie; Champion, Mark; Crofford, Mark; Kasemir, Kay-Uwe; Piller, Maurice; Doolittle, Lawrence; Ratti, Alex

    2006-03-01

    The low-level rf control system currently commissioned throughout the Spallation Neutron Source (SNS) LINAC evolved from three design iterations over one year of intensive research and development. Its digital hardware implementation is efficient, and has succeeded in achieving a minimum latency of less than 150 ns, which is the key to accomplishing all-digital feedback control over the full bandwidth. The control bandwidth is analyzed in the frequency domain and characterized by testing its transient response. The hardware implementation also includes the provision of a time-shared input channel for superior phase-differential measurement between the cavity field and the reference. A companion cosimulation system for the digital hardware was developed to ensure reliable long-term supportability. A large effort has also been made in operations software development for practical issues such as process automation, cavity filling, beam-loading compensation, and cavity mechanical resonance suppression.
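
    A highly simplified software analogue of the all-digital field control loop can convey the idea: an I/Q error between the measured cavity field and the setpoint drives a proportional-integral correction each tick. The toy first-order cavity model and the gains below are invented for illustration.

      setpoint = 1.0 + 0.0j        # desired cavity field, I + jQ
      kp, ki = 0.5, 0.05           # illustrative proportional/integral gains
      integral = 0.0 + 0.0j
      field = 0.0 + 0.0j

      for _ in range(200):          # 200 control ticks
          error = setpoint - field
          integral += error
          drive = kp * error + ki * integral
          field += 0.2 * (drive - field)   # toy first-order cavity response
      print(abs(field))             # converges toward |setpoint| = 1.0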

  9. Environmental Controls and Life Support System (ECLSS) Design for a Multi-Mission Space Exploration Vehicle (MMSEV)

    NASA Technical Reports Server (NTRS)

    Stambaugh, Imelda; Baccus, Shelley; Buffington, Jessie; Hood, Andrew; Naids, Adam; Borrego, Melissa; Hanford, Anthony J.; Eckhardt, Brad; Allada, Rama Kumar; Yagoda, Evan

    2013-01-01

    Engineers at Johnson Space Center (JSC) are developing an Environmental Control and Life Support System (ECLSS) design for the Multi-Mission Space Exploration Vehicle (MMSEV). The purpose of the MMSEV is to extend the human exploration envelope for Lunar, Near Earth Object (NEO), or Deep Space missions by using pressurized exploration vehicles. The MMSEV, formerly known as the Space Exploration Vehicle (SEV), employs ground prototype hardware for various systems and tests it in manned and unmanned configurations. Eventually, the system hardware will evolve and become part of a flight vehicle capable of supporting different design reference missions. This paper will discuss the latest MMSEV ECLSS architectures developed for a variety of design reference missions, any work contributed toward the development of the ECLSS design, lessons learned from testing prototype hardware, and the plan to advance the ECLSS toward a flight design.

  10. Los Alamos radiation transport code system on desktop computing platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Briesmeister, J.F.; Brinkley, F.W.; Clark, B.A.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. These codes were originally developed many years ago and have undergone continual improvement. With a large initial effort and continued vigilance, the codes are easily portable from one type of hardware to another. The performance of scientific workstations (SWS) has evolved to the point that such platforms can be used routinely to perform sophisticated radiation transport calculations. As personal computer (PC) performance approaches that of the SWS, the hardware options for desktop radiation transport calculations expand considerably. The current status of the radiation transport codes within the LARTCS is described: MCNP, SABRINA, LAHET, ONEDANT, TWODANT, TWOHEX, and ONELD. Specifically, the authors discuss hardware systems on which the codes run and present code performance comparisons for various machines.

  11. Design Tools for Reconfigurable Hardware in Orbit (RHinO)

    NASA Technical Reports Server (NTRS)

    French, Mathew; Graham, Paul; Wirthlin, Michael; Larchev, Gregory; Bellows, Peter; Schott, Brian

    2004-01-01

    The Reconfigurable Hardware in Orbit (RHinO) project is focused on creating a set of design tools that facilitate and automate design techniques for reconfigurable computing in space, using SRAM-based field-programmable-gate-array (FPGA) technology. These tools leverage an established FPGA design environment and focus primarily on space-effects mitigation and power optimization. The project is creating software to automatically test and evaluate the single-event-upset (SEU) sensitivities of an FPGA design and insert mitigation techniques. Extensions to the tool suite will also allow evolvable algorithm techniques to reconfigure around single-event-latchup (SEL) events. In the power domain, tools are being created for dynamic power visualization and optimization. Thus, this technology seeks to enable the use of Reconfigurable Hardware in Orbit, via an integrated design tool suite aiming to reduce risk, cost, and design time of multimission reconfigurable space processors using SRAM-based FPGAs.
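
    A standard mitigation such tools can insert is triple modular redundancy (TMR), in which logic is replicated three times and a majority voter masks a single upset replica; a behavioural sketch (the bit patterns are arbitrary):

      def majority(a: int, b: int, c: int) -> int:
          """Bitwise 2-of-3 vote over three redundant outputs."""
          return (a & b) | (a & c) | (b & c)

      # One replica corrupted by a single-event upset is outvoted:
      assert majority(0b1010, 0b1010, 0b0011) == 0b1010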

  12. Environmental Controls and Life Support System (ECLSS) Design for a Multi-Mission Space Exploration Vehicle (MMSEV)

    NASA Technical Reports Server (NTRS)

    Stambaugh, Imelda; Baccus, Shelley; Naids, Adam; Hanford, Anthony

    2012-01-01

    Engineers at Johnson Space Center (JSC) are developing an Environmental Control and Life Support System (ECLSS) design for the Multi-Mission Space Exploration Vehicle (MMSEV). The purpose of the MMSEV is to extend the human exploration envelope for Lunar, Near Earth Object (NEO), or Deep Space missions by using pressurized exploration vehicles. The MMSEV, formerly known as the Space Exploration Vehicle (SEV), employs ground prototype hardware for various systems and tests it in manned and unmanned configurations. Eventually, the system hardware will evolve and become part of a flight vehicle capable of supporting different design reference missions. This paper will discuss the latest MMSEV ECLSS architectures developed for a variety of design reference missions, any work contributed toward the development of the ECLSS design, lessons learned from testing prototype hardware, and the plan to advance the ECLSS toward a flight design.

  13. Economics of the solid rocket booster for space shuttle

    NASA Technical Reports Server (NTRS)

    Rice, W. C.

    1979-01-01

    The paper examines economics of the solid rocket booster for the Space Shuttle. Costs have been held down by adapting existing technology to the 146 in. SRB selected, with NASA reducing the cost of expendables and reusing the expensive nonexpendable hardware. Drop tests of Titan III motor cases and nozzles proved that boosters can survive water impact at vertical velocities of 100 ft/sec so that SRB components can be reused. The cost of expendables was minimized by selecting proven propellants, insulation, and nozzle ablatives of known costs; the propellant has the lowest available cost formulation, and low cost ablatives, such as pitch carbon fibers, will be used when available. Thus, the use of proven technology and low cost expendables will make the SRB an economical booster for the Space Shuttle.

  14. Recent Developments in Hardware-in-the-Loop Formation Navigation and Control

    NASA Technical Reports Server (NTRS)

    Mitchell, Jason W.; Luquette, Richard J.

    2005-01-01

    The Formation Flying Test-Bed (FFTB) at NASA Goddard Space Flight Center (GSFC) provides a hardware-in-the-loop test environment for formation navigation and control. The facility is evolving as a modular, hybrid, dynamic simulation facility for end-to-end guidance, navigation, and control (GN&C) design and analysis of formation flying spacecraft. The core capabilities of the FFTB, as a platform for testing critical hardware and software algorithms in-the-loop, are reviewed with a focus on many recent improvements. Two significant upgrades to the FFTB are a message-oriented middleware (MOM) architecture and a software crosslink for inter-spacecraft ranging. The MOM architecture provides a common messaging bus for software agents, easing integration, and supporting the GSFC Mission Services Evolution Center (GMSEC) architecture via a software bridge. Additionally, the FFTB's hardware capabilities are expanding. Recently, two Low-Power Transceivers (LPTs) with ranging capability have been introduced into the FFTB. The LPT crosslinks will be connected to a modified Crosslink Channel Simulator (CCS), which applies realistic space-environment effects to the Radio Frequency (RF) signals produced by the LPTs.

  15. Development and Application of a Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Fulton, Christopher E.; Maul, William A.; Sowers, T. Shane

    2007-01-01

    This paper describes the development and initial demonstration of a Portable Health Algorithms Test (PHALT) System that is being developed by researchers at the NASA Glenn Research Center (GRC). The PHALT System was conceived as a means of evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT System allows systems health management algorithms to be developed in a graphical programming environment; to be tested and refined using system simulation or test data playback; and finally, to be evaluated in a real-time hardware-in-the-loop mode with a live test article. In this paper, PHALT System development is described through the presentation of a functional architecture, followed by the selection and integration of hardware and software. Also described is an initial real-time hardware-in-the-loop demonstration that used sensor data qualification algorithms to diagnose and isolate simulated sensor failures in a prototype Power Distribution Unit test-bed. Success of the initial demonstration is highlighted by the correct detection of all sensor failures and the absence of any real-time constraint violations.

  16. Superconducting Optoelectronic Circuits for Neuromorphic Computing

    NASA Astrophysics Data System (ADS)

    Shainline, Jeffrey M.; Buckley, Sonia M.; Mirin, Richard P.; Nam, Sae Woo

    2017-03-01

    Neural networks have proven effective for solving many difficult computational problems, yet implementing complex neural networks in software is computationally expensive. To explore the limits of information processing, it is necessary to implement new hardware platforms with large numbers of neurons, each with a large number of connections to other neurons. Here we propose a hybrid semiconductor-superconductor hardware platform for the implementation of neural networks and large-scale neuromorphic computing. The platform combines semiconducting few-photon light-emitting diodes with superconducting-nanowire single-photon detectors to behave as spiking neurons. These processing units are connected via a network of optical waveguides, and variable weights of connection can be implemented using several approaches. The use of light as a signaling mechanism overcomes fanout and parasitic constraints on electrical signals while simultaneously introducing physical degrees of freedom which can be employed for computation. The use of supercurrents achieves the low power density (1 mW/cm² at a 20-MHz firing rate) necessary to scale to systems with enormous entropy. Estimates comparing the proposed hardware platform to a human brain show that with the same number of neurons (10^11) and 700 independent connections per neuron, the hardware presented here may achieve an order of magnitude improvement in synaptic events per second per watt.
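
    The abstract's figure of merit can be written as a simple bookkeeping formula; the symbols below are generic placeholders, not taken from the paper, and the quoted power density would fix the denominator per unit chip area:

    ```latex
    % Synaptic events per second per watt for a network of N neurons, each with
    % k connections firing at mean rate f, dissipating total power P:
    \[
      \mathrm{events/s/W} \;=\; \frac{N \cdot k \cdot f}{P}
    \]
    % The abstract gives N = 10^{11} and k = 700; P would follow from the quoted
    % 1 mW/cm^2 at a 20-MHz firing rate times the total circuit area.
    ```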

  17. Provenance evolution in the northern South China Sea and its implication of paleo-drainage systems from Eocene to Miocene

    NASA Astrophysics Data System (ADS)

    Cui, Y.; Shao, L.; Qiao, P.

    2017-12-01

    Geochemical analysis and detrital zircon U-Pb geochronology are combined to investigate the "source to sink" patterns of the northern South China Sea (SCS) from the Eocene to the Miocene. The evolutionary history of the surrounding drainage systems has received close attention in comparison with the sedimentary characteristics of the SCS basins. During the Eocene, rapid local provenances prevailed while large-scale fluvial transport was still evolving. Since the early Oligocene, sediments from South China were delivered more abundantly to the northeastern Pearl River Mouth Basin, supplemented by Dongsha volcanism. Aside from intrabasinal provenances, long-distance transport began to play a significant role in the Zhu1 Depression, possibly reaching parts of the western and southern Baiyun Sag. The western Qiongdongnan Basin may have received sediments from central Vietnam, with its eastern area more strongly affected by Hainan Island and the Southern Uplift. In the late Oligocene, owing to drastic sea-level changes and rapid exhumation, mafic to ultramafic sediments were transported in abundance from the Kontum Massif to the Central Depression, while multiple provenances exerted a combined influence on the eastern sedimentary sequences. The southern Baiyun Sag was also affected by an increased supply from the western Shenhu Uplift or even central Vietnam. The overall pattern has not changed greatly since the early Miocene, but long-distance transport has become dominant in the northern SCS. Controlled by regional tectonic cycles, the Pearl River gradually evolved to its present scale and influenced basinal provenances in several stages. The Zhu1 Depression received part of its sediment from Pearl River tributaries in the early Oligocene, while the northern Zhu2 Depression was not supplied with abundant material until the late Oligocene. Meanwhile, although the detailed transport routes remain uncertain and controversial, a paleo-channel spanning the whole Qiongdongnan Basin is presumed to have supplied large amounts of mafic to ultramafic sediments from central Vietnam drainage systems to regions farther east, even reaching the Baiyun Sag, since the late Oligocene or earlier. This channel was later likely replaced by the adjacent provenance of Hainan Island after the early Miocene.

  18. Polymorphic Electronic Circuits

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian

    2004-01-01

    Polymorphic electronics is a nascent technological discipline that involves, among other things, designing the same circuit to perform different analog and/or digital functions under different conditions. For example, a circuit can be designed to function as an OR gate or an AND gate, depending on the temperature (see figure). Polymorphic electronics can also be considered a subset of polytronics, which is a broader technological discipline in which optical and possibly other information- processing systems could also be designed to perform multiple functions. Polytronics is an outgrowth of evolvable hardware (EHW). The basic concepts and some specific implementations of EHW were described in a number of previous NASA Tech Briefs articles. To recapitulate: The essence of EHW is to design, construct, and test a sequence of populations of circuits that function as incrementally better solutions of a given design problem through the selective, repetitive connection and/or disconnection of capacitors, transistors, amplifiers, inverters, and/or other circuit building blocks. The evolution is guided by a search-and-optimization algorithm (in particular, a genetic algorithm) that operates in the space of possible circuits to find a circuit that exhibits an acceptably close approximation of the desired functionality. The evolved circuits can be tested by computational simulation (in which case the evolution is said to be extrinsic), tested in real hardware (in which case the evolution is said to be intrinsic), or tested in random sequences of computational simulation and real hardware (in which case the evolution is said to be mixtrinsic).
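
    As a concrete illustration of the search loop described above, here is a minimal genetic-algorithm sketch for extrinsic evolution; every name is hypothetical, and the simulate function stands in for a circuit simulator scoring a candidate configuration against the desired response:

    ```python
    # Minimal sketch of extrinsic hardware evolution: a genetic algorithm
    # searches over switch configurations, scoring candidates in simulation.
    import random

    GENOME_LEN = 64                    # switch states connecting building blocks
    POP_SIZE, GENERATIONS, MUT_RATE = 50, 100, 0.02

    def simulate(genome):              # placeholder fitness: distance to a target
        target = [1, 0] * (GENOME_LEN // 2)
        return -sum(g != t for g, t in zip(genome, target))

    def mutate(genome):
        return [1 - g if random.random() < MUT_RATE else g for g in genome]

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=simulate, reverse=True)   # rank by simulated fitness
        parents = pop[:POP_SIZE // 2]          # truncation selection
        pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                         for _ in range(POP_SIZE - len(parents))]
    print("best fitness:", simulate(max(pop, key=simulate)))
    ```

    In intrinsic evolution, the simulate call would instead download each configuration to the physical device and measure the real circuit's response.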

  19. Solid Rocket Booster (SRB) Flight System Integration at Its Best

    NASA Technical Reports Server (NTRS)

    Wood, T. David; Kanner, Howard S.; Freeland, Donna M.; Olson, Derek T.

    2011-01-01

    The Solid Rocket Booster (SRB) element integrates all the subsystems needed for ascent flight, entry, and recovery of the combined Booster and Motor system. These include the structures, avionics, thrust vector control, pyrotechnic, range safety, deceleration, thermal protection, and retrieval systems. This represents the only human-rated, recoverable and refurbishable solid rocket ever developed and flown. Challenges included subsystem integration, thermal environments and severe loads (including water impact), sometimes resulting in hardware attrition. Several of the subsystems evolved during the program through design changes. These included the thermal protection system, range safety system, parachute/recovery system, and others. Because the system was recovered, the SRB was ideal for data and imagery acquisition, which proved essential for understanding loads, environments and system response. The three main parachutes that lower the SRBs to the ocean are the largest parachutes ever designed, and the SRBs are the largest structures ever to be lowered by parachutes. SRB recovery from the ocean was a unique process and represented a significant operational challenge, requiring personnel, facilities, transportation, and ground support equipment. The SRB element achieved reliability via extensive system testing and checkout, redundancy management, and a thorough postflight assessment process. However, the in-flight data and postflight assessment process revealed the hardware was affected much more strongly than originally anticipated. Assembly and integration of the booster subsystems required acceptance testing of reused hardware components for each build. Extensive testing was done to assure hardware functionality at each level of stage integration. Because the booster element is recoverable, subsystems were available for inspection and testing postflight, a capability unique to the Shuttle launch vehicle. Problems were noted and corrective actions were implemented as needed. The postflight assessment process was quite detailed and a significant portion of flight operations. The SRBs provided fully redundant critical systems including thrust vector control, mission critical pyrotechnics, avionics, and the parachute recovery system. The design intent was to lift off with full redundancy. On occasion, the redundancy management scheme was needed during flight operations. This paper describes some of the design challenges and technical issues, how the design evolved with time, and key areas where hardware reusability contributed to improved system level understanding.

  20. ALS rocket engine combustion devices design and demonstration

    NASA Technical Reports Server (NTRS)

    Arreguin, Steve

    1989-01-01

    Work performed during Phase One is summarized, and the significant technical and programmatic accomplishments of this period are documented. Besides a summary of the results, methodologies, trade studies, design, fabrication, and hardware conditions, the following are included: the evolving Maintainability Plan, the Reliability Program Plan, the Failure Summary and Analysis Report, and the Failure Mode and Effect Analysis.

  1. The Blended Learning Shift: New Report Shows Blended Learning Growing in U.S. Private Schools

    ERIC Educational Resources Information Center

    Warren, Travis

    2015-01-01

    The technology conversation in independent schools has evolved considerably over the last five years. In particular, it has moved beyond the question of how schools can augment traditional classroom practices with hardware (laptops, interactive whiteboards, etc.) to the question of how software can improve outcomes and enable new learning models,…

  2. NASA Docking System (NDS) Technical Integration Meeting

    NASA Technical Reports Server (NTRS)

    Lewis, James L.

    2010-01-01

    This slide presentation reviews the NASA Docking System (NDS) as NASA's implementation of the International Docking System Standard (IDSS). The goals of the NDS are to build on proven technologies previously demonstrated in flight and to advance the state of the art of docking systems by incorporating Low Impact Docking System (LIDS) technology into the NDS. A Hardware Demonstration was included in the meeting, and there was discussion about software, NDS major system interfaces, integration information, schedule, and future upgrades.

  3. Human factors in the Naval Air Systems Command: Computer based training

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seamster, T.L.; Snyder, C.E.; Terranova, M.

    1988-01-01

    Military standards applied to private sector contracts have a substantial effect on the quality of Computer Based Training (CBT) systems procured for the Naval Air Systems Command. This study evaluated standards regulating the following areas in CBT development and procurement: interactive training systems, cognitive task analysis, and CBT hardware. The objective was to develop some high-level recommendations for evolving standards that will govern the next generation of CBT systems. One of the key recommendations is that there be an integration of the instructional systems development, the human factors engineering, and the software development standards. Recommendations were also made for task analysis and CBT hardware standards. (9 refs., 3 figs.)

  4. Data management system advanced development

    NASA Technical Reports Server (NTRS)

    Douglas, Katherine; Humphries, Terry

    1990-01-01

    The Data Management System (DMS) Advanced Development task provides for the development of concepts, new tools, DMS services, and for the testing of the Space Station DMS hardware and software. It also provides for the development of techniques capable of determining the effects of system changes/enhancements, additions of new technology, and/or hardware and software growth on system performance. This paper will address the built-in characteristics which will support network monitoring requirements in the design of the evolving DMS network implementation, functional and performance requirements for a real-time, multiprogramming, multiprocessor operating system, and the possible use of advanced development techniques such as expert systems and artificial intelligence tools in the DMS design.

  5. The IRIS-GUS Shuttle Borne Upper Stage System

    NASA Technical Reports Server (NTRS)

    Tooley, Craig; Houghton, Martin; Bussolino, Luigi; Connors, Paul; Broudeur, Steve (Technical Monitor)

    2002-01-01

    This paper describes the Italian Research Interim Stage - Gyroscopic Upper Stage (IRIS-GUS) upper stage system that will be used to launch NASA's Triana Observatory from the Space Shuttle. Triana is a pathfinder earth science mission being executed on a rapid schedule and a small budget; therefore, the mission's upper stage solution had to be a system that could be fielded quickly at relatively low cost and risk. The building of the IRIS-GUS system was necessary because NASA lost the capability to launch moderately sized upper stage missions from the Space Shuttle when the PAM-D system was retired. The IRIS-GUS system restores this capability. The resulting system is a hybrid which mates the existing, flight-proven IRIS (Italian Research Interim Stage) airborne support equipment to a new upper stage, the Gyroscopic Upper Stage (GUS), built by the GSFC for Triana. Although a new system, the GUS exploits flight-proven hardware and design approaches in most subsystems, in some cases implementing proven design approaches with state-of-the-art electronics. This paper describes the IRIS-GUS upper stage system elements, performance capabilities, and payload interfaces.

  6. Solar-terrestrial data access distribution and archiving

    NASA Technical Reports Server (NTRS)

    1984-01-01

    It is recommended that a central data catalog and data access network (CDC/DAN) for solar-terrestrial research be established, initially as a NASA pilot program. The system is envisioned to be flexible and to evolve as funds permit, starting from a catalog to an access network for high-resolution data. The report describes the various functional requirements for the CDC/DAN, but does not specify the hardware and software architectures as these are constantly evolving. The importance of a steering committee, working with the CDC/DAN organization, to provide scientific guidelines for the data catalog and for data storage, access, and distribution is also stressed.

  7. Urban ecosystems: What would Tansley do?

    Treesearch

    Steward T. A. Pickett; J. M. Grove

    2009-01-01

    The ecosystem concept was introduced in ecology originally to solve problems associated with theories of succession and ecological communities. It has evolved to become one of ecology's fundamental ideas, and has proven to be applicable to a wide variety of research questions and applications. However, there is controversy about whether or how well the ecosystem...

  8. Provenance-Based Approaches to Semantic Web Service Discovery and Usage

    ERIC Educational Resources Information Center

    Narock, Thomas William

    2012-01-01

    The World Wide Web Consortium defines a Web Service as "a software system designed to support interoperable machine-to-machine interaction over a network." Web Services have become increasingly important both within and across organizational boundaries. With the recent advent of the Semantic Web, web services have evolved into semantic…

  9. Starsat: A space astronomy facility

    NASA Technical Reports Server (NTRS)

    Hamilton, E. C.; Mundie, C. E.; Korsch, D.; Love, R. A.; Fuller, F. S.; Parker, J. R.; Fritz, C. G.; White, R. E.; Giudici, R. J.

    1976-01-01

    Preliminary design and analyses of a versatile telescope for Spacelab missions are presented. The system is an all-reflective Korsch three-mirror telescope with excellent performance characteristics over a wide field and a broad spectral range, making it particularly suited for ultraviolet observations. The system concept evolved around the utilization of existing hardware and designs developed for other astronomy space projects.

  10. User assembly and servicing system for Space Station, an evolving architecture approach

    NASA Technical Reports Server (NTRS)

    Lavigna, Thomas A.; Cline, Helmut P.

    1988-01-01

    On-orbit assembly and servicing of a variety of scientific and applications hardware systems is expected to be one of the Space Station's primary functions. The hardware to be serviced will include the attached payloads resident on the Space Station, the free-flying satellites and co-orbiting platforms brought to the Space Station, and the polar orbiting platforms. The requirements for assembling and servicing such a broad spectrum of missions have led to the development of an Assembly and Servicing System Architecture that is composed of a complex array of support elements. This array comprises US elements, both Space Station and non-Space Station, and elements provided by Canada to the Space Station Program. For any given servicing or assembly mission, the necessary support elements will be employed in an integrated manner to satisfy the mission-specific needs. The structure of the User Assembly and Servicing System Architecture and the manner in which it will evolve throughout the duration of the phased Space Station Program are discussed. Particular emphasis will be placed upon the requirements to be accommodated in each phase, and the development of a logical progression of capabilities to meet these requirements.

  11. NASA's Space Launch System Program Update

    NASA Technical Reports Server (NTRS)

    May, Todd; Lyles, Garry

    2015-01-01

    Hardware and software for the world's most powerful launch vehicle for exploration is being welded, assembled, and tested today in high bays, clean rooms and test stands across the United States. NASA's Space Launch System (SLS) continued to make significant progress in the past year, including firing tests of both main propulsion elements, manufacturing of flight hardware, and the program Critical Design Review (CDR). Developed with the goals of safety, affordability, and sustainability, SLS will deliver unmatched capability for human and robotic exploration. The initial Block 1 configuration will deliver more than 70 metric tons (t) (154,000 pounds) of payload to low Earth orbit (LEO). The evolved Block 2 design will deliver some 130 t (286,000 pounds) to LEO. Both designs offer enormous opportunity and flexibility for larger payloads, simplifying payload design as well as ground and on-orbit operations, shortening interplanetary transit times, and decreasing overall mission risk. Over the past year, every vehicle element has manufactured or tested hardware, including flight hardware for Exploration Mission 1 (EM-1). This paper will provide an overview of the progress made over the past year and provide a glimpse of upcoming milestones on the way to a 2018 launch readiness date.

  12. Training basic laparoscopic skills using a custom-made video game.

    PubMed

    Goris, Jetse; Jalink, Maarten B; Ten Cate Hoedemaker, Henk O

    2014-09-01

    Video games are accepted and used for a wide variety of applications. In the medical world, research on the positive effects of playing games on basic laparoscopic skills is rapidly increasing. Although these benefits have been proven several times, no institution actually uses video games for surgical training. This Short Communication describes some of the theoretical background, development, and underlying educational foundations of a specifically designed video game and custom-made hardware that take advantage of the positive effects of games on basic laparoscopic skills.

  13. Design requirements for SRB production control system. Volume 3: Package evaluation, modification and hardware

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The software package evaluation was designed to analyze commercially available, field-proven, production control or manufacturing resource planning management technology and software packages. The analysis was conducted by comparing SRB production control software requirements and the conceptual system design to software package capabilities. The methodology of evaluation and the findings at each stage of evaluation are described. Topics covered include: vendor listing; request for information (RFI) document; RFI response rate and quality; RFI evaluation process; and capabilities versus requirements.

  14. Development of a Radio Frequency Space Environment Path Emulator for Evaluating Spacecraft Ranging Hardware

    NASA Technical Reports Server (NTRS)

    Mitchell, Jason W.; Baldwin, Philip J.; Kurichh, Rishi; Naasz, Bo J.; Luquette, Richard J.

    2007-01-01

    The Formation Flying Testbed (FFTB) at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) provides a hardware-in-the-loop test environment for formation navigation and control. The facility is evolving as a modular, hybrid, dynamic simulation facility for end-to-end guidance, navigation, and control (GN&C) design and analysis of formation flying spacecraft. The core capabilities of the FFTB, as a platform for testing critical hardware and software algorithms in-the-loop, have expanded to include S-band Radio Frequency (RF) modems for inter-spacecraft communication and ranging. To enable realistic simulations that require RF ranging sensors for relative navigation, a mechanism is needed to buffer the RF signals exchanged between spacecraft that accurately emulates the dynamic environment through which the RF signals travel, including the effects of the medium, moving platforms, and radiated power. The Path Emulator for RF Signals (PERFS), currently under development at NASA GSFC, provides this capability. The function and performance of a prototype device are presented.

  15. Characterization of a Prototype Radio Frequency Space Environment Path Emulator for Evaluating Spacecraft Ranging Hardware

    NASA Technical Reports Server (NTRS)

    Mitchell, Jason W.; Baldwin, Philip J.; Kurichh, Rishi; Naasz, Bo J.; Luquette, Richard J.

    2007-01-01

    The Formation Flying Testbed (FFTB) at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) provides a hardware-in-the-loop test environment for formation navigation and control. The facility is evolving as a modular, hybrid, dynamic simulation facility for end-to-end guidance, navigation and control (GN&C) design and analysis of formation flying spacecraft. The core capabilities of the FFTB, as a platform for testing critical hardware and software algorithms in-the-loop, have expanded to include S-band Radio Frequency (RF) modems for interspacecraft communication and ranging. To enable realistic simulations that require RF ranging sensors for relative navigation, a mechanism is needed to buffer the RF signals exchanged between spacecraft that accurately emulates the dynamic environment through which the RF signals travel, including the effects of the medium, moving platforms, and radiated power. The Path Emulator for Radio Frequency Signals (PERFS), currently under development at NASA GSFC, provides this capability. The function and performance of a prototype device are presented.
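
    To make the emulation task concrete, here is a toy sketch of the two dominant path effects, propagation delay and free-space path loss; the function name and models are assumptions for illustration, not the PERFS design:

    ```python
    # Toy RF path emulator: delay a sampled signal by the light-time over the
    # inter-spacecraft range and attenuate it by free-space path loss (FSPL).
    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def emulate_path(samples, fs, range_m, carrier_hz):
        delay = int(round(range_m / C * fs))                 # delay in samples
        fspl = (4 * np.pi * range_m * carrier_hz / C) ** 2   # linear power loss
        delayed = np.concatenate([np.zeros(delay), samples])[:len(samples)]
        return delayed / np.sqrt(fspl)                       # amplitude scaling

    # Example: 1 ms of a tone sampled at 10 MHz over a 10 km S-band crosslink.
    t = np.arange(0, 1e-3, 1e-7)
    out = emulate_path(np.sin(2 * np.pi * 1e5 * t), 1e7, 1e4, 2.2e9)
    ```

    A real emulator would also apply Doppler shift from the relative dynamics and update the delay continuously as the formation geometry changes.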

  16. Spacelab dedicated discipline laboratory (DDL) utilization concept

    NASA Technical Reports Server (NTRS)

    Wunsch, P.; De Sanctis, C.

    1984-01-01

    The dedicated discipline laboratory (DDL) concept is a new approach for implementing Spacelab missions that involves the grouping of science instruments into mission complements of single or compatible disciplines. These complements are evolved in such a way that the DDL payloads can be left intact between flights. This requires the dedication of flight hardware to specific payloads on a long-term basis and raises the concern that the purchase of additional flight hardware will be required to implement the DDL program. However, the payoff is expected to result in significant savings in mission engineering and assembly effort. A study has been conducted recently to quantify both the requirements for new hardware and the projected mission cost savings. It was found that some incremental additions to the current inventory will be needed to fly the mission model assumed. Cost savings of $2M to $6.5M per mission were projected in areas analyzed in depth, and additional savings may occur in areas for which detailed cost data were not available.

  17. NASA Operational Simulator for Small Satellites: Tools for Software Based Validation and Verification of Small Satellites

    NASA Technical Reports Server (NTRS)

    Grubb, Matt

    2016-01-01

    The NASA Operational Simulator for Small Satellites (NOS3) is a suite of tools to aid in areas such as software development, integration test (IT), mission operations training, verification and validation (V&V), and software systems check-out. NOS3 provides a software development environment, a multi-target build system, an operator interface/ground station, dynamics and environment simulations, and software-based hardware models. NOS3 enables the development of flight software (FSW) early in the project life cycle, when access to hardware is typically not available. For small satellites there are extensive lead times on many of the commercial-off-the-shelf (COTS) components as well as limited funding for engineering test units (ETU). Considering the difficulty of providing a hardware test-bed to each developer and tester, hardware models are built from characteristic data or manufacturers' data sheets for each individual component. The fidelity of each hardware model is such that FSW executes unaware that physical hardware is not present. This allows binaries to be compiled for both the simulation environment and the flight computer without changing the FSW source code. For hardware models that provide data dependent on the environment, such as a GPS receiver or magnetometer, an open-source tool from NASA GSFC (the 42 spacecraft simulator) is used to provide the necessary data. The underlying infrastructure used to transfer messages between FSW and the hardware models can also be used to monitor, intercept, and inject messages, which has proven to be beneficial for V&V of larger missions such as the James Webb Space Telescope (JWST). As hardware is procured, drivers can be added to the environment to enable hardware-in-the-loop (HWIL) testing. When strict time synchronization is not vital, any number of combinations of hardware components and software-based models can be tested. The open-source operator interface used in NOS3 is COSMOS from Ball Aerospace. For testing, plug-ins are implemented in COSMOS to control the NOS3 simulations, while the command and telemetry tools available in COSMOS are used to communicate with FSW. NOS3 is actively being used for FSW development and component testing of the Simulation-to-Flight 1 (STF-1) CubeSat. As NOS3 matures, hardware models have been added for common CubeSat components such as Novatel GPS receivers, ClydeSpace electrical power systems and batteries, ISISpace antenna systems, etc. In the future, NASA IV&V plans to distribute NOS3 to other CubeSat developers and release the suite to the open-source community.
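
    The hardware-model idea lends itself to a small sketch; the class and message names below are hypothetical, not the NOS3 API, but show how a software model can answer the same bus traffic the FSW would send to the physical component, while the bus exposes a tap for monitoring or fault injection:

    ```python
    # Sketch of a software hardware-model behind a message bus (names invented).
    class MessageBus:
        def __init__(self):
            self.handlers, self.taps = {}, []
        def register(self, addr, handler):
            self.handlers[addr] = handler
        def send(self, addr, msg):
            for tap in self.taps:            # monitoring / fault-injection hook
                msg = tap(addr, msg)
            return self.handlers[addr](msg)

    class GpsModel:
        """Stands in for a GPS receiver; data would come from an environment sim."""
        def __call__(self, msg):
            if msg == b"GET_POS":
                return b"POS,6771.0,0.0,0.0"  # canned state (km) for the sketch
            return b"NAK"

    bus = MessageBus()
    bus.register("gps", GpsModel())
    bus.taps.append(lambda addr, msg: msg)    # replace to inject corrupted traffic
    assert bus.send("gps", b"GET_POS").startswith(b"POS")
    ```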

  18. Ares I Upper Stage Pressure Tests in Wind Tunnel

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Under the goals of the Vision for Space Exploration, Ares I is a chief component of the cost-effective space transportation infrastructure being developed by NASA's Constellation Program. This transportation system will safely and reliably carry human explorers back to the moon, and then onward to Mars and other destinations in the solar system. The Ares I effort includes multiple project element teams at NASA centers and contract organizations around the nation, and is managed by the Exploration Launch Projects Office at NASA's Marshall Space Flight Center (MSFC). ATK Launch Systems near Brigham City, Utah, is the prime contractor for the first stage booster. ATK's subcontractor, United Space Alliance of Houston, is designing, developing and testing the parachutes at its facilities at NASA's Kennedy Space Center in Florida. NASA's Johnson Space Center in Houston hosts the Constellation Program and Orion Crew Capsule Project Office and provides test instrumentation and support personnel. Together, these teams are developing vehicle hardware, evolving proven technologies, and testing components and systems. Their work builds on powerful, reliable space shuttle propulsion elements and nearly a half-century of NASA space flight experience and technological advances. Ares I is an inline, two-stage rocket configuration topped by the Crew Exploration Vehicle, its service module, and a launch abort system. In this HD video image, the first stage reentry 1/2% scale model is undergoing pressure measurements inside the wind tunnel testing facility at MSFC. (Highest resolution available)

  19. Launch Vehicles

    NASA Image and Video Library

    2007-08-09

    Under the goals of the Vision for Space Exploration, Ares I is a chief component of the cost-effective space transportation infrastructure being developed by NASA's Constellation Program. This transportation system will safely and reliably carry human explorers back to the moon, and then onward to Mars and other destinations in the solar system. The Ares I effort includes multiple project element teams at NASA centers and contract organizations around the nation, and is managed by the Exploration Launch Projects Office at NASA's Marshall Space Flight Center (MSFC). ATK Launch Systems near Brigham City, Utah, is the prime contractor for the first stage booster. ATK's subcontractor, United Space Alliance of Houston, is designing, developing and testing the parachutes at its facilities at NASA's Kennedy Space Center in Florida. NASA's Johnson Space Center in Houston hosts the Constellation Program and Orion Crew Capsule Project Office and provides test instrumentation and support personnel. Together, these teams are developing vehicle hardware, evolving proven technologies, and testing components and systems. Their work builds on powerful, reliable space shuttle propulsion elements and nearly a half-century of NASA space flight experience and technological advances. Ares I is an inline, two-stage rocket configuration topped by the Crew Exploration Vehicle, its service module, and a launch abort system. This HD video image depicts confidence testing of a manufactured aluminum panel that will be used to fabricate the Ares I upper stage barrel. In this test, bent aluminum is stressed to the breaking point and thoroughly examined. The panels are manufactured by AMRO Manufacturing located in El Monte, California. (Highest resolution available)

  20. Launch Vehicles

    NASA Image and Video Library

    2007-07-09

    Under the goals of the Vision for Space Exploration, Ares I is a chief component of the cost-effective space transportation infrastructure being developed by NASA's Constellation Program. This transportation system will safely and reliably carry human explorers back to the moon, and then onward to Mars and other destinations in the solar system. The Ares I effort includes multiple project element teams at NASA centers and contract organizations around the nation, and is managed by the Exploration Launch Projects Office at NASA's Marshall Space Flight Center (MSFC). ATK Launch Systems near Brigham City, Utah, is the prime contractor for the first stage booster. ATK's subcontractor, United Space Alliance of Houston, is designing, developing and testing the parachutes at its facilities at NASA's Kennedy Space Center in Florida. NASA's Johnson Space Center in Houston hosts the Constellation Program and Orion Crew Capsule Project Office and provides test instrumentation and support personnel. Together, these teams are developing vehicle hardware, evolving proven technologies, and testing components and systems. Their work builds on powerful, reliable space shuttle propulsion elements and nearly a half-century of NASA space flight experience and technological advances. Ares I is an inline, two-stage rocket configuration topped by the Crew Exploration Vehicle, its service module, and a launch abort system. In this HD video image, an Ares I x-test shows the upper stage separating from the first stage. This particular test was conducted at the NASA Langley Research Center in July 2007. (Highest resolution available)

  1. Launch Vehicles

    NASA Image and Video Library

    2007-08-09

    Under the goals of the Vision for Space Exploration, Ares I is a chief component of the cost-effective space transportation infrastructure being developed by NASA's Constellation Program. This transportation system will safely and reliably carry human explorers back to the moon, and then onward to Mars and other destinations in the solar system. The Ares I effort includes multiple project element teams at NASA centers and contract organizations around the nation, and is managed by the Exploration Launch Projects Office at NASA's Marshall Space Flight Center (MSFC). ATK Launch Systems near Brigham City, Utah, is the prime contractor for the first stage booster. ATK's subcontractor, United Space Alliance of Houston, is designing, developing and testing the parachutes at its facilities at NASA's Kennedy Space Center in Florida. NASA's Johnson Space Center in Houston hosts the Constellation Program and Orion Crew Capsule Project Office and provides test instrumentation and support personnel. Together, these teams are developing vehicle hardware, evolving proven technologies, and testing components and systems. Their work builds on powerful, reliable space shuttle propulsion elements and nearly a half-century of NASA space flight experience and technological advances. Ares I is an inline, two-stage rocket configuration topped by the Crew Exploration Vehicle, its service module, and a launch abort system. In this HD video image, processes for upper stage barrel fabrication are taking place. The aluminum panels are manufacturing process demonstration articles that will undergo testing until perfected. The panels are built by AMRO Manufacturing located in El Monte, California. (Largest resolution available)

  2. Launch Vehicles

    NASA Image and Video Library

    2007-08-09

    Under the goals of the Vision for Space Exploration, Ares I is a chief component of the cost-effective space transportation infrastructure being developed by NASA's Constellation Program. This transportation system will safely and reliably carry human explorers back to the moon, and then onward to Mars and other destinations in the solar system. The Ares I effort includes multiple project element teams at NASA centers and contract organizations around the nation, and is managed by the Exploration Launch Projects Office at NASA's Marshall Space Flight Center (MSFC). ATK Launch Systems near Brigham City, Utah, is the prime contractor for the first stage booster. ATK's subcontractor, United Space Alliance of Houston, is designing, developing and testing the parachutes at its facilities at NASA's Kennedy Space Center in Florida. NASA's Johnson Space Center in Houston hosts the Constellation Program and Orion Crew Capsule Project Office and provides test instrumentation and support personnel. Together, these teams are developing vehicle hardware, evolving proven technologies, and testing components and systems. Their work builds on powerful, reliable space shuttle propulsion elements and nearly a half-century of NASA space flight experience and technological advances. Ares I is an inline, two-stage rocket configuration topped by the Crew Exploration Vehicle, its service module, and a launch abort system. This HD video image depicts the manufacturing of aluminum panels that will be used to form the Ares I barrel. The panels are manufacturing process demonstration articles that will undergo testing until perfected. The panels are built by AMRO Manufacturing located in El Monte, California. (Highest resolution available)

  3. n/a

    NASA Image and Video Library

    2007-08-09

    Under the goals of the Vision for Space Exploration, Ares I is a chief component of the cost-effective space transportation infrastructure being developed by NASA's Constellation Program. This transportation system will safely and reliably carry human explorers back to the moon, and then onward to Mars and other destinations in the solar system. The Ares I effort includes multiple project element teams at NASA centers and contract organizations around the nation, and is managed by the Exploration Launch Projects Office at NASA's Marshall Space Flight Center (MSFC). ATK Launch Systems near Brigham City, Utah, is the prime contractor for the first stage booster. ATK's subcontractor, United Space Alliance of Houston, is designing, developing and testing the parachutes at its facilities at NASA's Kennedy Space Center in Florida. NASA's Johnson Space Center in Houston hosts the Constellation Program and Orion Crew Capsule Project Office and provides test instrumentation and support personnel. Together, these teams are developing vehicle hardware, evolving proven technologies, and testing components and systems. Their work builds on powerful, reliable space shuttle propulsion elements and nearly a half-century of NASA space flight experience and technological advances. Ares I is an inline, two-stage rocket configuration topped by the Crew Exploration Vehicle, its service module, and a launch abort system. This HD video image depicts a manufactured panel that will be used for the Ares I upper stage barrel fabrication. The aluminum panels are manufacturing process demonstration articles that will undergo testing until perfected. The panels are built by AMRO Manufacturing located in El Monte, California. (Highest resolution available)

  4. FPGA-based protein sequence alignment : A review

    NASA Astrophysics Data System (ADS)

    Isa, Mohd. Nazrin Md.; Muhsen, Ku Noor Dhaniah Ku; Saiful Nurdin, Dayana; Ahmad, Muhammad Imran; Anuar Zainol Murad, Sohiful; Nizam Mohyar, Shaiful; Harun, Azizi; Hussin, Razaidi

    2017-11-01

    Sequence alignment has been optimized using several techniques that accelerate computation of the optimal score by implementing DP-based algorithms in hardware such as FPGA-based platforms. Hardware implementations face performance challenges such as frequent memory access and the highly data-dependent computation process. This paper therefore focuses on processing element (PE) configuration, which involves memory accesses to load the data (substitution matrix and query sequence characters), and on the PE configuration time. Previous works have taken various approaches to improving PE configuration performance, such as serial and parallel configuration chains, in which the configuration data are loaded into the PEs sequentially or simultaneously, respectively. Some researchers have proven that a parallel configuration chain improves both configuration time and area.
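
    The DP recurrence such PE arrays typically implement is Smith-Waterman local alignment; the abstract does not fix the exact variant, so the textbook linear-gap form is sketched here. In hardware, one PE is commonly assigned per query character, which is why each PE must be configured with its query character and substitution scores before scoring begins:

    ```python
    # Reference (software) form of the Smith-Waterman recurrence; on an FPGA,
    # each PE computes one row of H as the subject sequence streams past.
    def smith_waterman(query: str, subject: str, match=2, mismatch=-1, gap=-2) -> int:
        rows, cols = len(query) + 1, len(subject) + 1
        H = [[0] * cols for _ in range(rows)]
        best = 0
        for i in range(1, rows):
            for j in range(1, cols):
                s = match if query[i - 1] == subject[j - 1] else mismatch
                H[i][j] = max(0, H[i-1][j-1] + s, H[i-1][j] + gap, H[i][j-1] + gap)
                best = max(best, H[i][j])
        return best

    assert smith_waterman("HEAGAWGHEE", "PAWHEAE") > 0
    ```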

  5. Open-Source 3-D Platform for Low-Cost Scientific Instrument Ecosystem.

    PubMed

    Zhang, C; Wijnen, B; Pearce, J M

    2016-08-01

    The combination of open-source software and hardware provides technically feasible methods to create low-cost, highly customized scientific research equipment. Open-source 3-D printers have proven useful for fabricating scientific tools. Here the capabilities of an open-source 3-D printer are expanded to become a highly flexible scientific platform. An automated low-cost 3-D motion control platform is presented that has the capacity to perform scientific applications, including (1) 3-D printing of scientific hardware; (2) laboratory auto-stirring, measuring, and probing; (3) automated fluid handling; and (4) shaking and mixing. The open-source 3-D platform not only facilitates routine research while radically reducing the cost, but also inspires the creation of a diverse array of custom instruments that can be shared and replicated digitally throughout the world to drive down the cost of research and education further. © 2016 Society for Laboratory Automation and Screening.

  6. Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.
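
    The capture/playback-with-fault-injection capability can be pictured with a short sketch; the structure below is assumed for illustration and is not taken from the PHALT software:

    ```python
    # Replay recorded rig data to an algorithm under test, with an optional
    # modifier that injects a simulated fault into the data stream.
    def playback(records, algorithm, inject=None):
        """Feed (time, sample) records to a diagnostic algorithm, optionally faulted."""
        for t, sample in records:
            if inject is not None:
                sample = inject(t, sample)
            algorithm(t, sample)

    # Example: stick sensor channel 0 at a saturated value after t = 5 s and
    # count how often a simple range-check qualifier flags it.
    flags = []
    stuck = lambda t, s: ([4095.0] + s[1:]) if t >= 5.0 else s
    check = lambda t, s: flags.append((t, s[0] > 4000.0))
    playback([(t * 0.1, [100.0, 200.0]) for t in range(100)], check, inject=stuck)
    print(sum(f for _, f in flags), "samples flagged")
    ```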

  7. Modular and Reusable Power System Design for the BRRISON Balloon Telescope

    NASA Astrophysics Data System (ADS)

    Truesdale, Nicholas A.

    High altitude balloons are emerging as low-cost alternatives to orbital satellites in the field of telescopic observation. The near-space environment of balloons allows optics to perform near their diffraction limit. In practice, this implies that a telescope similar to the Hubble Space Telescope could be flown for a cost of tens of millions as opposed to billions. While highly feasible, the design of a balloon telescope to rival Hubble is limited by funding. Until a prototype is proven and more support for balloon science is gained, projects remain limited in both hardware costs and man hours. Thus, to effectively create and support balloon payloads, engineering designs must be efficient, modular, and if possible reusable. This thesis focuses specifically on a modular power system design for the BRRISON comet-observing balloon telescope. Time- and cost-saving techniques are developed that can be used for future missions. A modular design process is achieved through the development of individual circuit elements that span a wide range of capabilities. Circuits for power conversion, switching and sensing are designed to be combined in any configuration. These include DC-DC regulators, MOSFET drivers for switching, isolated switches, current sensors and voltage sensing ADCs. Emphasis is also given to commercially available hardware. Pre-fabricated DC-DC converters and an Arduino microcontroller simplify the design process and offer proven, cost-effective performance. The design of the BRRISON power system is developed from these low-level circuits elements. A board for main power distribution supports the majority of flight electronics, and is extensible to additional hardware in future applications. An ATX computer power supply is developed, allowing the use of a commercial ATX motherboard as the flight computer. The addition of new capabilities is explored in the form of a heater control board. Finally, the power system as a whole is described, and its overall performance analyzed. The success of the BRRISON power system during testing and flight proves its utility, both for BRRISON and for future balloon telescopes.

  8. Building a GPS Receiver for Space Lessons Learned

    NASA Technical Reports Server (NTRS)

    Sirotzky, Steve; Heckler, G. W.; Boegner, G.; Roman, J.; Wennersten, M.; Butler, R.; Davis, M.; Lanham, A.; Winternitz, L.; Thompson, W.

    2008-01-01

    Over the past 4 years the Component Systems and Hardware branch at NASA GSFC has pursued an in-house effort to build a unique space-flight GPS receiver. This effort has resulted in the Navigator GPS receiver. Navigator's first flight opportunity will come with the STS-125 HST-SM4 mission in August 2008. This paper covers the overall hardware design for the receiver and the difficulties encountered during the transition from the breadboard design to the final flight hardware design. Among the different lessons learned, the paper stresses the importance of selecting and verifying parts that are appropriate for space applications, as well as what happens when these parts are not accurately characterized by their datasheets. Additionally, the paper discusses what analysis needs to be performed when deciding system frequencies and filters. The presentation also covers how to prepare for thermal vacuum testing, and problems that may arise during vibration testing. It also describes the criteria to consider when determining which portions of a design to create in-house and which to license from a third party. Finally, the paper shows techniques which have proven to be extraordinarily helpful in debugging and analysis.

  9. GPU-based stochastic-gradient optimization for non-rigid medical image registration in time-critical applications

    NASA Astrophysics Data System (ADS)

    Bhosale, Parag; Staring, Marius; Al-Ars, Zaid; Berendsen, Floris F.

    2018-03-01

    Currently, non-rigid image registration algorithms are too computationally intensive to use in time-critical applications. Existing implementations that focus on speed typically address this by either parallelization on GPU hardware, or by introducing methodically novel techniques into CPU-oriented algorithms. Stochastic gradient descent (SGD) optimization and variations thereof have proven to drastically reduce the computational burden for CPU-based image registration, but have not been successfully applied on GPU hardware due to their stochastic nature. This paper proposes 1) NiftyRegSGD, an SGD optimization for the GPU-based image registration tool NiftyReg, and 2) the random chunk sampler, a new random sampling strategy that better utilizes the memory bandwidth of GPU hardware. Experiments were performed on 3D lung CT data from 19 patients, comparing NiftyRegSGD (with and without the random chunk sampler) with CPU-based elastix Fast Adaptive SGD (FASGD) and NiftyReg. The registration runtime was 21.5s, 4.4s and 2.8s for elastix-FASGD, NiftyRegSGD without, and NiftyRegSGD with random chunk sampling, respectively, while similar accuracy was obtained. Our method is publicly available at https://github.com/SuperElastix/NiftyRegSGD.
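
    One reading of the random chunk sampler (a sketch under our own assumptions, not the NiftyRegSGD source) is that each SGD iteration draws one contiguous block of voxels, so GPU threads read neighboring memory addresses (coalesced access) while the gradient estimate stays stochastic across iterations:

    ```python
    # Illustrative random-chunk sampling for SGD over a flattened image volume.
    import numpy as np

    def random_chunk(num_voxels: int, chunk_size: int, rng: np.random.Generator):
        start = rng.integers(0, num_voxels - chunk_size)
        return np.arange(start, start + chunk_size)   # contiguous voxel indices

    def sgd_step(params, grad_on, indices, lr=0.01):
        return params - lr * grad_on(indices)         # stochastic gradient step

    # Usage: one iteration over a flattened 64^3 volume with a dummy gradient.
    rng = np.random.default_rng(0)
    idx = random_chunk(64**3, 4096, rng)
    params = sgd_step(np.zeros(6), lambda i: np.ones(6) * i.size / 4096.0, idx)
    ```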

  10. Launch Vehicle Demonstrator Using Shuttle Assets

    NASA Technical Reports Server (NTRS)

    Threet, Grady E., Jr.; Creech, Dennis M.; Philips, Alan D.; Water, Eric D.

    2011-01-01

    The Marshall Space Flight Center Advanced Concepts Office (ACO) has the leading role for NASA's preliminary conceptual launch vehicle design and performance analysis. Over the past several years the ACO Earth-to-Orbit Team has evaluated thousands of launch vehicle concept variations for a multitude of studies including agency-wide efforts such as the Exploration Systems Architecture Study (ESAS), Constellation, Heavy Lift Launch Vehicle (HLLV), Heavy Lift Propulsion Technology (HLPT), Human Exploration Framework Team (HEFT), and Space Launch System (SLS). NASA plans to continue human space exploration and space station utilization. Launch vehicles used for heavy lift cargo and crew will be needed. One of the current leading concepts for future heavy lift capability is an inline one and a half stage concept using solid rocket boosters (SRB) and based on current Shuttle technology and elements. Potentially, the quickest and most cost-effective path towards an operational vehicle of this configuration is to make use of a demonstrator vehicle fabricated from existing shuttle assets and relying upon the existing STS launch infrastructure. Such a demonstrator would yield valuable proof-of-concept data and would provide a working test platform allowing for validated systems integration. Using shuttle hardware such as existing RS-25D engines and partial MPS, propellant tanks derived from the External Tank (ET) design and tooling, and four-segment SRBs could reduce the associated upfront development costs and schedule when compared to a concept that would rely on new propulsion technology and engine designs. There are potentially several other additional benefits to this demonstrator concept. Since a concept of this type would be based on man-rated, flight-proven hardware components, this demonstrator has the potential to evolve into the first iteration of a heavy lift crew or cargo vehicle and serve as a baseline for block upgrades. This vehicle could also serve as a demonstration and test platform for the Orion Program. Critical spacecraft systems, re-entry and recovery systems, and launch abort systems of Orion could also be demonstrated in early test flights of the launch vehicle demo. Furthermore, an early demonstrator of this type would provide a stop-gap for retaining critical human capital and infrastructure while affording the current emerging generation of young engineers the opportunity to work with and capture lessons learned from existing STS program offices and personnel, who were integral in the design and development of the Space Shuttle, before these resources are no longer available. The objective of this study is to define candidate launch vehicle demonstration concepts that are based on Space Shuttle assets and determine their performance capabilities and how these demonstration vehicles could evolve to a heavy lift capability to low earth orbit.

  11. Examining Evolving Performance on the Force Concept Inventory Using Factor Analysis

    ERIC Educational Resources Information Center

    Semak, M. R.; Dietz, R. D.; Pearson, R. H.; Willis, C. W

    2017-01-01

    The application of factor analysis to the "Force Concept Inventory" (FCI) has proven to be problematic. Some studies have suggested that factor analysis of test results serves as a helpful tool in assessing the recognition of Newtonian concepts by students. Other work has produced at best ambiguous results. For the FCI administered as a…

  12. Tried and True: Tested Ideas for Teaching and Learning from the Regional Educational Laboratories.

    ERIC Educational Resources Information Center

    Levinson, Luna; Stonehill, Robert

    This collection of 16 tested ideas for improving teaching and learning evolved from the work of the 1995 Proven Laboratory Practices Task Force charged with identifying and collecting the best and most useful work from the Regional Educational Laboratories. The Regional Educational Laboratory program is the largest research and development…

  13. Remote Attitude Measurement Sensor (RAMS)

    NASA Technical Reports Server (NTRS)

    Davis, H. W.

    1989-01-01

    Remote attitude measurement sensor (RAMS) offers a low-cost, low-risk, proven design concept that is based on mature, demonstrated space sensor technology. The electronic design concepts and interpolation algorithms were tested and proven in space hardware like the Retroreflector Field Tracker and various star trackers. The RAMS concept is versatile and has broad applicability to both ground testing and spacecraft needs. It is ideal for use as a precision laboratory sensor for structural dynamics testing. It requires very little set-up or preparation time and the output data is immediately usable without integration or extensive analysis efforts. For on-orbit use, RAMS rivals any other type of dynamic structural sensor (accelerometer, lidar, photogrammetric techniques, etc.) for overall performance, reliability, suitability, and cost. Widespread acceptance and extensive usage of RAMS will occur only after some interested agency, such as OAST, adopts the RAMS concept and provides the funding support necessary for further development and implementation of RAMS for a specific program.

  14. Around Marshall

    NASA Image and Video Library

    2006-07-14

    A model of the new Ares I crew launch vehicle, for which NASA is designing, testing and evaluating hardware and related systems, is seen here on display at the Marshall Space Flight Center (MSFC), in Huntsville, Alabama. The Ares I crew launch vehicle is the rocket that will carry a new generation of space explorers into orbit. Under the goals of the Vision for Space Exploration, Ares I is a chief component of the cost-effective space transportation infrastructure being developed by NASA’s Constellation Program. These transportation systems will safely and reliably carry human explorers back to the moon, and then onward to Mars and other destinations in the solar system. The Ares I effort includes multiple project element teams at NASA centers and contract organizations around the nation, and is led by the Exploration Launch Projects Office at NASA’s MSFC. Together, these teams are developing vehicle hardware, evolving proven technologies, and testing components and systems. Their work builds on powerful, reliable space shuttle propulsion elements and nearly a half-century of NASA space flight experience and technological advances. Ares I is an inline, two-stage rocket configuration topped by the Crew Exploration Vehicle, its service module and a launch abort system. The launch vehicle’s first stage is a single, five-segment reusable solid rocket booster derived from the Space Shuttle Program’s reusable solid rocket motor that burns a specially formulated and shaped solid propellant called polybutadiene acrylonitrile (PBAN). The second or upper stage will be propelled by a J-2X main engine fueled with liquid oxygen and liquid hydrogen. In addition to its primary mission of carrying crews of four to six astronauts to Earth orbit, the launch vehicle’s 25-ton payload capacity might be used for delivering cargo to space, bringing resources and supplies to the International Space Station or dropping payloads off in orbit for retrieval and transport to exploration teams on the moon. Crew transportation to the space station is planned to begin no later than 2014. The first lunar excursion is scheduled for the 2020 timeframe.

  15. A theory of viscoplasticity accounting for internal damage

    NASA Technical Reports Server (NTRS)

    Freed, A. D.; Robinson, D. N.

    1988-01-01

    A constitutive theory for use in structural and durability analyses of high temperature isotropic alloys is presented. Constitutive equations based upon a potential function are determined from conditions of stability and physical considerations. The theory is self-consistent; terms are not added in an ad hoc manner. It extends a proven viscoplastic model by introducing the Kachanov-Rabotnov concept of net stress. Material degradation and inelastic deformation are unified; they evolve simultaneously and interactively. Both isotropic hardening and material degradation evolve with dissipated work, which is the sum of inelastic work and internal work. Internal work is a continuum measure of the stored free energy resulting from inelastic deformation.
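
    For readers unfamiliar with the Kachanov-Rabotnov concept referenced above, the net (effective) stress is conventionally written in terms of a scalar damage variable; the form below is the standard textbook statement, not necessarily the exact notation of this paper:

        \tilde{\sigma} = \frac{\sigma}{1 - \omega}, \qquad 0 \le \omega < 1

    Here \sigma is the nominal applied stress and \omega is the damage variable; inelastic flow and damage growth are then driven by the net stress rather than the nominal stress, which is one way deformation and degradation can evolve interactively.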

  16. Designing for Reliability and Robustness

    NASA Technical Reports Server (NTRS)

    Svetlik, Randall G.; Moore, Cherice; Williams, Antony

    2017-01-01

    Long duration spaceflight has a negative effect on the human body, and exercise countermeasures are used on board the International Space Station (ISS) to combat these effects by minimizing bone and muscle loss. Given the importance of these hardware systems to the health of the crew, this equipment must continue to be readily available. Designing spaceflight exercise hardware to meet high reliability and availability standards has proven challenging throughout the time crewmembers have been living on the ISS, beginning in 2000. Furthermore, restoring operational capability after a failure is clearly time-critical, but can be problematic given the challenges of troubleshooting the problem from 220 miles away. Several best practices have been leveraged in seeking to maximize availability of these exercise systems, including designing for robustness, implementing diagnostic instrumentation, relying on user feedback, and providing ample maintenance and sparing. These factors have enhanced the reliability of hardware systems, and therefore have contributed to keeping the crewmembers healthy upon return to Earth. This paper will review the failure history for three spaceflight exercise countermeasure systems, identifying lessons learned that can help improve future systems. Specifically, the Treadmill with Vibration Isolation and Stabilization System (TVIS), the Cycle Ergometer with Vibration Isolation and Stabilization System (CEVIS), and the Advanced Resistive Exercise Device (ARED) will be reviewed and analyzed, and conclusions identified so as to provide guidance for improving future exercise hardware designs. These lessons learned, paired with thorough testing, offer a path towards reduced system down-time.
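
    Availability targets of the kind discussed above are commonly summarized by the steady-state availability A = MTBF / (MTBF + MTTR). The sketch below illustrates the relationship; the numbers are hypothetical, not ISS program data.

        def steady_state_availability(mtbf_hours: float, mttr_hours: float) -> float:
            """Fraction of time a repairable system is operational (standard formula)."""
            return mtbf_hours / (mtbf_hours + mttr_hours)

        # Hypothetical device: fails every 2000 h on average, repaired in 48 h
        print(f"{steady_state_availability(2000, 48):.3f}")  # ~0.977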

  17. Software-codec-based full motion video conferencing on the PC using visual pattern image sequence coding

    NASA Astrophysics Data System (ADS)

    Barnett, Barry S.; Bovik, Alan C.

    1995-04-01

    This paper presents a real time full motion video conferencing system based on the Visual Pattern Image Sequence Coding (VPISC) software codec. The prototype system hardware is comprised of two personal computers, two camcorders, two frame grabbers, and an Ethernet connection. The prototype system software has a simple structure. It runs under the Disk Operating System, and includes a user interface, a video I/O interface, an event driven network interface, and a free running or frame synchronous video codec that also acts as the controller for the video and network interfaces. Two video coders have been tested in this system. Simple implementations of Visual Pattern Image Coding and VPISC have both proven to support full motion video conferencing with good visual quality. Future work will concentrate on expanding this prototype to support the motion compensated version of VPISC, as well as encompassing point-to-point modem I/O and multiple network protocols. The application will be ported to multiple hardware platforms and operating systems. The motivation for developing this prototype system is to demonstrate the practicality of software based real time video codecs. Furthermore, software video codecs are not only cheaper, but are more flexible system solutions, because they enable different computer platforms to exchange encoded video information without requiring on-board protocol-compatible video codec hardware. Software based solutions enable true low cost video conferencing that fits the 'open systems' model of interoperability that is so important for building portable hardware and software applications.

  18. Qualification of Engineering Camera for Long-Duration Deep Space Missions

    NASA Technical Reports Server (NTRS)

    Ramesham, Rajeshuni; Maki, Justin N.; Pourangi, Ali M.; Lee, Steven W.

    2012-01-01

    Qualification and verification of advanced electronic packaging and interconnect technologies, and various other types of hardware elements for the Mars Exploration Rovers Spirit and Opportunity (MER)/Mars Science Laboratory (MSL) flight projects, has been performed to enhance mission assurance. The qualification of hardware (engineering camera) under extreme cold temperatures has been performed with reference to various Mars-related project requirements. The flight-like packages, sensors, and subassemblies were selected for the study to survive three times the total number of expected diurnal temperature cycles resulting from all environmental and operational exposures occurring over the life of the flight hardware, including all relevant manufacturing, ground operations, and mission phases. Qualification has been performed by subjecting the above flight-like hardware to the environmental temperature extremes and assessing any structural failures or degradation in electrical performance due to either overstress or thermal cycle fatigue. Engineering camera packaging designs, charge-coupled devices (CCDs), and temperature sensors were successfully qualified for MER and MSL per JPL design principles. Package failures were observed during the qualification process, and package redesigns were then made to enhance reliability and subsequent mission assurance. These results show the technology is promising for MSL, and especially for long-term missions to extreme temperature environments. The engineering camera has been completely qualified for the MSL project, with the proven ability to survive on Mars for 2010 sols, or 670 sols times three. Finally, the camera continued to be functional, even after 2010 thermal cycles.
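
    The three-times margin quoted above is simple bookkeeping: a nominal 670-sol surface mission, cycled once per sol, qualifies at 670 x 3 = 2010 thermal cycles. A trivial sketch of that calculation (function name hypothetical):

        def qualification_cycles(mission_sols: int, margin: int = 3) -> int:
            """Thermal-cycle qualification count: one diurnal cycle per sol, times margin."""
            return mission_sols * margin

        print(qualification_cycles(670))  # 2010, matching the camera qualification above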

  19. Fast interactive elastic registration of 12-bit multi-spectral images with subvoxel accuracy using display hardware

    NASA Astrophysics Data System (ADS)

    Noordmans, Herke Jan; de Roode, Rowland; Verdaasdonk, Rudolf

    2007-03-01

    Multi-spectral images of human tissue taken in-vivo often contain image alignment problems, as patients have difficulty retaining their posture during the acquisition time of 20 seconds. Previous attempts to correct motion errors with image registration software developed for MR or CT data proved too slow and error-prone for practical use with multi-spectral images. A new software package has been developed which allows the user to play a decisive role in the registration process: the user can monitor the progress of the registration continuously and force it in the right direction when it starts to fail. The software efficiently exploits video card hardware to gain speed and to provide a perfect subvoxel correspondence between the registration field and the display. An 8-bit graphics card was used to efficiently register and resample 12-bit images using the hardware interpolation modes present on the graphics card. To show the feasibility of this new registration process, the software was applied in clinical practice, evaluating the dosimetry for psoriasis and KTP laser treatment. The microscopic differences between images of normal skin and skin exposed to UV light proved that an affine registration step including zooming and slanting is critical for a subsequent elastic match to succeed. The combination of user-interactive registration software with optimal use of the potential of PC video card hardware greatly improves the speed of multi-spectral image registration.
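
    As a CPU-side illustration of the affine step described above (the paper's implementation runs on the graphics card), the sketch below warps an image under an affine matrix that combines zoom and slant, and scores alignment with mean squared error; scipy is used here purely for illustration, and all values are hypothetical.

        import numpy as np
        from scipy.ndimage import affine_transform

        def warp_affine(image, matrix, offset=(0.0, 0.0)):
            """Resample `image` under an affine map (zoom/slant in `matrix`)."""
            return affine_transform(image, matrix, offset=offset, order=1)  # linear interp

        def mse(a, b):
            return float(((a - b) ** 2).mean())

        rng = np.random.default_rng(0)
        ref = rng.random((64, 64))                  # stand-in reference image
        m = np.array([[1.02, 0.01],                 # slight zoom plus slant
                      [0.00, 1.02]])
        moving = warp_affine(ref, m)
        print(mse(ref, moving))                     # registration cost to minimize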

  20. Human-Robot Interaction: Status and Challenges.

    PubMed

    Sheridan, Thomas B

    2016-06-01

    The current status of human-robot interaction (HRI) is reviewed, and key current research challenges for the human factors community are described. Robots have evolved from continuous human-controlled master-slave servomechanisms for handling nuclear waste to a broad range of robots incorporating artificial intelligence for many applications and under human supervisory control. This mini-review describes HRI developments in four application areas and the challenges they pose for human factors research. In addition to a plethora of research papers, evidence of success is manifest in live demonstrations of robot capability under various forms of human control. HRI is a rapidly evolving field. Specialized robots under human teleoperation have proven successful in hazardous environments and medical applications, as have specialized telerobots under human supervisory control for space and repetitive industrial tasks. Research in areas of self-driving cars, intimate collaboration with humans in manipulation tasks, human control of humanoid robots for hazardous environments, and social interaction with robots is at initial stages. The efficacy of humanoid general-purpose robots has yet to be proven. HRI is now applied in almost all robot tasks, including manufacturing, space, aviation, undersea, surgery, rehabilitation, agriculture, education, package fetch and delivery, policing, and military operations. © 2016, Human Factors and Ergonomics Society.

  1. NASA's Earth Science Data Systems Standards Process Experiences

    NASA Technical Reports Server (NTRS)

    Ullman, Richard E.; Enloe, Yonsook

    2007-01-01

    NASA has impaneled several internal working groups to provide recommendations to NASA management on ways to evolve and improve Earth Science Data Systems. One of these working groups is the Standards Process Group (SPG). The SPG is drawn from NASA-funded Earth Science Data Systems stakeholders, and it directs a process of community review and evaluation of proposed NASA standards. The working group's goal is to promote interoperability and interuse of NASA Earth Science data through broader use of standards that have proven implementation and operational benefit to NASA Earth science, by facilitating NASA management endorsement of proposed standards. The SPG now has two years of experience with this approach to the identification of standards. We will discuss real examples of the different types of candidate standards that have been proposed to NASA's Standards Process Group, such as OPeNDAP's Data Access Protocol, the Hierarchical Data Format, and the Open Geospatial Consortium's Web Map Server. Each of the three types of proposals requires a different sort of criteria for understanding the broad concepts of "proven implementation" and "operational benefit" in the context of NASA Earth Science data systems. We will discuss how our Standards Process has evolved with our experiences with the three candidate standards.
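
    One of the candidate standards named above, OPeNDAP's Data Access Protocol, can be exercised from Python with the pydap client; the sketch below is a minimal illustration, and the endpoint URL and variable name are hypothetical.

        from pydap.client import open_url

        # Hypothetical OPeNDAP endpoint; any DAP-serving URL works the same way
        dataset = open_url("http://example.gov/opendap/hypothetical_dataset")

        print(list(dataset.keys()))        # variables advertised by the server
        sst = dataset["sea_surface_temp"]  # hypothetical variable name
        subset = sst[0, 0:10, 0:10]        # server-side subsetting: only this slab is transferred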

  2. Techniques of EMG signal analysis: detection, processing, classification and applications

    PubMed Central

    Hussain, M.S.; Mohd-Yasin, F.

    2006-01-01

    Electromyography (EMG) signals can be used for clinical/biomedical applications, Evolvable Hardware Chip (EHW) development, and modern human computer interaction. EMG signals acquired from muscles require advanced methods for detection, decomposition, processing, and classification. The purpose of this paper is to illustrate the various methodologies and algorithms for EMG signal analysis to provide efficient and effective ways of understanding the signal and its nature. We further point out some of the hardware implementations using EMG, focusing on applications related to prosthetic hand control, grasp recognition, and human computer interaction. A comparison study is also given to show the performance of various EMG signal analysis methods. This paper provides researchers a good understanding of the EMG signal and its analysis procedures. This knowledge will help them develop more powerful, flexible, and efficient applications. PMID:16799694
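
    A common first step in the EMG processing pipelines surveyed above is linear envelope extraction: rectify the raw signal, then low-pass filter it. The sketch below shows that step with scipy; the sampling rate and cutoff are hypothetical example values.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def emg_envelope(raw, fs=1000.0, cutoff=6.0):
            """Linear envelope of an EMG signal: full-wave rectification + low-pass filter."""
            rectified = np.abs(raw - raw.mean())   # remove DC offset, then rectify
            b, a = butter(4, cutoff / (fs / 2))    # 4th-order Butterworth low-pass
            return filtfilt(b, a, rectified)       # zero-phase filtering

        rng = np.random.default_rng(1)
        sig = rng.standard_normal(2000)
        sig[800:1200] *= 5                          # simulated muscle contraction burst
        env = emg_envelope(sig)                     # envelope peaks inside the burst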

  3. Reconfigurable HIL Testing of Earth Satellites

    NASA Technical Reports Server (NTRS)

    2008-01-01

    In recent years, hardware-in-the-loop (HIL) testing has carved a strong niche in several industries, such as automotive, aerospace, telecom, and consumer electronics. As desktop computers have realized gains in speed, memory size, and data storage capacity, hardware/software platforms have evolved into high performance, deterministic HIL platforms, capable of hosting the most demanding applications for testing components and subsystems. Using simulation software to emulate the digital and analog I/O signals of system components, engineers of all disciplines can now test new systems in realistic environments to evaluate their function and performance prior to field deployment. Within the aerospace industry, space-borne satellite systems are arguably some of the most demanding in terms of their requirement for custom engineering and testing. Typically, spacecraft are built one or a few at a time to fulfill a space science or defense mission. In contrast to other industries that can amortize the cost of HIL systems over thousands, even millions of units, spacecraft HIL systems have been built as one-of-a-kind solutions, expensive in terms of schedule, cost, and risk, to assure satellite and spacecraft systems reliability. The focus of this paper is to present a new approach to HIL testing for spacecraft systems that takes advantage of a highly flexible hardware/software architecture based on National Instruments PXI reconfigurable hardware and virtual instruments developed using LabVIEW. This new approach to HIL is based on a multistage/multimode spacecraft bus emulation development model called Reconfigurable Hardware In-the-Loop, or RHIL.
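
    The essence of the HIL pattern described above is a fixed-rate loop in which simulated I/O stands in for flight hardware. The sketch below is a purely software illustration (no real-time OS, no PXI chassis): an emulated sensor feeds a controller under test on each tick; the names and dynamics are hypothetical.

        import math

        def simulated_sensor(t):
            """Stand-in for an analog input channel: oscillating, slowly drifting signal."""
            return 0.5 * math.sin(0.8 * t) + 0.01 * t

        def controller_under_test(measurement, setpoint=0.0):
            """Trivial proportional controller standing in for the unit under test."""
            return 2.0 * (setpoint - measurement)

        dt, t, u = 0.01, 0.0, 0.0
        for _ in range(500):               # emulated real-time loop, fixed time step
            y = simulated_sensor(t)        # emulated analog input read
            u = controller_under_test(y)   # algorithm/device being exercised
            t += dt
        print(f"final command: {u:.3f}")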

  4. Experiences in the development of rotary joints for robotic manipulators in space applications

    NASA Technical Reports Server (NTRS)

    Priesett, Klaus

    1992-01-01

    European developments in robotics for space applications have resulted in human arm-like manipulators with six or more rotational degrees of freedom. The rotary joints, including their own electromechanical actuator and feedback sensors, must be very compact units. The specific joint concept, as it has evolved so far, is presented. The problems encountered during the first hardware development phases are covered at both the component and joint level.

  5. Fast Algorithms for Mining Co-evolving Time Series

    DTIC Science & Technology

    2011-09-01

    Keogh et al., 2001, 2004] and (b) forecasting, like an autoregressive integrated moving average model (ARIMA) and related methods [Box et al., 1994...computing hardware? We develop models to mine time series with missing values, to extract compact representation from time sequences, to segment the...sequences, and to do forecasting. For large scale data, we propose algorithms for learning time series models, in particular, including Linear Dynamical

  6. Organizing the HIV vaccine development effort.

    PubMed

    Voronin, Yegor; Snow, William

    2013-09-01

    To describe and compare the diverse organizational structures and funding mechanisms applied to advance HIV preventive vaccine research and development and to help explain and inform evolving infrastructures and collaborative funding models. On the basis of models that have been tried, improved or abandoned over three decades, the field seems to have settled into a relatively stable set of diverse initiatives, each with its own organizational signature. At the same time, this set of organizations is forging cross-organizational collaborations, which promise to acquire newly emergent beneficial properties. Strong motivation to expedite HIV vaccine R&D has driven a diversity of customized and inventive organizational approaches, largely government and foundation funded. Although no one approach has proven a panacea, the field has evolved into a constellation of often overlapping organizations that complement or reinforce one another. The Global HIV Vaccine Enterprise, a responsive, rapidly evolving loose infrastructure, is an innovative collaboration to catalyze that evolution.

  7. Organizing the HIV Vaccine Development Effort

    PubMed Central

    Voronin, Yegor; Snow, William

    2014-01-01

    Purpose of Review Describe and compare the diverse organizational structures and funding mechanisms applied to advance HIV preventive vaccine research and development, to help explain and inform evolving infrastructures and collaborative funding models. Recent Findings Based on models that have been tried, improved or abandoned over three decades, the field seems to have settled into a relatively stable set of diverse initiatives, each with its own organizational signature. At the same time, this set of organizations is forging cross-organizational collaborations, which promise to acquire newly emergent beneficial properties. Summary Strong motivation to expedite HIV vaccine R&D has driven a diversity of customized and inventive organizational approaches, largely government and foundation funded. While no one approach has proven a panacea, the field has evolved into a constellation of often overlapping organizations that complement or reinforce one another. The Global HIV Vaccine Enterprise, a responsive, rapidly evolving loose infrastructure, is an innovative collaboration to catalyze that evolution. PMID:23924997

  8. Power Hardware-in-the-Loop Evaluation of PV Inverter Grid Support on Hawaiian Electric Feeders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Austin A; Prabakar, Kumaraguru; Nagarajan, Adarsh

    As more grid-connected photovoltaic (PV) inverters become compliant with evolving interconnection requirements, there is increased interest from utilities in understanding how to best deploy advanced grid-support functions (GSF) in the field. One efficient and cost-effective method to examine such deployment options is to leverage power hardware-in-the-loop (PHIL) testing methods, which combine the fidelity of hardware tests with the flexibility of computer simulation. This paper summarizes a study wherein two Hawaiian Electric feeder models were converted to real-time models using an OPAL-RT real-time digital testing platform and integrated with models of GSF-capable PV inverters based on characterization test data. The integrated model was subsequently used in PHIL testing to evaluate the effects of different fixed power factor and volt-watt control settings on voltage regulation of the selected feeders using physical inverters. Selected results are presented in this paper, and complete results of this study were provided as inputs for field deployment and technical interconnection requirements for grid-connected PV inverters on the Hawaiian Islands.
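
    One of the grid-support functions evaluated above, volt-watt control, curtails inverter active power as terminal voltage rises. The sketch below implements a generic piecewise-linear volt-watt curve; the breakpoints are illustrative placeholders, not Hawaiian Electric settings.

        import numpy as np

        def volt_watt_limit(v_pu, v_start=1.06, v_zero=1.10):
            """Maximum active power (fraction of rated) permitted at voltage v_pu.

            Generic volt-watt droop: full output below v_start, linear ramp to
            zero output at v_zero. Breakpoints here are hypothetical settings.
            """
            return float(np.interp(v_pu, [v_start, v_zero], [1.0, 0.0]))

        for v in (1.00, 1.07, 1.09, 1.12):
            print(v, volt_watt_limit(v))   # 1.0, 0.75, 0.25, 0.0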

  9. Modeling and Simulation of the Economics of Mining in the Bitcoin Market.

    PubMed

    Cocco, Luisanna; Marchesi, Michele

    2016-01-01

    On January 3, 2009, Satoshi Nakamoto gave rise to the "Bitcoin Blockchain", creating the first block of the chain hashing on his computer's central processing unit (CPU). Since then, the hash calculations to mine Bitcoin have been getting more and more complex, and consequently the mining hardware has evolved to adapt to this increasing difficulty. Three generations of mining hardware have followed the CPU generation: the GPU, FPGA, and ASIC generations. This work presents an agent-based artificial market model of the Bitcoin mining process and of Bitcoin transactions. The goal of this work is to model the economy of the mining process, starting from the GPU generation, the first with economic significance. The model reproduces some "stylized facts" found in real price time series and some core aspects of the mining business. In particular, the computational experiments performed can reproduce the unit root property, the fat tail phenomenon and the volatility clustering of Bitcoin price series. In addition, under proper assumptions, they can reproduce the generation of Bitcoins, the hashing capability, the power consumption, and the mining hardware and electrical energy expenditures of the Bitcoin network.
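
    Per miner and per unit time, the economics modeled above reduce to an expected pro-rata share of block rewards minus electricity cost. The sketch below captures that accounting in its simplest form; every number and name is a hypothetical placeholder, not a parameter from the paper.

        def daily_mining_profit(my_hashrate, network_hashrate, blocks_per_day,
                                block_reward_btc, btc_price, power_kw,
                                electricity_per_kwh):
            """Expected daily profit: hash-share of block rewards minus energy cost."""
            revenue = ((my_hashrate / network_hashrate) * blocks_per_day
                       * block_reward_btc * btc_price)
            energy_cost = power_kw * 24 * electricity_per_kwh
            return revenue - energy_cost

        # Hypothetical rig, purely for illustration
        print(daily_mining_profit(my_hashrate=1e14, network_hashrate=1e20,
                                  blocks_per_day=144, block_reward_btc=6.25,
                                  btc_price=30000.0, power_kw=3.0,
                                  electricity_per_kwh=0.10))  # ~19.8 per day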

  10. Expanding Hardware-in-the-Loop Formation Navigation and Control with Radio Frequency Crosslink Ranging

    NASA Technical Reports Server (NTRS)

    Mitchell, Jason W.; Barbee, Brent W.; Baldwin, Philip J.; Luquette, Richard J.

    2007-01-01

    The Formation Flying Testbed (FFTB) at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) provides a hardware-in-the-loop test environment for formation navigation and control. The facility continues to evolve as a modular, hybrid, dynamic simulation facility for end-to-end guidance, navigation, and control (GN&C) design and analysis of formation flying spacecraft. The core capabilities of the FFTB, as a platform for testing critical hardware and software algorithms in-the-loop, are reviewed with a focus on recent improvements. With the most recent improvement, in support of Technology Readiness Level (TRL) 6 testing of the Inter-spacecraft Ranging and Alarm System (IRAS) for the Magnetospheric Multiscale (MMS) mission, the FFTB has significantly expanded its ability to perform realistic simulations that require Radio Frequency (RF) ranging sensors for relative navigation with the Path Emulator for RF Signals (PERFS). The PERFS, currently under development at NASA GSFC, modulates RF signals exchanged between spacecraft. The RF signals are modified to accurately reflect the dynamic environment through which they travel, including the effects of medium, moving platforms, and radiated power.
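
    The path-emulation idea above, delaying and Doppler-shifting a signal to match the dynamic geometry between spacecraft, can be illustrated at complex baseband. The toy sketch below applies a bulk delay and a constant Doppler rotation to a tone; it is a software stand-in for what PERFS does in hardware, with all parameters hypothetical.

        import numpy as np

        def emulate_path(x, fs, delay_s, doppler_hz, loss=1.0):
            """Toy RF path: integer-sample bulk delay, Doppler rotation, amplitude loss."""
            d = int(round(delay_s * fs))
            delayed = np.concatenate([np.zeros(d, dtype=complex), x[:len(x) - d]])
            n = np.arange(len(x))
            return loss * delayed * np.exp(2j * np.pi * doppler_hz * n / fs)

        fs = 1e6                                    # 1 MHz sample rate (hypothetical)
        n = np.arange(4096)
        tone = np.exp(2j * np.pi * 10e3 * n / fs)   # 10 kHz baseband tone
        # 3 km inter-spacecraft range -> 10 us light-time delay, 250 Hz Doppler
        rx = emulate_path(tone, fs, delay_s=3e3 / 3e8, doppler_hz=250.0, loss=0.5)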

  11. Space Launch System Spacecraft and Payload Elements: Making Progress Toward First Launch

    NASA Technical Reports Server (NTRS)

    Schorr, Andrew A.; Creech, Stephen D.

    2016-01-01

    Significant and substantial progress continues to be accomplished in the design, development, and testing of the Space Launch System (SLS), the most powerful human-rated launch vehicle the United States has ever undertaken. Designed to support human missions into deep space, SLS is one of three programs being managed by the National Aeronautics and Space Administration's (NASA's) Exploration Systems Development directorate. The Orion spacecraft program is developing a new crew vehicle that will support human missions beyond low Earth orbit, and the Ground Systems Development and Operations program is transforming Kennedy Space Center into a next-generation spaceport capable of supporting not only SLS but also multiple commercial users. Together, these systems will support human exploration missions into the proving ground of cislunar space and ultimately to Mars. SLS will deliver a near-term heavy-lift capability for the nation with its 70-metric-ton (t) Block 1 configuration, and will then evolve to an ultimate capability of 130 t. The SLS program marked a major milestone with the successful completion of the Critical Design Review, in which detailed designs were reviewed and subsequently approved for proceeding with full-scale production. This marks the first time an exploration-class vehicle has passed that major milestone since the Saturn V vehicle launched astronauts in the 1960s during the Apollo program. Each element of the vehicle now has flight hardware in production in support of the initial flight of the SLS -- Exploration Mission-1 (EM-1), an un-crewed mission to orbit the moon and return. Encompassing hardware qualification, structural testing to validate hardware compliance, and analytical modeling, progress is on track to meet the initial targeted launch date in 2018. In Utah and Mississippi, booster and engine testing are verifying upgrades made to proven shuttle hardware. At Michoud Assembly Facility in Louisiana, the world's largest spacecraft welding tool is producing tanks for the SLS core stage. This paper will particularly focus on work taking place at Marshall Space Flight Center (MSFC) and United Launch Alliance in Alabama, where upper stage and adapter elements of the vehicle are being constructed and tested. Providing the Orion crew capsule/launch vehicle interface and in-space propulsion via a cryogenic upper stage, the Spacecraft/Payload Integration and Evolution (SPIE) Element serves a key role in achieving SLS goals and objectives. The SPIE element marked a major milestone in 2014 with the first flight of original SLS hardware, the Orion Stage Adapter (OSA), which was used on Exploration Flight Test-1 with a design that will be used again on EM-1. Construction is already underway on the EM-1 Interim Cryogenic Propulsion Stage (ICPS), an in-space stage derived from the Delta Cryogenic Second Stage. Manufacture of the Orion Stage Adapter and the Launch Vehicle Stage Adapter is set to begin at the Friction Stir Facility located at MSFC, while structural test articles are either completed (OSA) or nearing completion (Launch Vehicle Stage Adapter). An overview is provided of the launch vehicle capabilities, with a specific focus on SPIE Element qualification/testing progress, as well as efforts to provide access to deep space regions currently not available to the science community through a secondary payload capability utilizing CubeSat-class satellites.

  12. Sentinel Lymph Node Biopsy in Breast Cancer: A Clinical Review and Update

    PubMed Central

    Haji, Altaf; Battoo, Azhar; Qurieshi, Mariya; Mir, Wahid; Shah, Mudasir

    2017-01-01

    Sentinel lymph node biopsy has become a standard staging tool in the surgical management of breast cancer. The positive impact of sentinel lymph node biopsy on postoperative negative outcomes in breast cancer patients, without compromising the oncological outcomes, is its major advantage. It has evolved over the last few decades and has proven its utility beyond early breast cancer. Its applicability and efficacy in patients with clinically positive axilla who have had a complete clinical response after neoadjuvant chemotherapy is being aggressively evaluated at present. This article discusses how sentinel lymph node biopsy has evolved and is becoming a useful tool in new clinical scenarios of breast cancer management. PMID:28970846

  13. Sentinel Lymph Node Biopsy in Breast Cancer: A Clinical Review and Update.

    PubMed

    Zahoor, Sheikh; Haji, Altaf; Battoo, Azhar; Qurieshi, Mariya; Mir, Wahid; Shah, Mudasir

    2017-09-01

    Sentinel lymph node biopsy has become a standard staging tool in the surgical management of breast cancer. The positive impact of sentinel lymph node biopsy on postoperative negative outcomes in breast cancer patients, without compromising the oncological outcomes, is its major advantage. It has evolved over the last few decades and has proven its utility beyond early breast cancer. Its applicability and efficacy in patients with clinically positive axilla who have had a complete clinical response after neoadjuvant chemotherapy is being aggressively evaluated at present. This article discusses how sentinel lymph node biopsy has evolved and is becoming a useful tool in new clinical scenarios of breast cancer management.

  14. Adaptive Instrument Module: Space Instrument Controller "Brain" through Programmable Logic Devices

    NASA Technical Reports Server (NTRS)

    Darrin, Ann Garrison; Conde, Richard; Chern, Bobbie; Luers, Phil; Jurczyk, Steve; Mills, Carl; Day, John H. (Technical Monitor)

    2001-01-01

    The Adaptive Instrument Module (AIM) will be the first true demonstration of reconfigurable computing with field-programmable gate arrays (FPGAs) in space, enabling the 'brain' of the system to evolve or adapt to changing requirements. In partnership with NASA Goddard Space Flight Center and the Australian Cooperative Research Centre for Satellite Systems (CRC-SS), APL has built the flight version to be flown on the Australian university-class satellite FEDSAT. The AIM provides satellites the flexibility to adapt to changing mission requirements by reconfiguring standardized processing hardware rather than incurring the large costs associated with new builds. This ability to reconfigure the processing in response to changing mission needs leads to true evolvable computing, wherein the instrument 'brain' can learn from new science data in order to perform state-of-the-art data processing. The development of the AIM is significant in its enormous potential to reduce total life-cycle costs for future space exploration missions. The advent of RAM-based FPGAs whose configuration can be changed at any time has enabled the development of the AIM for processing tasks that could not be performed in software. The use of the AIM enables reconfiguration of the FPGA circuitry while the spacecraft is in flight, with many accompanying advantages. The AIM demonstrates the practicalities of using reconfigurable computing hardware devices by conducting a series of designed experiments. These include demonstrations of data compression, data filtering, communication message processing, and inter-experiment data computation. The second generation is the Adaptive Processing Template (ADAPT), which is further described in this paper. The next step forward is to make the hardware itself adaptable, and the ADAPT pursues this challenge by developing a reconfigurable module that will be capable of functioning efficiently in various applications. ADAPT will take advantage of radiation tolerant RAM-based field programmable gate array (FPGA) technology to develop a reconfigurable processor that combines the flexibility of a general purpose processor running software with the performance of application specific processing hardware for a variety of high performance computing applications.

  15. Solid Rocket Booster (SRB) - Evolution and Lessons Learned During the Shuttle Program

    NASA Technical Reports Server (NTRS)

    Kanner, Howard S.; Freeland, Donna M.; Olson, Derek T.; Wood, T. David; Vaccaro, Mark V.

    2011-01-01

    The Solid Rocket Booster (SRB) element integrates all the subsystems needed for ascent flight, entry, and recovery of the combined Booster and Motor system. These include the structures, avionics, thrust vector control, pyrotechnic, range safety, deceleration, thermal protection, and retrieval systems. This represents the only human-rated, recoverable and refurbishable solid rocket ever developed and flown. Challenges included subsystem integration, thermal environments and severe loads (including water impact), sometimes resulting in hardware attrition. Several of the subsystems evolved during the program through design changes. These included the thermal protection system, range safety system, parachute/recovery system, and others. Obsolescence issues occasionally required component recertification. Because the system was recovered, the SRB was ideal for data and imagery acquisition, which proved essential for understanding loads and system response. The three main parachutes that lower the SRBs to the ocean are the largest parachutes ever designed, and the SRBs are the largest structures ever to be lowered by parachutes. SRB recovery from the ocean was a unique process and represented a significant operational challenge, requiring personnel, facilities, transportation, and ground support equipment. The SRB element achieved reliability via extensive system testing and checkout, redundancy management, and a thorough postflight assessment process. Assembly and integration of the booster subsystems was a unique process, and acceptance testing of reused hardware components was required for each build. Extensive testing was done to assure hardware functionality at each level of stage integration. Because the booster element is recoverable, subsystems were available for inspection and testing postflight, unique to the Shuttle launch vehicle. Problems were noted and corrective actions were implemented as needed. The postflight assessment process was quite detailed and constituted a significant portion of flight operations. The SRBs provided fully redundant critical systems, including thrust vector control, mission-critical pyrotechnics, avionics, and the parachute recovery system. The design intent was to lift off with full redundancy. On occasion, the redundancy management scheme was needed during flight operations. This paper describes some of the design challenges, how the design evolved with time, and key areas where hardware reusability contributed to improved system-level understanding.

  16. USDI DCS technical support: Mississippi Test Facility

    NASA Technical Reports Server (NTRS)

    Preble, D. M.

    1975-01-01

    The objective of the technical support effort is to provide hardware and data processing support to DCS users so that application of the system may be simply and effectively implemented. Technical support at Mississippi Test Facility (MTF) is concerned primarily with on-site hardware. The first objective of the DCP hardware support was to assure that standard measuring apparatus and techniques used by the USGS could be adapted to the DCS. The second objective was to try to standardize the miscellaneous variety of parameters into a standard instrument set. The third objective was to provide the necessary accessories to simplify the use and complement the capabilities of the DCP. The standard USGS sites have been interfaced and are presently operating. These sites are stream gauge, groundwater level, and line-operated quality-of-water sites. Evapotranspiration, meteorological, and battery-operated quality-of-water sites are planned for near-future DCP operation. Three accessories which are under test or development are the Chu antenna, the solar power supply, and the add-on memory. The DCP has proven to be relatively easy to interface with many monitors. The large antenna is awkward to install and transport. The DCS has met the original requirements well; it has proven, and is proving, that an operational, satellite-based data collection system is feasible.

  17. CASIS Fact Sheet: Hardware and Facilities

    NASA Technical Reports Server (NTRS)

    Solomon, Michael R.; Romero, Vergel

    2016-01-01

    Vencore is a proven information solutions, engineering, and analytics company that helps our customers solve their most complex challenges. For more than 40 years, we have designed, developed and delivered mission-critical solutions as our customers' trusted partner. The Engineering Services Contract, or ESC, provides engineering and design services to the NASA organizations engaged in development of new technologies at the Kennedy Space Center. Vencore is the ESC prime contractor, with teammates that include Stinger Ghaffarian Technologies, Sierra Lobo, Nelson Engineering, EASi, and Craig Technologies. The Vencore team designs and develops systems and equipment to be used for the processing of space launch vehicles, spacecraft, and payloads. We perform flight systems engineering for spaceflight hardware and software and develop technologies that serve NASA's mission requirements and operations needs for the future. Our Flight Payload Support (FPS) team at Kennedy Space Center (KSC) provides engineering, development, and certification services as well as payload integration and management services to NASA and commercial customers. Our main objective is to assist principal investigators (PIs) in integrating their science experiments into payload hardware for research aboard the International Space Station (ISS), commercial spacecraft, suborbital vehicles, parabolic flight aircraft, and ground-based studies. Vencore's FPS team is AS9100 certified and a recognized implementation partner for the Center for Advancement of Science in Space (CASIS).

  18. The TJO-OAdM robotic observatory: OpenROCS and dome control

    NASA Astrophysics Data System (ADS)

    Colomé, Josep; Francisco, Xavier; Ribas, Ignasi; Casteels, Kevin; Martín, Jonatan

    2010-07-01

    The Telescope Joan Oró at the Montsec Astronomical Observatory (TJO - OAdM) is a small-class observatory operating under completely unattended control. There are key problems to solve when robotic control is envisaged, involving both hardware and software issues. We present OpenROCS (ROCS stands for Robotic Observatory Control System), an open source platform developed for the robotic control of the TJO - OAdM and similar astronomical observatories. It is a complex software architecture, composed of several applications for hardware control, event handling, environment monitoring, target scheduling, the image reduction pipeline, etc. The code is developed in Java, C++, Python and Perl. The software infrastructure used is based on the Internet Communications Engine (Ice), an object-oriented middleware that provides object-oriented remote procedure call, grid computing, and publish/subscribe functionality. We also describe the subsystem in charge of dome control: several hardware and software elements developed specifically to protect the system at this identified single point of failure. It integrates redundant control and a rain detector signal for alarm triggering, and it responds autonomously in case communication with any of the control elements is lost (watchdog functionality). The self-developed control software suite (OpenROCS) and dome control system have proven to be highly reliable.
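
    The watchdog behavior described above, acting autonomously when contact with the control elements is lost, reduces to a heartbeat timeout. The sketch below is a minimal illustration of that pattern; the class and method names are hypothetical, and the real OpenROCS implementation (Ice middleware, redundant control, rain detector input) is far richer.

        import time

        class DomeWatchdog:
            """Close the dome if no heartbeat arrives within `timeout_s` (toy sketch)."""

            def __init__(self, timeout_s=30.0):
                self.timeout_s = timeout_s
                self.last_heartbeat = time.monotonic()

            def heartbeat(self):
                """Called by the control system while communication is healthy."""
                self.last_heartbeat = time.monotonic()

            def check(self):
                if time.monotonic() - self.last_heartbeat > self.timeout_s:
                    self.close_dome()

            def close_dome(self):
                print("communication lost: closing dome to protect the telescope")

        wd = DomeWatchdog(timeout_s=0.1)
        time.sleep(0.2)   # simulate losing contact with the control system
        wd.check()        # triggers the protective close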

  19. n/a

    NASA Image and Video Library

    2007-08-09

    Under the goals of the Vision for Space Exploration, Ares I is a chief component of the cost-effective space transportation infrastructure being developed by NASA's Constellation Program. This transportation system will safely and reliably carry human explorers back to the moon, and then onward to Mars and other destinations in the solar system. The Ares I effort includes multiple project element teams at NASA centers and contract organizations around the nation, and is managed by the Exploration Launch Projects Office at NASA's Marshall Space Flight Center (MSFC). ATK Launch Systems near Brigham City, Utah, is the prime contractor for the first stage booster. ATK's subcontractor, United Space Alliance of Houston, is designing, developing and testing the parachutes at its facilities at NASA's Kennedy Space Center in Florida. NASA's Johnson Space Center in Houston hosts the Constellation Program and Orion Crew Capsule Project Office and provides test instrumentation and support personnel. Together, these teams are developing vehicle hardware, evolving proven technologies, and testing components and systems. Their work builds on powerful, reliable space shuttle propulsion elements and nearly a half-century of NASA space flight experience and technological advances. Ares I is an inline, two-stage rocket configuration topped by the Crew Exploration Vehicle, its service module, and a launch abort system. This HD video image depicts friction stir welding used in manufacturing aluminum panels that will be used to fabricate the Ares I upper stage barrel. The aluminum panels are subjected to confidence panel tests during which the bent aluminum is stressed to the breaking point and thoroughly examined. The panels are manufactured by AMRO Manufacturing, located in El Monte, California. (Highest resolution available)

  20. Launch Vehicles

    NASA Image and Video Library

    2007-08-09

    Under the goals of the Vision for Space Exploration, Ares I is a chief component of the cost-effective space transportation infrastructure being developed by NASA's Constellation Program. This transportation system will safely and reliably carry human explorers back to the moon, and then onward to Mars and other destinations in the solar system. The Ares I effort includes multiple project element teams at NASA centers and contract organizations around the nation, and is managed by the Exploration Launch Projects Office at NASA's Marshall Space Flight Center (MSFC). ATK Launch Systems near Brigham City, Utah, is the prime contractor for the first stage booster. ATK's subcontractor, United Space Alliance of Houston, is designing, developing and testing the parachutes at its facilities at NASA's Kennedy Space Center in Florida. NASA's Johnson Space Center in Houston hosts the Constellation Program and Orion Crew Capsule Project Office and provides test instrumentation and support personnel. Together, these teams are developing vehicle hardware, evolving proven technologies, and testing components and systems. Their work builds on powerful, reliable space shuttle propulsion elements and nearly a half-century of NASA space flight experience and technological advances. Ares I is an inline, two-stage rocket configuration topped by the Crew Exploration Vehicle, its service module, and a launch abort system. This HD video image depicts friction stir welding used in manufacturing aluminum panels that will be used to fabricate the Ares I upper stage barrel. The panels are subjected to confidence tests in which the bent aluminum is stressed to the breaking point and thoroughly examined. The panels are manufactured by AMRO Manufacturing, located in El Monte, California. (Highest resolution available)

  1. n/a

    NASA Image and Video Library

    2007-08-09

    Under the goals of the Vision for Space Exploration, Ares I is a chief component of the cost-effective space transportation infrastructure being developed by NASA's Constellation Program. This transportation system will safely and reliably carry human explorers back to the moon, and then onward to Mars and other destinations in the solar system. The Ares I effort includes multiple project element teams at NASA centers and contract organizations around the nation, and is managed by the Exploration Launch Projects Office at NASA's Marshall Space Flight Center (MSFC). ATK Launch Systems near Brigham City, Utah, is the prime contractor for the first stage booster. ATK's subcontractor, United Space Alliance of Houston, is designing, developing and testing the parachutes at its facilities at NASA's Kennedy Space Center in Florida. NASA's Johnson Space Center in Houston hosts the Constellation Program and Orion Crew Capsule Project Office and provides test instrumentation and support personnel. Together, these teams are developing vehicle hardware, evolving proven technologies, and testing components and systems. Their work builds on powerful, reliable space shuttle propulsion elements and nearly a half-century of NASA space flight experience and technological advances. Ares I is an inline, two-stage rocket configuration topped by the Crew Exploration Vehicle, its service module, and a launch abort system. This HD video image depicts a manufactured aluminum panel that will be used to fabricate the Ares I upper stage barrel, undergoing a confidence panel test. In this test, the bent aluminum is stressed to the breaking point and thoroughly examined. The panels are manufactured by AMRO Manufacturing, located in El Monte, California.

  2. Launch Vehicles

    NASA Image and Video Library

    2007-08-09

    Under the goals of the Vision for Space Exploration, Ares I is a chief component of the cost-effective space transportation infrastructure being developed by NASA's Constellation Program. This transportation system will safely and reliably carry human explorers back to the moon, and then onward to Mars and other destinations in the solar system. The Ares I effort includes multiple project element teams at NASA centers and contract organizations around the nation, and is managed by the Exploration Launch Projects Office at NASA's Marshall Space Flight Center (MSFC). ATK Launch Systems near Brigham City, Utah, is the prime contractor for the first stage booster. ATK's subcontractor, United Space Alliance of Houston, is designing, developing and testing the parachutes at its facilities at NASA's Kennedy Space Center in Florida. NASA's Johnson Space Center in Houston hosts the Constellation Program and Orion Crew Capsule Project Office and provides test instrumentation and support personnel. Together, these teams are developing vehicle hardware, evolving proven technologies, and testing components and systems. Their work builds on powerful, reliable space shuttle propulsion elements and nearly a half-century of NASA space flight experience and technological advances. Ares I is an inline, two-stage rocket configuration topped by the Crew Exploration Vehicle, its service module, and a launch abort system. This HD video image depicts a manufactured aluminum panel that will be used to fabricate the Ares I upper stage barrel, undergoing a confidence panel test. In this test, the bent aluminum is stressed to the breaking point and thoroughly examined. The panels are manufactured by AMRO Manufacturing, located in El Monte, California. (Highest resolution available)

  3. Launch Vehicles

    NASA Image and Video Library

    2007-08-09

    Under the goals of the Vision for Space Exploration, Ares I is a chief component of the cost-effective space transportation infrastructure being developed by NASA's Constellation Program. This transportation system will safely and reliably carry human explorers back to the moon, and then onward to Mars and other destinations in the solar system. The Ares I effort includes multiple project element teams at NASA centers and contract organizations around the nation, and is managed by the Exploration Launch Projects Office at NASA's Marshall Space Flight Center (MSFC). ATK Launch Systems near Brigham City, Utah, is the prime contractor for the first stage booster. ATK's subcontractor, United Space Alliance of Houston, is designing, developing and testing the parachutes at its facilities at NASA's Kennedy Space Center in Florida. NASA's Johnson Space Center in Houston hosts the Constellation Program and Orion Crew Capsule Project Office and provides test instrumentation and support personnel. Together, these teams are developing vehicle hardware, evolving proven technologies, and testing components and systems. Their work builds on powerful, reliable space shuttle propulsion elements and nearly a half-century of NASA space flight experience and technological advances. Ares I is an inline, two-stage rocket configuration topped by the Crew Exploration Vehicle, its service module, and a launch abort system. This HD video image depicts a manufactured aluminum panel that will be used to fabricate the Ares I upper stage barrel, undergoing a confidence panel test. In this test, the bent aluminum is stressed to the breaking point and thoroughly examined. The panels are manufactured by AMRO Manufacturing, located in El Monte, California. (Highest resolution available)

  4. Launch Vehicles

    NASA Image and Video Library

    2006-08-09

    Under the goals of the Vision for Space Exploration, Ares I is a chief component of the cost-effective space transportation infrastructure being developed by NASA's Constellation Program. This transportation system will safely and reliably carry human explorers back to the moon, and then onward to Mars and other destinations in the solar system. The Ares I effort includes multiple project element teams at NASA centers and contract organizations around the nation, and is managed by the Exploration Launch Projects Office at NASA's Marshall Space Flight Center (MSFC). ATK Launch Systems near Brigham City, Utah, is the prime contractor for the first stage booster. ATK's subcontractor, United Space Alliance of Houston, is designing, developing and testing the parachutes at its facilities at NASA's Kennedy Space Center in Florida. NASA's Johnson Space Center in Houston hosts the Constellation Program and Orion Crew Capsule Project Office and provides test instrumentation and support personnel. Together, these teams are developing vehicle hardware, evolving proven technologies, and testing components and systems. Their work builds on powerful, reliable space shuttle propulsion elements and nearly a half-century of NASA space flight experience and technological advances. Ares I is an inline, two-stage rocket configuration topped by the Crew Exploration Vehicle, its service module, and a launch abort system. This HD video image depicts a manufactured aluminum panel that will be used to fabricate the Ares I upper stage barrel, undergoing a confidence panel test. In this test, the bent aluminum is stressed to the breaking point and thoroughly examined. The panels are manufactured by AMRO Manufacturing, located in El Monte, California. (Highest resolution available)

  5. Launch Vehicles

    NASA Image and Video Library

    2006-08-08

    Under the goals of the Vision for Space Exploration, Ares I is a chief component of the cost-effective space transportation infrastructure being developed by NASA's Constellation Program. This transportation system will safely and reliably carry human explorers back to the moon, and then onward to Mars and other destinations in the solar system. The Ares I effort includes multiple project element teams at NASA centers and contract organizations around the nation, and is managed by the Exploration Launch Projects Office at NASA's Marshall Space Flight Center (MSFC). ATK Launch Systems near Brigham City, Utah, is the prime contractor for the first stage booster. ATK's subcontractor, United Space Alliance of Houston, is designing, developing and testing the parachutes at its facilities at NASA's Kennedy Space Center in Florida. NASA's Johnson Space Center in Houston hosts the Constellation Program and Orion Crew Capsule Project Office and provides test instrumentation and support personnel. Together, these teams are developing vehicle hardware, evolving proven technologies, and testing components and systems. Their work builds on powerful, reliable space shuttle propulsion elements and nearly a half-century of NASA space flight experience and technological advances. Ares I is an inline, two-stage rocket configuration topped by the Crew Exploration Vehicle, its service module, and a launch abort system. This HD video image depicts a manufactured aluminum panel that will be used to fabricate the Ares I upper stage barrel, undergoing a confidence panel test. In this test, the bent aluminum is stressed to the breaking point and thoroughly examined. The panels are manufactured by AMRO Manufacturing, located in El Monte, California. (Highest resolution available)

  6. Stir Friction Welding Used in Ares I Upper Stage Fabrication

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Under the goals of the Vision for Space Exploration, Ares I is a chief component of the cost-effective space transportation infrastructure being developed by NASA's Constellation Program. This transportation system will safely and reliably carry human explorers back to the moon, and then onward to Mars and other destinations in the solar system. The Ares I effort includes multiple project element teams at NASA centers and contract organizations around the nation, and is managed by the Exploration Launch Projects Office at NASA's Marshall Space Flight Center (MSFC). ATK Launch Systems near Brigham City, Utah, is the prime contractor for the first stage booster. ATK's subcontractor, United Space Alliance of Houston, is designing, developing and testing the parachutes at its facilities at NASA's Kennedy Space Center in Florida. NASA's Johnson Space Center in Houston hosts the Constellation Program and Orion Crew Capsule Project Office and provides test instrumentation and support personnel. Together, these teams are developing vehicle hardware, evolving proven technologies, and testing components and systems. Their work builds on powerful, reliable space shuttle propulsion elements and nearly a half-century of NASA space flight experience and technological advances. Ares I is an inline, two-stage rocket configuration topped by the Crew Exploration Vehicle, its service module, and a launch abort system. This HD video image depicts friction stir welding used in manufacturing aluminum panels that will be used to fabricate the Ares I upper stage barrel. The panels are subjected to confidence tests in which the bent aluminum is stressed to the breaking point and thoroughly examined. The panels are manufactured by AMRO Manufacturing, located in El Monte, California. (Highest resolution available)

  7. Flexible software architecture for user-interface and machine control in laboratory automation.

    PubMed

    Arutunian, E B; Meldrum, D R; Friedman, N A; Moody, S E

    1998-10-01

    We describe a modular, layered software architecture for automated laboratory instruments. The design consists of a sophisticated user interface, a machine controller and multiple individual hardware subsystems, each interacting through a client-server architecture built entirely on top of open Internet standards. In our implementation, the user-interface components are built as Java applets that are downloaded from a server integrated into the machine controller. The user-interface client can thereby provide laboratory personnel with a familiar environment for experiment design through a standard World Wide Web browser. Data management and security are seamlessly integrated at the machine-controller layer using QNX, a real-time operating system. This layer also controls hardware subsystems through a second client-server interface. This architecture has proven flexible and relatively easy to implement and allows users to operate laboratory automation instruments remotely through an Internet connection. The software architecture was implemented and demonstrated on the Acapella, an automated fluid-sample-processing system that is under development at the University of Washington.
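
    The layered split described above (user-interface client, machine controller, hardware subsystems) can be illustrated with a minimal command channel. The sketch below uses plain TCP sockets in Python rather than the paper's Java/QNX stack, purely to show the shape of the controller layer; the command names are hypothetical.

        import socketserver

        class MachineControllerHandler(socketserver.StreamRequestHandler):
            """Toy machine-controller layer: parse a UI command, drive a fake subsystem."""

            COMMANDS = {"STATUS": lambda: "IDLE", "START": lambda: "RUN OK"}

            def handle(self):
                line = self.rfile.readline().decode().strip()  # one command per connection
                reply = self.COMMANDS.get(line, lambda: "ERR unknown command")()
                self.wfile.write((reply + "\n").encode())

        if __name__ == "__main__":
            # UI clients (browser-hosted applets in the paper) would connect here
            with socketserver.TCPServer(("127.0.0.1", 9000), MachineControllerHandler) as srv:
                srv.handle_request()   # demo: block until one client is served, then exit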

  8. Internet-based videoconferencing and data collaboration for the imaging community.

    PubMed

    Poon, David P; Langkals, John W; Giesel, Frederik L; Knopp, Michael V; von Tengg-Kobligk, Hendrik

    2011-01-01

    Internet protocol-based digital data collaboration with videoconferencing is not yet well utilized in the imaging community. Videoconferencing, combined with proven low-cost solutions, can provide reliable functionality and speed, which will improve rapid, time-saving, and cost-effective communications, within large multifacility institutions or globally with the unlimited reach of the Internet. The aim of this project was to demonstrate the implementation of a low-cost hardware and software setup that facilitates global data collaboration using WebEx and GoToMeeting Internet protocol-based videoconferencing software. Both products' features were tested and evaluated for feasibility across 2 different Internet networks, including a video quality and recording assessment. Cross-compatibility with an Apple OS is also noted in the evaluations. Departmental experiences with WebEx pertaining to clinical trials are also described. Real-time remote presentation of dynamic data was generally consistent across platforms. A reliable and inexpensive hardware and software setup for complete Internet-based data collaboration/videoconferencing can be achieved.

  9. Verification Test of Automated Robotic Assembly of Space Truss Structures

    NASA Technical Reports Server (NTRS)

    Rhodes, Marvin D.; Will, Ralph W.; Quach, Cuong C.

    1995-01-01

    A multidisciplinary program has been conducted at the Langley Research Center to develop operational procedures for supervised autonomous assembly of truss structures suitable for large-aperture antennas. The hardware and operations required to assemble a 102-member tetrahedral truss and attach 12 hexagonal panels were developed and evaluated. A brute-force automation approach was used to develop baseline assembly hardware and software techniques. However, as the system matured and operations were proven, upgrades were incorporated and assessed against the baseline test results. These upgrades included the use of distributed microprocessors to control dedicated end-effector operations, machine vision guidance for strut installation, and the use of an expert system-based executive-control program. This paper summarizes the developmental phases of the program, the results of several assembly tests, and a series of proposed enhancements. No problems that would preclude automated in-space assembly of truss structures have been encountered. The test system was developed at a breadboard level, and continued development at an enhanced level is warranted.

  10. An Evolvable Multi-Agent Approach to Space Operations Engineering

    NASA Technical Reports Server (NTRS)

    Mandutianu, Sanda; Stoica, Adrian

    1999-01-01

A complex system of spacecraft and ground tracking stations, as well as a constellation of satellites or spacecraft, has to be able to reliably withstand sudden environment changes, resource fluctuations, dynamic resource configuration, limited communication bandwidth, etc., while maintaining the consistency of the system as a whole. It is not known in advance when a change in the environment might occur or when a particular exchange will happen. A higher degree of sophistication for the communication mechanisms between different parts of the system is required. The actual behavior has to be determined while the system is performing, and the course of action can be decided at the individual level. Under such circumstances, the solution will benefit greatly from increased on-board and on-the-ground adaptability and autonomy. An evolvable architecture based on intelligent agents that communicate and cooperate with each other can offer advantages in this direction. This paper presents an architecture of an evolvable agent-based system (software and software/hardware hybrids) as well as some plans for further implementation.

  11. ICESat-2 laser technology development

    NASA Astrophysics Data System (ADS)

    Edwards, Ryan; Sawruk, Nick W.; Hovis, Floyd E.; Burns, Patrick; Wysocki, Theodore; Rudd, Joe; Walters, Brooke; Fakhoury, Elias; Prisciandaro, Vincent

    2013-09-01

A number of ICESat-2 system requirements drove the technology evolution and the system architecture for the laser transmitter Fibertek has developed for the mission. These requirements include the laser wall-plug efficiency, laser reliability, high PRF (10 kHz), short pulse width (<1.5 ns), relatively narrow spectral linewidth, and wavelength tunability. In response to these requirements, Fibertek developed a frequency-doubled, master oscillator/power amplifier (MOPA) laser that incorporates directly diode-pumped Nd:YVO4 as the gain medium. Another guiding force in the system design has been the extensive hardware life testing that Fibertek has completed. This ongoing hardware testing and development evolved the system from the original baseline brassboard design to the more robust flight laser system. The final design meets or exceeds all NASA requirements and is scalable to support future mission requirements.

  12. Space Station

    NASA Image and Video Library

    1971-01-01

This is an artist's concept of the Research and Applications Modules (RAM). Evolutionary growth was an important consideration in space station planning, and another project was undertaken in 1971 to facilitate such growth. The RAM study, conducted through a Marshall Space Flight Center contract with General Dynamics Convair Aerospace, resulted in the conceptualization of a series of RAM payload carriers: sortie laboratories, pallets, free-flyers, and payload and support modules. The study considered two basic manned systems. The first would use RAM hardware for sortie missions, where laboratories were carried into space and remained attached to the Shuttle for operational periods of up to 7 days. The second envisioned a modular space station capability that could be evolved by mating RAM modules to the space station core configuration. The RAM hardware was to be built by Europeans, thus fostering international participation in the space program.

  13. Build and Execute Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guan, Qiang

At exascale, the challenge becomes to develop applications that run at scale and use exascale platforms reliably, efficiently, and flexibly. Workflows become much more complex because they must seamlessly integrate simulation and data analytics. They must include down-sampling, post-processing, feature extraction, and visualization. Power and data transfer limitations require these analysis tasks to be run in-situ or in-transit. We expect successful workflows will comprise multiple linked simulations along with tens of analysis routines. Users will have limited development time at scale and, therefore, must have rich tools to develop, debug, test, and deploy applications. At this scale, successful workflows will compose linked computations from an assortment of reliable, well-defined computation elements, ones that can come and go as required, based on the needs of the workflow over time. We propose a novel framework that utilizes both virtual machines (VMs) and software containers to create a workflow system that establishes a uniform build and execution environment (BEE) beyond the capabilities of current systems. In this environment, applications will run reliably and repeatably across heterogeneous hardware and software. Containers, both commercial (Docker and Rocket) and open-source (LXC and LXD), define a runtime that isolates all software dependencies from the machine operating system. Workflows may contain multiple containers that run different operating systems, different software, and even different versions of the same software. We will run containers in open-source virtual machines (KVM) and emulators (QEMU) so that workflows run on any machine entirely in user-space. On this platform of containers and virtual machines, we will deliver workflow software that provides services, including repeatable execution, provenance, checkpointing, and future proofing. We will capture provenance about how containers were launched and how they interact to annotate workflows for repeatable and partial re-execution. We will coordinate the physical snapshots of virtual machines with parallel programming constructs, such as barriers, to automate checkpoint and restart. We will also integrate with HPC-specific container runtimes to gain access to accelerators and other specialized hardware to preserve native performance. Containers will link development to continuous integration. When application developers check code in, it will automatically be tested on a suite of different software and hardware architectures.
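
    A minimal sketch of the container-per-step workflow idea, assuming a local Docker CLI is installed; the image names, commands, and volume are hypothetical, and the BEE framework itself is not shown.

      import subprocess

      # Each workflow element runs in its own container, isolating its
      # software dependencies from the host operating system.
      STEPS = [
          ("simulate",   "example/sim:1.0",      ["run_sim", "--out", "/data/raw"]),
          ("downsample", "example/analysis:1.0", ["downsample", "/data/raw"]),
          ("visualize",  "example/viz:1.0",      ["render", "/data/raw"]),
      ]

      for name, image, cmd in STEPS:
          # --rm discards the container afterwards; -v shares a named volume
          # so linked computations can come and go as the workflow requires.
          result = subprocess.run(
              ["docker", "run", "--rm", "-v", "beedata:/data", image] + cmd,
              capture_output=True, text=True)
          print(name, "exit code:", result.returncode)  # crude provenance record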

  14. Use of Field Programmable Gate Array Technology in Future Space Avionics

    NASA Technical Reports Server (NTRS)

    Ferguson, Roscoe C.; Tate, Robert

    2005-01-01

Fulfilling NASA's new vision for space exploration requires the development of sustainable, flexible and fault tolerant spacecraft control systems. The traditional development paradigm consists of the purchase or fabrication of hardware boards with fixed processor and/or Digital Signal Processing (DSP) components interconnected via a standardized bus system. This is followed by the purchase and/or development of software. This paradigm has several disadvantages for the development of systems to support NASA's new vision. Building a system to be fault tolerant increases the complexity and decreases the performance of included software. Standard bus design and conventional implementation produces natural bottlenecks. Configuring hardware components in systems containing common processors and DSPs is difficult initially and expensive or impossible to change later. The existence of Hardware Description Languages (HDLs), the recent increase in performance, density and radiation tolerance of Field Programmable Gate Arrays (FPGAs), and Intellectual Property (IP) Cores provides the technology for reprogrammable Systems on a Chip (SOC). This technology supports a paradigm better suited for NASA's vision. Hardware and software production are melded for more effective development; they can both evolve together over time. Designers incorporating this technology into future avionics can benefit from its flexibility. Systems can be designed with improved fault isolation and tolerance using hardware instead of software. Also, these designs can be protected from obsolescence problems where maintenance is compromised via component and vendor availability. To investigate the flexibility of this technology, the cores of the Central Processing Unit and Input/Output Processor of the Space Shuttle AP101S Computer were prototyped in Verilog HDL and synthesized into an Altera Stratix FPGA.

  15. An integrated framework for high level design of high performance signal processing circuits on FPGAs

    NASA Astrophysics Data System (ADS)

    Benkrid, K.; Belkacemi, S.; Sukhsawas, S.

    2005-06-01

This paper proposes an integrated framework for the high level design of high performance signal processing algorithms' implementations on FPGAs. The framework emerged from a constant need to rapidly implement increasingly complicated algorithms on FPGAs while maintaining the high performance needed in many real time digital signal processing applications. This is particularly important for application developers who often rely on iterative and interactive development methodologies. The central idea behind the proposed framework is to dynamically integrate high performance structural hardware description languages with higher level hardware languages in order to help satisfy the dual requirement of high level design and high performance implementation. The paper illustrates this by integrating two environments: Celoxica's Handel-C language, and HIDE, a structural hardware environment developed at the Queen's University of Belfast. On the one hand, Handel-C has proven to be very useful in the rapid design and prototyping of FPGA circuits, especially control intensive ones. On the other hand, HIDE has been used extensively, and successfully, in the generation of highly optimised parameterisable FPGA cores. In this paper, this is illustrated in the construction of a scalable and fully parameterisable core for image algebra's five core neighbourhood operations, where fully floorplanned efficient FPGA configurations, in the form of EDIF netlists, are generated automatically for instances of the core. In the proposed combined framework, highly optimised data paths are invoked dynamically from within Handel-C, and are synthesized using HIDE. Although the idea might seem simple prima facie, it could have serious implications for the design of future generations of hardware description languages.

  16. Computer-Automated Evolution of Spacecraft X-Band Antennas

    NASA Technical Reports Server (NTRS)

Lohn, Jason D.; Hornby, Gregory S.; Linden, Derek S.

    2010-01-01

A document discusses the use of computer-aided evolution in arriving at a design for X-band communication antennas for NASA's three Space Technology 5 (ST5) satellites, which were launched on March 22, 2006. Two evolutionary algorithms, incorporating different representations of the antenna design and different fitness functions, were used to automatically design and optimize an X-band antenna design. A set of antenna designs satisfying initial ST5 mission requirements was evolved by use of these algorithms. The two best antennas - one from each evolutionary algorithm - were built. During flight-qualification testing of these antennas, the mission requirements were changed. After minimal changes in the evolutionary algorithms - mostly in the fitness functions - new antenna designs satisfying the changed mission requirements were evolved, and within one month of the change two new antennas were designed, and prototypes were built and tested. One of these newly evolved antennas was approved for deployment on the ST5 mission, and flight-qualified versions of this design were built and installed on the spacecraft. At the time of writing the document, these antennas were the first computer-evolved hardware in outer space.
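
    A toy evolutionary loop illustrating the approach; the genome encoding and fitness function below are illustrative stand-ins for the antenna representations and electromagnetic-simulation scoring used for ST5.

      import random

      GENOME_LEN = 8            # e.g., wire segment lengths/angles (illustrative)
      POP_SIZE, GENERATIONS = 50, 100

      def fitness(genome):
          # Hypothetical stand-in for simulating the antenna and scoring
          # its gain pattern against mission requirements.
          return -sum((g - 0.5) ** 2 for g in genome)

      def mutate(genome, rate=0.1):
          return [g + random.gauss(0, 0.05) if random.random() < rate else g
                  for g in genome]

      population = [[random.random() for _ in range(GENOME_LEN)]
                    for _ in range(POP_SIZE)]
      for _ in range(GENERATIONS):
          population.sort(key=fitness, reverse=True)
          parents = population[:POP_SIZE // 2]        # truncation selection
          population = parents + [mutate(random.choice(parents))
                                  for _ in range(POP_SIZE - len(parents))]
      print(max(population, key=fitness))

    Under this scheme a requirements change amounts to rewriting fitness(), which mirrors how new designs were re-evolved within a month of the ST5 requirements change.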

  17. On the Accuracy and Parallelism of GPGPU-Powered Incremental Clustering Algorithms.

    PubMed

    Chen, Chunlei; He, Li; Zhang, Huixiang; Zheng, Hao; Wang, Lei

    2017-01-01

    Incremental clustering algorithms play a vital role in various applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering raise high demand on computing power of the hardware platform. Parallel computing is a common solution to meet this demand. Moreover, General Purpose Graphic Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, the incremental clustering algorithm is facing a dilemma between clustering accuracy and parallelism when they are powered by GPGPU. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering like evolving granularity. Second, we formally proved two theorems. The first theorem proves the relation between clustering accuracy and evolving granularity. Additionally, this theorem analyzes the upper and lower bounds of different-to-same mis-affiliation. Fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity. Smaller work-depth means superior parallelism. Through the proofs, we conclude that accuracy of an incremental clustering algorithm is negatively related to evolving granularity while parallelism is positively related to the granularity. Thus the contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm. Experiment results verified theoretical conclusions.
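
    A minimal incremental-clustering sketch in which a merge radius plays the role of the paper's evolving granularity; the data and threshold are illustrative.

      def incremental_cluster(stream, radius):
          # 'radius' acts as the granularity knob: a coarser granularity
          # (larger radius) exposes more independent work per update but
          # raises the chance of different-to-same mis-affiliation.
          centers = []
          for x in stream:
              best = min(centers, key=lambda c: abs(c - x), default=None)
              if best is not None and abs(best - x) <= radius:
                  centers[centers.index(best)] = (best + x) / 2  # absorb point
              else:
                  centers.append(x)                              # new cluster
          return centers

      print(incremental_cluster([1.0, 1.2, 5.0, 5.1, 9.7], radius=0.5))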

  18. Real-World Evolution of Robot Morphologies: A Proof of Concept.

    PubMed

    Jelisavcic, Milan; de Carlo, Matteo; Hupkes, Elte; Eustratiadis, Panagiotis; Orlowski, Jakub; Haasdijk, Evert; Auerbach, Joshua E; Eiben, A E

    2017-01-01

    Evolutionary robotics using real hardware has been almost exclusively restricted to evolving robot controllers, but the technology for evolvable morphologies is advancing quickly. We discuss a proof-of-concept study to demonstrate real robots that can reproduce. Following a general system plan, we implement a robotic habitat that contains all system components in the simplest possible form. We create an initial population of two robots and run a complete life cycle, resulting in a new robot, parented by the first two. Even though the individual steps are simplified to the maximum, the whole system validates the underlying concepts and provides a generic workflow for the creation of more complex incarnations. This hands-on experience provides insights and helps us elaborate on interesting research directions for future development.

  19. Electronic delay ignition module for single bridgewire Apollo standard initiator

    NASA Technical Reports Server (NTRS)

    Ward, R. D.

    1975-01-01

    An engineering model and a qualification model of the EDIM were constructed and tested to Scout flight qualification criteria. The qualification model incorporated design improvements resulting from the engineering model tests. Compatibility with single bridgewire Apollo standard initiator (SBASI) was proven by test firing forty-five (45) SBASI's with worst case voltage and temperature conditions. The EDIM was successfully qualified for Scout flight application with no failures during testing of the qualification unit. Included is a method of implementing the EDIM into Scout vehicle hardware and the ground support equipment necessary to check out the system.

  20. A survey of SAT solver

    NASA Astrophysics Data System (ADS)

    Gong, Weiwei; Zhou, Xu

    2017-06-01

In Computer Science, the Boolean Satisfiability Problem (SAT) is the problem of determining whether there exists an interpretation that satisfies a given Boolean formula. SAT was one of the first problems proven to be NP-complete, and it is fundamental to artificial intelligence, algorithm design, and hardware design. This paper reviews the main SAT solver algorithms of recent years, including serial SAT algorithms, parallel SAT algorithms, SAT algorithms based on GPUs, and SAT algorithms based on FPGAs. The development of SAT solving is analyzed comprehensively in this paper. Finally, several possible directions for the development of the SAT problem are proposed.
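
    For reference, a compact serial DPLL-style solver, the core recursion on which the surveyed parallel, GPU, and FPGA approaches build; clauses use the common list-of-integer-literals encoding (a negative integer is a negated variable).

      def dpll(clauses, assignment=()):
          # Unit propagation: repeatedly satisfy single-literal clauses.
          while True:
              units = [c[0] for c in clauses if len(c) == 1]
              if not units:
                  break
              lit = units[0]
              assignment += (lit,)
              clauses = [[l for l in c if l != -lit]
                         for c in clauses if lit not in c]
          if not clauses:
              return assignment            # all clauses satisfied
          if any(len(c) == 0 for c in clauses):
              return None                  # conflict: an empty clause
          lit = clauses[0][0]              # naive branching heuristic
          for choice in (lit, -lit):
              result = dpll(clauses + [[choice]], assignment)
              if result is not None:
                  return result
          return None

      # (x1 or x2) and (not x1 or x2) and (not x2 or x3)
      print(dpll([[1, 2], [-1, 2], [-2, 3]]))   # e.g. (1, 2, 3)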

  1. Safety considerations in the design and operation of large wind turbines

    NASA Technical Reports Server (NTRS)

    Reilly, D. H.

    1979-01-01

The engineering and safety techniques used to assure the reliable and safe operation of large wind turbine generators are described, using the Mod 2 Wind Turbine System Program as an example. The techniques involve a careful definition of the wind turbine's natural and operating environments, use of proven structural design criteria and analysis techniques, an evaluation of potential failure modes and hazards, and use of a fail-safe and redundant-component engineering philosophy. The role of an effective quality assurance program, tailored to specific hardware criticality, and the checkout and validation program developed to assure system integrity are described.

  2. Rejoice in unexpected gifts from parrots and butterflies

    NASA Astrophysics Data System (ADS)

    Lakhtakia, Akhlesh

    2016-04-01

    New biological structures usually evolve from gradual modifications of old structures. Sometimes, biological structures contain hidden features, possibly vestigial. In addition to learning about functionalities, mechanisms, and structures readily apparent in nature, one must be alive to hidden features that could be useful. This aspect of engineered biomimicry is exemplified by two optical structures of psittacine and lepidopteran provenances. In both examples, a schemochrome is hidden by pigments.

  3. Enhanced Traceability for Bulk Processing of Sentinel-Derived Information Products

    NASA Astrophysics Data System (ADS)

    Lankester, Thomas; Hubbard, Steven; Knowelden, Richard

    2016-08-01

The advent of widely available, systematically acquired and advanced Earth observations from the Sentinel platforms is spurring development of a wide range of derived information products. Whilst welcome, this rapid rate of development inevitably leads to some processing instability as algorithms and production steps are required to evolve accordingly. To mitigate this instability, the provenance of EO-derived information products needs to be traceable and transparent. Airbus Defence and Space (Airbus DS) has developed the Airbus Processing Cloud (APC) as a virtualised processing farm for bulk production of EO-derived data and information products. The production control system of the APC transforms internal configuration control information into an INSPIRE metadata file containing a stepwise set of processing steps and data source elements that provide the complete and transparent provenance of each product generated.
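
    The stepwise provenance idea can be pictured as a machine-readable record attached to each product; the field names below are hypothetical and do not reproduce the actual INSPIRE schema.

      import datetime
      import json

      # Hypothetical stepwise provenance record for one derived product.
      record = {
          "product_id": "S2_NDVI_EXAMPLE_PRODUCT",       # illustrative ID
          "generated": datetime.datetime.now(datetime.timezone.utc).isoformat(),
          "sources": ["S2_L1C_EXAMPLE_SOURCE_GRANULE"],
          "processing_steps": [
              {"step": 1, "name": "atmospheric_correction", "version": "2.3.1"},
              {"step": 2, "name": "cloud_masking",          "version": "1.0.4"},
              {"step": 3, "name": "ndvi",                   "version": "1.1.0"},
          ],
      }
      print(json.dumps(record, indent=2))   # shipped alongside the product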

  4. Transforming healthcare delivery: Why and how accountable care organizations must evolve.

    PubMed

    Chen, Christopher T; Ackerly, D Clay; Gottlieb, Gary

    2016-09-01

Accountable care organizations (ACOs) have shown promise in reducing healthcare spending growth, but have proven to be financially unsustainable for many healthcare organizations. Even ACOs with shared savings have experienced overall losses because the shared savings bonuses have not covered the costs of delivering population health. As physicians and former ACO leaders, we believe in the concept of accountable care, but ACOs need to evolve if they are to have a viable future. We propose the novel possibility of allowing ACOs to bill fee-for-service for their population health interventions, a concept we call population health billing. Journal of Hospital Medicine 2016;11:658-661. © 2016 Society of Hospital Medicine.

  5. Investigating the Incorporation of Personality Constructs into IMPRINT

    DTIC Science & Technology

    2009-02-01

early in the system acquisition process. The U.S. Navy took the lead by developing the HARDMAN Comparability Methodology (HCM) to analyze the trade space between hardware and manpower. Subsequently, the U.S. Army adapted HCM, renamed HARDMAN I, to include a broader range of weapon systems. A subsequent evolution by the U.S. Army automated the process and was called HARDMAN II. In the mid to late 1980s HARDMAN II evolved, linking MPT to

  6. Interactive Particle Visualization

    NASA Astrophysics Data System (ADS)

    Gribble, Christiaan P.

    Particle-based simulation methods are used to model a wide range of complex phenomena and to solve time-dependent problems of various scales. Effective visualizations of the resulting state will communicate subtle changes in the three-dimensional structure, spatial organization, and qualitative trends within a simulation as it evolves. This chapter discusses two approaches to interactive particle visualization that satisfy these goals: one targeting desktop systems equipped with programmable graphics hardware, and the other targeting moderately sized multicore systems using packet-based ray tracing.

  7. Framework for architecture-independent run-time reconfigurable applications

    NASA Astrophysics Data System (ADS)

    Lehn, David I.; Hudson, Rhett D.; Athanas, Peter M.

    2000-10-01

Configurable Computing Machines (CCMs) have emerged as a technology with the computational benefits of custom ASICs as well as the flexibility and reconfigurability of general-purpose microprocessors. Significant effort from the research community has focused on techniques to move this reconfigurability from a rapid application development tool to a run-time tool. This requires the ability to change the hardware design while the application is executing and is known as Run-Time Reconfiguration (RTR). Widespread acceptance of run-time reconfigurable custom computing depends upon the existence of high-level automated design tools. Such tools must reduce the designer's effort to port applications between different platforms as the architecture, hardware, and software evolve. A Java implementation of a high-level application framework, called Janus, is presented here. In this environment, developers create Java classes that describe the structural behavior of an application. The framework allows hardware and software modules to be freely mixed and interchanged. A compilation phase of the development process analyzes the structure of the application and adapts it to the target platform. Janus is capable of structuring the run-time behavior of an application to take advantage of the memory and computational resources available.

  8. Modeling and Simulation of the Economics of Mining in the Bitcoin Market

    PubMed Central

    Marchesi, Michele

    2016-01-01

On January 3, 2009, Satoshi Nakamoto gave rise to the “Bitcoin Blockchain”, creating the first block of the chain by hashing on his computer’s central processing unit (CPU). Since then, the hash calculations to mine Bitcoin have been getting more and more complex, and consequently the mining hardware has evolved to adapt to this increasing difficulty. Three generations of mining hardware have followed the CPU generation: the GPU, FPGA, and ASIC generations. This work presents an agent-based artificial market model of the Bitcoin mining process and of Bitcoin transactions. The goal of this work is to model the economy of the mining process, starting from the GPU generation, the first with economic significance. The model reproduces some “stylized facts” found in real-time price series and some core aspects of the mining business. In particular, the computational experiments performed can reproduce the unit-root property, the fat-tail phenomenon, and the volatility clustering of Bitcoin price series. In addition, under proper assumptions, they can reproduce the generation of Bitcoins, the hashing capability, the power consumption, and the mining hardware and electrical energy expenditures of the Bitcoin network. PMID:27768691
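
    A toy difficulty-adjustment loop capturing one core feedback of the mining economy; all constants are made up, and the published agent-based model is far richer.

      # Toy model: profitable mining attracts more hardware, and difficulty
      # retargets so that blocks keep arriving at a constant rate.
      hash_rate, difficulty = 1.0, 1.0                   # arbitrary units
      price, block_reward, power_cost = 10.0, 50.0, 2.0  # made-up constants

      for epoch in range(10):
          blocks_found = hash_rate / difficulty          # relative block yield
          revenue = blocks_found * block_reward * price
          profit = revenue - power_cost * hash_rate      # electricity bill
          if profit > 0:
              hash_rate *= 1.3        # miners invest in better hardware
          difficulty = hash_rate      # retarget to hold block time constant
          print(f"epoch {epoch}: hash_rate={hash_rate:.2f} profit={profit:.2f}")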

  9. Power Hardware-in-the-Loop Evaluation of PV Inverter Grid Support on Hawaiian Electric Feeders: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Austin; Prabakar, Kumaraguru; Nagarajan, Adarsh

As more grid-connected photovoltaic (PV) inverters become compliant with evolving interconnection requirements, there is increased interest from utilities in understanding how to best deploy advanced grid-support functions (GSF) in the field. One efficient and cost-effective method to examine such deployment options is to leverage power hardware-in-the-loop (PHIL) testing methods. Two Hawaiian Electric feeder models were converted to real-time models in the OPAL-RT real-time digital testing platform, and integrated with models of GSF-capable PV inverters that were modeled from characterization test data. The integrated model was subsequently used in PHIL testing to evaluate the effects of different fixed power factor and volt-watt control settings on voltage regulation of the selected feeders. The results of this study were provided as inputs for field deployment and technical interconnection requirements for grid-connected PV inverters on the Hawaiian Islands.
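
    The volt-watt function mentioned above maps over-voltage to active-power curtailment; a minimal sketch with illustrative setpoints (not the settings evaluated in the study) follows.

      def volt_watt_limit(v_pu, v_start=1.03, v_full=1.06):
          # Allowed active-power fraction for a measured per-unit voltage:
          # full output below v_start, linear ramp to zero at v_full.
          if v_pu <= v_start:
              return 1.0
          if v_pu >= v_full:
              return 0.0
          return 1.0 - (v_pu - v_start) / (v_full - v_start)

      for v in (1.00, 1.04, 1.05, 1.07):
          print(f"V = {v:.2f} pu -> P limit = {volt_watt_limit(v):.0%}")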

  10. The evolution of image-guided lumbosacral spine surgery.

    PubMed

    Bourgeois, Austin C; Faulkner, Austin R; Pasciak, Alexander S; Bradley, Yong C

    2015-04-01

Techniques and approaches of spinal fusion have considerably evolved since their first description in the early 1900s. The incorporation of pedicle screw constructs into lumbosacral spine surgery is among the most significant advances in the field, offering immediate stability and decreased rates of pseudarthrosis compared to previously described methods. However, early studies describing pedicle screw fixation, and numerous studies thereafter, have demonstrated clinically significant sequelae of inaccurate placement of surgical fusion hardware. A number of image guidance systems have been developed to reduce morbidity from hardware malposition in increasingly complex spine surgeries. Advanced image guidance systems such as intraoperative stereotaxis improve the accuracy of pedicle screw placement across a variety of surgical approaches; however, their clinical indications and clinical impact remain debated. Beginning with intraoperative fluoroscopy, this article describes the evolution of image-guided lumbosacral spinal fusion, emphasizing two-dimensional (2D) and three-dimensional (3D) navigational methods.

  11. Learning and optimization with cascaded VLSI neural network building-block chips

    NASA Technical Reports Server (NTRS)

    Duong, T.; Eberhardt, S. P.; Tran, M.; Daud, T.; Thakoor, A. P.

    1992-01-01

    To demonstrate the versatility of the building-block approach, two neural network applications were implemented on cascaded analog VLSI chips. Weights were implemented using 7-b multiplying digital-to-analog converter (MDAC) synapse circuits, with 31 x 32 and 32 x 32 synapses per chip. A novel learning algorithm compatible with analog VLSI was applied to the two-input parity problem. The algorithm combines dynamically evolving architecture with limited gradient-descent backpropagation for efficient and versatile supervised learning. To implement the learning algorithm in hardware, synapse circuits were paralleled for additional quantization levels. The hardware-in-the-loop learning system allocated 2-5 hidden neurons for parity problems. Also, a 7 x 7 assignment problem was mapped onto a cascaded 64-neuron fully connected feedback network. In 100 randomly selected problems, the network found optimal or good solutions in most cases, with settling times in the range of 7-100 microseconds.
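
    A software sketch of limited-precision gradient descent on the two-input parity (XOR) problem, quantizing weights after every update roughly as a 7-bit MDAC synapse would; the architecture and step sizes are illustrative, not those of the chip.

      import numpy as np

      rng = np.random.default_rng(0)
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      y = np.array([[0], [1], [1], [0]], dtype=float)    # two-input parity

      def quantize(w, bits=7, w_max=4.0):
          # Mimic an MDAC synapse: clip, then round to 2**bits levels.
          step = 2 * w_max / (2 ** bits - 1)
          return np.clip(np.round(w / step) * step, -w_max, w_max)

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      W1 = rng.normal(0.0, 1.0, (2, 4))    # input -> 4 hidden neurons
      W2 = rng.normal(0.0, 1.0, (4, 1))    # hidden -> output neuron
      for _ in range(20000):
          h = sigmoid(X @ W1)
          out = sigmoid(h @ W2)
          d2 = (out - y) * out * (1 - out)        # output delta
          d1 = (d2 @ W2.T) * h * (1 - h)          # backpropagated delta
          W2 = quantize(W2 - 0.5 * (h.T @ d2))    # quantized weight updates
          W1 = quantize(W1 - 0.5 * (X.T @ d1))
      print(out.round(2).ravel())  # convergence is seed-dependent at low precision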

  12. ProvenCare: Geisinger's Model for Care Transformation through Innovative Clinical Initiatives and Value Creation.

    PubMed

    2009-04-01

Geisinger's system of care can be seen as a microcosm of the national delivery of healthcare, with implications for decision makers in other health plans. In this interview, Dr Ronald A. Paulus focuses on Geisinger's unique approach to patient care. At its core, this approach represents a system of quality and value initiatives based on 3 major programs - Proven Health Navigation (medical home), the ProvenCare model, and transitions of care. The goal of such an approach is to optimize disease management by using a rational reimbursement paradigm for appropriate interventions, providing innovative incentives, and engaging patients in their own care as part of any intervention. Dr Paulus explains the reasons why, unlike Geisinger, other stakeholders, including payers, providers, patients, and employers, have no intrinsic reason to be concerned with quality and value initiatives. In addition, he says, an electronic infrastructure that can be modified as management paradigms evolve is a necessary tool to ensure the healthcare delivery system's ability to adapt to new clinical realities quickly, ensuring the continued delivery of best value for all stakeholders.

  13. Real time mitigation of atmospheric turbulence in long distance imaging using the lucky region fusion algorithm with FPGA and GPU hardware acceleration

    NASA Astrophysics Data System (ADS)

    Jackson, Christopher Robert

    "Lucky-region" fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm selects sharp regions of an image obtained from a series of short exposure frames, and fuses the sharp regions into a final, improved image. In previous research, the LRF algorithm had been implemented on a PC using the C programming language. However, the PC did not have sufficient sequential processing power to handle the real-time extraction, processing, and reduction required when the LRF algorithm was applied to real-time video from fast, high-resolution image sensors. This thesis describes two hardware implementations of the LRF algorithm to achieve real-time image processing. The first was created with a VIRTEX-7 field programmable gate array (FPGA). The other was developed using the graphics processing unit (GPU) of an NVIDIA GeForce GTX 690 video card. The novelty in the FPGA approach is the creation of a "black box" LRF video processing system with a general Camera Link input, a user control interface, and a Camera Link video output. We also describe a custom hardware simulation environment we have built to test the FPGA LRF implementation. The advantage of the GPU approach is significantly improved development time, integration of image stabilization into the system, and comparable atmospheric turbulence mitigation.

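    A NumPy sketch of the per-block selection at the heart of LRF: score each block of every short-exposure frame with a gradient-energy sharpness metric and keep the sharpest; the block size and metric are illustrative.

      import numpy as np

      def lrf_fuse(frames, block=16):
          # frames: (N, H, W) stack of short-exposure images; H and W are
          # assumed divisible by 'block' for simplicity.
          n, h, w = frames.shape
          fused = np.empty((h, w), frames.dtype)
          for i in range(0, h, block):
              for j in range(0, w, block):
                  tiles = frames[:, i:i + block, j:j + block]
                  gy, gx = np.gradient(tiles.astype(float), axis=(1, 2))
                  sharpness = (gx ** 2 + gy ** 2).sum(axis=(1, 2))
                  fused[i:i + block, j:j + block] = tiles[sharpness.argmax()]
          return fused

      stack = np.random.rand(8, 64, 64)    # stand-in for camera frames
      print(lrf_fuse(stack).shape)
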
  14. Space Launch System Development Status

    NASA Technical Reports Server (NTRS)

    Lyles, Garry

    2014-01-01

Development of NASA's Space Launch System (SLS) heavy lift rocket is shifting from the formulation phase into the implementation phase in 2014, a little more than three years after formal program approval. Current development is focused on delivering a vehicle capable of launching 70 metric tons (t) into low Earth orbit. This "Block 1" configuration will launch the Orion Multi-Purpose Crew Vehicle (MPCV) on its first autonomous flight beyond the Moon and back in December 2017, followed by its first crewed flight in 2021. SLS can evolve to a 130-t lift capability and serve as a baseline for numerous robotic and human missions ranging from a Mars sample return to delivering the first astronauts to explore another planet. Benefits associated with its unprecedented mass and volume include reduced trip times and simplified payload design. Every SLS element achieved significant, tangible progress over the past year. Among the Program's many accomplishments are: manufacture of Core Stage test panels; testing of Solid Rocket Booster development hardware including thrust vector controls and avionics; planning for testing the RS-25 Core Stage engine; and more than 4,000 wind tunnel runs to refine vehicle configuration, trajectory, and guidance. The Program shipped its first flight hardware - the Multi-Purpose Crew Vehicle Stage Adapter (MSA) - to the United Launch Alliance for integration with the Delta IV heavy rocket that will launch an Orion test article in 2014 from NASA's Kennedy Space Center. Objectives of this Earth-orbit flight include validating the performance of Orion's heat shield and the MSA design, which will be manufactured again for SLS missions to deep space. The Program successfully completed Preliminary Design Review in 2013 and Key Decision Point C in early 2014. NASA has authorized the Program to move forward to Critical Design Review, scheduled for 2015, and a December 2017 first launch. The Program's success to date is due to prudent use of proven technology, infrastructure, and workforce from the Saturn and Space Shuttle programs, a streamlined management approach, and judicious use of new technologies. The result is a safe, affordable, sustainable, and evolutionary path to development of an unprecedented capability for future missions across the solar system. In an environment of economic challenges, the nationwide SLS team continues to meet ambitious budget and schedule targets. This paper will discuss SLS program and technical accomplishments over the past year and provide a look at the milestones and challenges ahead.

  15. NASA's Space Launch System Development Status

    NASA Technical Reports Server (NTRS)

    Lyles, Garry

    2014-01-01

Development of the National Aeronautics and Space Administration's (NASA's) Space Launch System (SLS) heavy lift rocket is shifting from the formulation phase into the implementation phase in 2014, a little more than 3 years after formal program establishment. Current development is focused on delivering a vehicle capable of launching 70 metric tons (t) into low Earth orbit. This "Block 1" configuration will launch the Orion Multi-Purpose Crew Vehicle (MPCV) on its first autonomous flight beyond the Moon and back in December 2017, followed by its first crewed flight in 2021. SLS can evolve to a 130-t lift capability and serve as a baseline for numerous robotic and human missions ranging from a Mars sample return to delivering the first astronauts to explore another planet. Benefits associated with its unprecedented mass and volume include reduced trip times and simplified payload design. Every SLS element achieved significant, tangible progress over the past year. Among the Program's many accomplishments are: manufacture of core stage test barrels and domes; testing of Solid Rocket Booster development hardware including thrust vector controls and avionics; planning for RS-25 core stage engine testing; and more than 4,000 wind tunnel runs to refine vehicle configuration, trajectory, and guidance. The Program shipped its first flight hardware - the Multi-Purpose Crew Vehicle Stage Adapter (MSA) - to the United Launch Alliance for integration with the Delta IV heavy rocket that will launch an Orion test article in 2014 from NASA's Kennedy Space Center. The Program successfully completed Preliminary Design Review in 2013 and will complete Key Decision Point C in 2014. NASA has authorized the Program to move forward to Critical Design Review, scheduled for 2015, and a December 2017 first launch. The Program's success to date is due to prudent use of proven technology, infrastructure, and workforce from the Saturn and Space Shuttle programs, a streamlined management approach, and judicious use of new technologies. The result is a safe, affordable, sustainable, and evolutionary path to development of an unprecedented capability for future missions across the solar system. In an environment of economic challenges, the nationwide SLS team continues to meet ambitious budget and schedule targets. This paper will discuss SLS Program and technical accomplishments over the past year and provide a look at the milestones and challenges ahead.

  16. Swarming Robot Design, Construction and Software Implementation

    NASA Technical Reports Server (NTRS)

    Stolleis, Karl A.

    2014-01-01

This paper presents an overview of the hardware design, construction, software design, and software implementation of a small, low-cost robot to be used for swarming-robot development. In addition to the work done on the robot, a full simulation of the robotic system was developed using the Robot Operating System (ROS) and its associated simulation tools. The eventual use of the robots will be the exploration of evolving behaviors via genetic algorithms, building on the work done at the University of New Mexico Biological Computation Lab.

  17. The evolution of automated launch processing

    NASA Technical Reports Server (NTRS)

    Tomayko, James E.

    1988-01-01

The NASA Launch Processing System (LPS) described here has arrived at satisfactory solutions to the distributed-computing, user-interface, dissimilar-hardware-interface, and automation problems that emerge in the specific arena of spacecraft launch preparations. An aggressive effort was made to apply the lessons learned in the 1960s, during the first attempts at automatic launch vehicle checkout, to the LPS. As the Space Shuttle system continues to evolve, the primary contributor to safety and reliability will be the LPS.

  18. Feasibility study of an Integrated Program for Aerospace-vehicle Design (IPAD) system. Volume 4: Design of the IPAD system. Part 1: IPAD system design requirements, phase 1, task 2

    NASA Technical Reports Server (NTRS)

    Garrocq, C. A.; Hurley, M. J.

    1973-01-01

    System requirements, software elements, and hardware equipment required for an IPAD system are defined. An IPAD conceptual design was evolved, a potential user survey was conducted, and work loads for various types of interactive terminals were projected. Various features of major host computing systems were compared, and target systems were selected in order to identify the various elements of software required.

  19. The Symbiotic Relationship between Scientific Workflow and Provenance (Invited)

    NASA Astrophysics Data System (ADS)

    Stephan, E.

    2010-12-01

The purpose of this presentation is to describe the symbiotic nature of scientific workflows and provenance. We will also discuss the current trends and real-world challenges facing these two distinct research areas. Although motivated differently, the needs of the international science communities are the glue that binds this relationship together. Understanding and articulating the science drivers to these communities is paramount as these technologies evolve and mature. Originally conceived for managing business processes, workflows are now becoming invaluable assets in both computational and experimental sciences. These reconfigurable, automated systems provide essential technology to perform complex analyses by coupling together geographically distributed disparate data sources and applications. As a result, workflows are capable of higher throughput in a shorter amount of time than performing the steps manually. Today many different workflow products exist; these could include Kepler and Taverna or similar products like MeDICI, developed at PNNL, that are standardized on the Business Process Execution Language (BPEL). Provenance, originating from the French term "provenir" ("to come from"), is used to describe the curation process of artwork as art is passed from owner to owner. The concept of provenance was adopted by digital libraries as a means to track the lineage of documents while standards such as Dublin Core began to emerge. In recent years the systems science community has increasingly expressed the need to expand the concept of provenance to formally articulate the history of scientific data. Communities such as the International Provenance and Annotation Workshop (IPAW) have formalized a provenance data model, the Open Provenance Model, and the W3C is hosting a provenance incubator group featuring the Proof Markup Language. Although both workflows and provenance have risen from different communities and operate independently, their mutual success is tied together, forming a symbiotic relationship where research and development advances in one effort can provide tremendous benefits to the other. For example, automating provenance extraction within scientific applications is still a relatively new concept; the workflow engine provides the framework to capture application-specific operations, inputs, and resulting data. It provides a description of the process history and data flow by wrapping workflow components around the applications and data sources. On the other hand, a lack of cooperation between workflows and provenance can inhibit the usefulness of both to science. Blindly tracking the execution history without having a true understanding of what kinds of questions end users may have makes the provenance indecipherable to the target users. Over the past nine years PNNL has been actively involved in provenance research in support of computational chemistry, molecular dynamics, biology, hydrology, and climate. PNNL has also been actively involved in efforts by the international community to develop open standards for provenance and the development of architectures to support provenance capture, storage, and querying. This presentation will provide real-world use cases of how provenance and workflow can be leveraged and implemented to meet different needs and the challenges that lie ahead.
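
    The "workflow engine captures provenance" idea in miniature: a decorator records each step's inputs, outputs, and timing as the step runs, without changing the step itself; the record fields are illustrative.

      import functools
      import json
      import time

      PROVENANCE = []   # a real system would use a queryable provenance store

      def traced(step):
          # Wrap a workflow component so its execution history is captured
          # automatically as a side effect of running the workflow.
          @functools.wraps(step)
          def wrapper(*args, **kwargs):
              t0 = time.time()
              result = step(*args, **kwargs)
              PROVENANCE.append({
                  "step": step.__name__,
                  "inputs": repr((args, kwargs)),
                  "output": repr(result),
                  "seconds": round(time.time() - t0, 4),
              })
              return result
          return wrapper

      @traced
      def downsample(values, factor):
          return values[::factor]

      downsample(list(range(10)), 2)
      print(json.dumps(PROVENANCE, indent=2))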

  20. Solving the Software Legacy Problem with RISA

    NASA Astrophysics Data System (ADS)

    Ibarra, A.; Gabriel, C.

    2012-09-01

Nowadays, hardware and system infrastructure evolve on time scales much shorter than the typical duration of space astronomy missions. Data processing software capabilities have to evolve to preserve the scientific return during the entire experiment lifetime. Software preservation is a key issue that has to be tackled before the end of a project to keep the data usable over many years. We present RISA (Remote Interface to Science Analysis) as a solution that decouples the data processing software and infrastructure life-cycles, using Java applications and web-service wrappers around existing software. This architecture employs embedded SAS in virtual machines, assuring a homogeneous job execution environment. We also present the first studies to reactivate the data processing software of the EXOSAT mission, the first ESA X-ray astronomy mission, launched in 1983, using the generic RISA approach.

  1. Cheminformatics and the Semantic Web: adding value with linked data and enhanced provenance

    PubMed Central

    Frey, Jeremy G; Bird, Colin L

    2013-01-01

    Cheminformatics is evolving from being a field of study associated primarily with drug discovery into a discipline that embraces the distribution, management, access, and sharing of chemical data. The relationship with the related subject of bioinformatics is becoming stronger and better defined, owing to the influence of Semantic Web technologies, which enable researchers to integrate heterogeneous sources of chemical, biochemical, biological, and medical information. These developments depend on a range of factors: the principles of chemical identifiers and their role in relationships between chemical and biological entities; the importance of preserving provenance and properly curated metadata; and an understanding of the contribution that the Semantic Web can make at all stages of the research lifecycle. The movements toward open access, open source, and open collaboration all contribute to progress toward the goals of integration. PMID:24432050
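
    A small linked-data sketch using the rdflib package (an assumption; the article names no specific toolkit), tying a chemical identifier to a biological target with a provenance-style assertion; the URIs are hypothetical.

      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import RDF

      EX = Namespace("http://example.org/chem/")   # hypothetical vocabulary
      g = Graph()
      g.bind("ex", EX)

      aspirin = URIRef(EX["CHEM_aspirin"])
      g.add((aspirin, RDF.type, EX.Compound))
      g.add((aspirin, EX.inchiKey, Literal("BSYNRYMUTXBXSQ-UHFFFAOYSA-N")))
      g.add((aspirin, EX.inhibits, URIRef(EX["TARGET_COX1"])))
      g.add((aspirin, EX.assertedBy, Literal("curated dataset v1")))  # provenance

      print(g.serialize(format="turtle"))   # rdflib >= 6 returns a str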

  2. Missileborne Artificial Vision System (MAVIS)

    NASA Technical Reports Server (NTRS)

    Andes, David K.; Witham, James C.; Miles, Michael D.

    1994-01-01

Several years ago, when INTEL and China Lake designed the ETANN chip, analog VLSI appeared to be the only way to do high density neural computing. In the last five years, however, digital parallel processing chips capable of performing neural computation functions have evolved to the point of rough equality with analog chips in system level computational density. The Naval Air Warfare Center, China Lake, has developed a real time hardware and software system designed to implement and evaluate biologically inspired retinal and cortical models. The hardware is based on the Adaptive Solutions Inc. massively parallel CNAPS system COHO boards. Each COHO board is a standard size 6U VME card featuring 256 fixed point, RISC processors running at 20 MHz in a SIMD configuration. Each COHO board has a companion board built to support a real time VSB interface to an imaging seeker, an NTSC camera, and to other COHO boards. The system is designed to have multiple SIMD machines each performing different corticomorphic functions. The system level software has been developed which allows a high level description of corticomorphic structures to be translated into the native microcode of the CNAPS chips. Corticomorphic structures are those neural structures with a form similar to that of the retina, the lateral geniculate nucleus, or the visual cortex. This real time hardware system is designed to be shrunk into a volume compatible with air-launched tactical missiles. Initial versions of the software and hardware have been completed and are in the early stages of integration with a missile seeker.

  3. Launch Vehicles

    NASA Image and Video Library

    2007-08-09

Under the goals of the Vision for Space Exploration, Ares I is a chief component of the cost-effective space transportation infrastructure being developed by NASA's Constellation Program. This transportation system will safely and reliably carry human explorers back to the moon, and then onward to Mars and other destinations in the solar system. The Ares I effort includes multiple project element teams at NASA centers and contract organizations around the nation, and is managed by the Exploration Launch Projects Office at NASA's Marshall Space Flight Center (MSFC). ATK Launch Systems near Brigham City, Utah, is the prime contractor for the first stage booster. ATK's subcontractor, United Space Alliance of Houston, is designing, developing and testing the parachutes at its facilities at NASA's Kennedy Space Center in Florida. NASA's Johnson Space Center in Houston hosts the Constellation Program and Orion Crew Capsule Project Office and provides test instrumentation and support personnel. Together, these teams are developing vehicle hardware, evolving proven technologies, and testing components and systems. Their work builds on powerful, reliable space shuttle propulsion elements and nearly a half-century of NASA space flight experience and technological advances. Ares I is an inline, two-stage rocket configuration topped by the Crew Exploration Vehicle, its service module, and a launch abort system. This HD video image depicts the preparation and placement of a confidence ring for friction stir welding used in manufacturing aluminum panels that will be used to fabricate the Ares I upper stage barrel. The aluminum panels are manufactured and subjected to confidence tests during which the bent aluminum is stressed to the breaking point and thoroughly examined. The panels are manufactured by AMRO Manufacturing located in El Monte, California. (Highest resolution available)

  4. Stir Friction Welding Used in Ares I Upper Stage Fabrication

    NASA Technical Reports Server (NTRS)

    2007-01-01

Under the goals of the Vision for Space Exploration, Ares I is a chief component of the cost-effective space transportation infrastructure being developed by NASA's Constellation Program. This transportation system will safely and reliably carry human explorers back to the moon, and then onward to Mars and other destinations in the solar system. The Ares I effort includes multiple project element teams at NASA centers and contract organizations around the nation, and is managed by the Exploration Launch Projects Office at NASA's Marshall Space Flight Center (MSFC). ATK Launch Systems near Brigham City, Utah, is the prime contractor for the first stage booster. ATK's subcontractor, United Space Alliance of Houston, is designing, developing and testing the parachutes at its facilities at NASA's Kennedy Space Center in Florida. NASA's Johnson Space Center in Houston hosts the Constellation Program and Orion Crew Capsule Project Office and provides test instrumentation and support personnel. Together, these teams are developing vehicle hardware, evolving proven technologies, and testing components and systems. Their work builds on powerful, reliable space shuttle propulsion elements and nearly a half-century of NASA space flight experience and technological advances. Ares I is an inline, two-stage rocket configuration topped by the Crew Exploration Vehicle, its service module, and a launch abort system. This HD video image depicts the preparation and placement of a confidence ring for friction stir welding used in manufacturing aluminum panels that will be used to fabricate the Ares I upper stage barrel. The aluminum panels are manufactured and subjected to confidence tests during which the bent aluminum is stressed to the breaking point and thoroughly examined. The panels are manufactured by AMRO Manufacturing located in El Monte, California. (Highest resolution available)

  5. Stir Friction Welding Used in Ares I Upper Stage Fabrication

    NASA Technical Reports Server (NTRS)

    2007-01-01

Under the goals of the Vision for Space Exploration, Ares I is a chief component of the cost-effective space transportation infrastructure being developed by NASA's Constellation Program. This transportation system will safely and reliably carry human explorers back to the moon, and then onward to Mars and other destinations in the solar system. The Ares I effort includes multiple project element teams at NASA centers and contract organizations around the nation, and is managed by the Exploration Launch Projects Office at NASA's Marshall Space Flight Center (MSFC). ATK Launch Systems near Brigham City, Utah, is the prime contractor for the first stage booster. ATK's subcontractor, United Space Alliance of Houston, is designing, developing and testing the parachutes at its facilities at NASA's Kennedy Space Center in Florida. NASA's Johnson Space Center in Houston hosts the Constellation Program and Orion Crew Capsule Project Office and provides test instrumentation and support personnel. Together, these teams are developing vehicle hardware, evolving proven technologies, and testing components and systems. Their work builds on powerful, reliable space shuttle propulsion elements and nearly a half-century of NASA space flight experience and technological advances. Ares I is an inline, two-stage rocket configuration topped by the Crew Exploration Vehicle, its service module, and a launch abort system. This HD video image depicts friction stir welding used in manufacturing aluminum panels that will be used to fabricate the Ares I upper stage barrel. The aluminum panels are subjected to confidence panel tests during which the bent aluminum is stressed to the breaking point and thoroughly examined. The panels are manufactured by AMRO Manufacturing located in El Monte, California. (Highest resolution available)

  6. Launch Vehicles

    NASA Image and Video Library

    2007-09-09

Under the goals of the Vision for Space Exploration, Ares I is a chief component of the cost-effective space transportation infrastructure being developed by NASA's Constellation Program. This transportation system will safely and reliably carry human explorers back to the moon, and then onward to Mars and other destinations in the solar system. The Ares I effort includes multiple project element teams at NASA centers and contract organizations around the nation, and is managed by the Exploration Launch Projects Office at NASA's Marshall Space Flight Center (MSFC). ATK Launch Systems near Brigham City, Utah, is the prime contractor for the first stage booster. ATK's subcontractor, United Space Alliance of Houston, is designing, developing and testing the parachutes at its facilities at NASA's Kennedy Space Center in Florida. NASA's Johnson Space Center in Houston hosts the Constellation Program and Orion Crew Capsule Project Office and provides test instrumentation and support personnel. Together, these teams are developing vehicle hardware, evolving proven technologies, and testing components and systems. Their work builds on powerful, reliable space shuttle propulsion elements and nearly a half-century of NASA space flight experience and technological advances. Ares I is an inline, two-stage rocket configuration topped by the Crew Exploration Vehicle, its service module, and a launch abort system. In this HD video image, the first stage reentry parachute drop test is conducted at the Yuma, Arizona, proving ground. The parachute tests demonstrated a three-stage deployment sequence that included the use of an Orbiter drag chute to properly stage the unfurling of the main chute. The parachute recovery system for Orion will be similar to the system used for Apollo command module landings and include two drogue, three pilot, and three main parachutes. (Highest resolution available)

  7. Launch Vehicles

    NASA Image and Video Library

    2006-09-09

Under the goals of the Vision for Space Exploration, Ares I is a chief component of the cost-effective space transportation infrastructure being developed by NASA's Constellation Program. This transportation system will safely and reliably carry human explorers back to the moon, and then onward to Mars and other destinations in the solar system. The Ares I effort includes multiple project element teams at NASA centers and contract organizations around the nation, and is managed by the Exploration Launch Projects Office at NASA's Marshall Space Flight Center (MSFC). ATK Launch Systems near Brigham City, Utah, is the prime contractor for the first stage booster. ATK's subcontractor, United Space Alliance of Houston, is designing, developing and testing the parachutes at its facilities at NASA's Kennedy Space Center in Florida. NASA's Johnson Space Center in Houston hosts the Constellation Program and Orion Crew Capsule Project Office and provides test instrumentation and support personnel. Together, these teams are developing vehicle hardware, evolving proven technologies, and testing components and systems. Their work builds on powerful, reliable space shuttle propulsion elements and nearly a half-century of NASA space flight experience and technological advances. Ares I is an inline, two-stage rocket configuration topped by the Crew Exploration Vehicle, its service module, and a launch abort system. In this HD video image, the first stage reentry parachute drop test is conducted at the Yuma, Arizona, proving ground. The parachute tests demonstrated a three-stage deployment sequence that included the use of an Orbiter drag chute to properly stage the unfurling of the main chute. The parachute recovery system for Orion will be similar to the system used for Apollo command module landings and include two drogue, three pilot, and three main parachutes. (Highest resolution available)

  8. Launch Vehicles

    NASA Image and Video Library

    2007-09-09

Under the goals of the Vision for Space Exploration, Ares I is a chief component of the cost-effective space transportation infrastructure being developed by NASA's Constellation Program. This transportation system will safely and reliably carry human explorers back to the moon, and then onward to Mars and other destinations in the solar system. The Ares I effort includes multiple project element teams at NASA centers and contract organizations around the nation, and is managed by the Exploration Launch Projects Office at NASA's Marshall Space Flight Center (MSFC). ATK Launch Systems near Brigham City, Utah, is the prime contractor for the first stage booster. ATK's subcontractor, United Space Alliance of Houston, is designing, developing and testing the parachutes at its facilities at NASA's Kennedy Space Center in Florida. NASA's Johnson Space Center in Houston hosts the Constellation Program and Orion Crew Capsule Project Office and provides test instrumentation and support personnel. Together, these teams are developing vehicle hardware, evolving proven technologies, and testing components and systems. Their work builds on powerful, reliable space shuttle propulsion elements and nearly a half-century of NASA space flight experience and technological advances. Ares I is an inline, two-stage rocket configuration topped by the Crew Exploration Vehicle, its service module, and a launch abort system. The launch vehicle's first stage is a single, five-segment reusable solid rocket booster derived from the Space Shuttle Program's reusable solid rocket motor that burns a specially formulated and shaped solid propellant called polybutadiene acrylonitrile (PBAN). The second or upper stage will be propelled by a J-2X main engine fueled with liquid oxygen and liquid hydrogen. This HD video image depicts a test firing of a 40k subscale J-2X injector at MSFC's test stand 115. (Highest resolution available)

  9. Light Isotopes and Trace Organics Analysis of Mars Samples with Mass Spectrometry

    NASA Technical Reports Server (NTRS)

    Mahaffy, P.; Niemann, Hasso (Technical Monitor)

    2001-01-01

    Precision measurement of light isotopes in Mars surface minerals, and comparison of this isotopic composition with atmospheric gas and other well-mixed reservoirs such as surface dust, is necessary to understand the history of atmospheric evolution from a possibly warmer and wetter Martian surface to the present state. Atmospheric sources and sinks that set these ratios are volcanism, solar wind sputtering, photochemical processes, and weathering. Measurement of a range of trace organic species, with a particular focus on species such as amino acids that are the building blocks of terrestrial life, is likewise important to address the questions of prebiotic and present or past biological activity on Mars. The workshop topics "isotopic mineralogy" and "biology and pre-biotic chemistry" will be addressed from the point of view of the capabilities and limitations of in situ mass spectrometry (MS) techniques such as thermally evolved gas analysis (TEGA) and gas chromatography (GC) surface experiments, using MS in both cases as the final chemical and isotopic composition detector. In situ experiments using straightforward adaptations of existing space-proven hardware can provide a substantial improvement in the precision and accuracy of our present knowledge of isotopic composition, both in molecular and atomic species in the atmosphere and in those chemically bound in rocks and soils. Likewise, detection of trace organic species with greatly improved sensitivity over the Viking GCMS experiment is possible using gas enrichment techniques. The limits to precision and accuracy of presently feasible in situ techniques, compared to laboratory analysis of returned samples, will be explored. The in situ techniques are sufficiently powerful that they can provide a high-fidelity method of screening samples obtained from a diverse set of surface locations, such as the subsurface or the interior of rocks, for selection of those that are the most interesting for return to Earth.

  10. Treatment of malignant biliary obstruction by endoscopic implantation of iridium 192 using a new double lumen endoprosthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siegel, J.H.; Lichtenstein, J.L.; Pullano, W.E.

    1988-07-01

    Iridium 192 seeds contained in a ribbon were preloaded into a new double lumen 11 Fr endoprosthesis, which was then inserted into malignant strictures of the bile duct and ampulla and left in place for 48 hours until 5000 rads were delivered to the tumor. The procedure was carried out in 14 patients (7 women, 7 men; mean age, 63.2 years; range, 46 to 86 years). Six patients were treated for cholangiocarcinomas, four for pancreatic carcinomas, and four for ampullary carcinomas. No complications occurred. The mean survival of the group was 7 months (range, 3 days to 27 months). This new technique provides both intraluminal brachytherapy and biliary drainage; the prosthesis is inserted intraduodenally across the papilla of Vater, avoiding the liver puncture and external hardware required by the percutaneous technique and the hardware necessitated by a nasobiliary tube. Following removal of the iridium prosthesis, a large caliber endoprosthesis is inserted for continued decompression. Because of the proven efficacy of endoprostheses, this new technique should be considered when intraluminal irradiation is indicated.

  11. Upgrading NASA/DOSE laser ranging system control computers

    NASA Technical Reports Server (NTRS)

    Ricklefs, Randall L.; Cheek, Jack; Seery, Paul J.; Emenheiser, Kenneth S.; Hanrahan, William P., III; Mcgarry, Jan F.

    1993-01-01

    Laser ranging systems now managed by the NASA Dynamics of the Solid Earth (DOSE) and operated by the Bendix Field Engineering Corporation, the University of Hawaii, and the University of Texas have produced a wealth of interdisciplinary scientific data over the last three decades. Despite upgrades to most of the ranging station subsystems, the control computers remain a mix of 1970's-vintage minicomputers. These encompass a wide range of vendors, operating systems, and languages, making hardware and software support increasingly difficult. Current technology allows replacement of controller computers at relatively low cost while maintaining excellent processing power and a friendly operating environment. The new controller systems are being designed around IBM-PC-compatible 80486-based microcomputers; a real-time Unix operating system (LynxOS), an X-Windows/Motif user interface, and serial interfaces have been chosen. This design minimizes short- and long-term costs by relying on proven standards for both hardware and software components. Currently, the project is in the design and prototyping stage, with the first systems targeted for production in mid-1993.

  12. On the Accuracy and Parallelism of GPGPU-Powered Incremental Clustering Algorithms

    PubMed Central

    He, Li; Zheng, Hao; Wang, Lei

    2017-01-01

    Incremental clustering algorithms play a vital role in applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering place high demands on the computing power of the hardware platform, and parallel computing is a common solution to meet this demand. The General Purpose Graphic Processing Unit (GPGPU) is a promising parallel computing device, yet incremental clustering algorithms face a dilemma between clustering accuracy and parallelism when powered by a GPGPU. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering, such as evolving granularity. Second, we formally proved two theorems. The first proves the relation between clustering accuracy and evolving granularity and analyzes the upper and lower bounds of different-to-same mis-affiliation; fewer occurrences of such mis-affiliation mean higher accuracy. The second reveals the relation between parallelism and evolving granularity; smaller work-depth means superior parallelism. Through the proofs, we conclude that the accuracy of an incremental clustering algorithm is negatively related to evolving granularity while parallelism is positively related to it, and these contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm, and experimental results verified the theoretical conclusions. PMID:29123546
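
    The granularity trade-off the authors formalize can be made concrete with a toy sequential ("leader") clustering pass, where a distance threshold plays the role of evolving granularity: a coarser threshold yields fewer, larger clusters and less work per point, at the cost of more different-to-same mis-affiliations. The sketch below is our own minimal illustration in Python, not the paper's demo algorithm.

        import numpy as np

        def incremental_cluster(stream, radius):
            # Assign each arriving point to the nearest centre within `radius`,
            # updating that centre as a running mean; otherwise open a new cluster.
            # A larger radius means coarser granularity: fewer clusters, but more
            # chance of merging points that belong to different true clusters.
            centres, counts = [], []
            for x in stream:
                if centres:
                    d = np.linalg.norm(np.asarray(centres) - x, axis=1)
                    j = int(np.argmin(d))
                    if d[j] <= radius:
                        counts[j] += 1
                        centres[j] = centres[j] + (x - centres[j]) / counts[j]
                        continue
                centres.append(np.array(x, dtype=float))
                counts.append(1)
            return np.asarray(centres)

        rng = np.random.default_rng(0)
        stream = rng.normal(size=(1000, 2))
        print("coarse granularity:", len(incremental_cluster(stream, 1.5)), "clusters")
        print("fine granularity:  ", len(incremental_cluster(stream, 0.5)), "clusters")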

  13. A Holistic Approach to Systems Development

    NASA Technical Reports Server (NTRS)

    Wong, Douglas T.

    2008-01-01

    Introduces a holistic and iterative design process. The process is continuous but can be loosely divided into four stages, with more effort spent early in the design. It is human-centered and multidisciplinary, with emphasis on life-cycle cost and extensive use of modeling, simulation, mockups, human subjects, and proven technologies. Human-centered design doesn't mean the human factors discipline is the most important; disciplines that should be involved in the design include subsystem vendors, configuration management, operations research, manufacturing engineering, simulation/modeling, cost engineering, hardware engineering, software engineering, test and evaluation, human factors, electromagnetic compatibility, integrated logistics support, reliability/maintainability/availability, safety engineering, test equipment, training systems, design-to-cost, life-cycle cost, application engineering, etc.

  14. A Real-Time Telemetry Simulator of the IUS Spacecraft

    NASA Technical Reports Server (NTRS)

    Drews, Michael E.; Forman, Douglas A.; Baker, Damon M.; Khazoyan, Louis B.; Viazzo, Danilo

    1998-01-01

    A real-time telemetry simulator of the IUS spacecraft has recently entered operation to train Flight Control Teams for the launch of the AXAF telescope from the Shuttle. The simulator has proven to be a successful, higher-fidelity implementation of its predecessor, while affirming the rapid development methodology used in its design. Although composed of COTS hardware and software, the system simulates the full breadth of the mission: Launch, Pre-Deployment Checkout, Burn Sequence, and AXAF/IUS separation. Realism is increased by patching the system into the operations facility to simulate IUS telemetry, Shuttle telemetry, and the Tracking Station link (commands and status messages).

  15. NEIS (NASA Environmental Information System)

    NASA Technical Reports Server (NTRS)

    Cook, Beth

    1995-01-01

    The NASA Environmental Information System (NEIS) is a tool to support the functions of the NASA Operational Environment Team (NOET). The NEIS is designed to provide a central environmental technology resource drawing on all NASA centers' capabilities, and to support program managers who must ultimately deliver hardware compliant with performance specifications and environmental requirements. The NEIS also tracks environmental regulations, usages of materials and processes, and new technology developments. It has proven to be a useful instrument for channeling information throughout the aerospace community, NASA, other federal agencies, educational institutions, and contractors. The associated paper will discuss the dynamic databases within the NEIS, and the usefulness it provides for environmental compliance efforts.

  16. A new look at deep-sea video

    USGS Publications Warehouse

    Chezar, H.; Lee, J.

    1985-01-01

    A deep-towed photographic system with completely self-contained recording instrumentation and power can obtain color-video and still-photographic transects along rough terrain without the need for a long electrically conducting cable. Both the video- and still-camera systems utilize relatively inexpensive and proven off-the-shelf hardware adapted for deep-water environments. The small instrument frame makes the towed sled an ideal photographic tool for ship or small-boat operations. The system includes a temperature probe and altimeter that relay data acoustically from the sled to the surface ship. This relay enables the operator to simultaneously monitor water temperature and the precise height off the bottom. © 1985.

  17. X-38 Bolt Retractor Subsystem Separation Demonstration

    NASA Technical Reports Server (NTRS)

    Rugless, Fedoria (Editor); Johnston, A. S.; Ahmed, R.; Garrison, J. C.; Gaines, J. L.; Waggoner, J. D.

    2002-01-01

    The Flight Robotics Laboratory (FRL) successfully demonstrated the X-38 bolt retractor subsystem (BRS). The BRS design was proven safe by testing in the Pyrotechnic Shock Facility (PSF) before being demonstrated in the FRL. This Technical Memorandum describes the BRS, FRL, PSF, and interface hardware. Bolt retraction time, spacecraft simulator acceleration, and a force analysis are also presented. The purpose of the demonstration was to show the FRL's capability for spacecraft separation testing using pyrotechnics. Although a formal test was not performed due to schedule and budget constraints, the data show that the BRS is a successful design concept and the FRL is suitable for future separation tests.

  18. Early Flight Fission Test Facilities (EFF-TF) To Support Near-Term Space Fission Systems

    NASA Astrophysics Data System (ADS)

    van Dyke, Melissa

    2004-02-01

    Through hardware-based design and testing, the EFF-TF investigates fission power and propulsion components, subsystems, and integrated system design and performance. Through demonstration of system concepts (designed by Sandia and Los Alamos National Laboratories) in relevant environments, previous non-nuclear tests in the EFF-TF have proven to be a highly effective method (from both cost and performance standpoints) for identifying and resolving integration issues. Ongoing research at the EFF-TF is geared toward facilitating research, development, system integration, and system utilization via cooperative efforts with DOE labs, industry, universities, and other NASA centers. This paper describes the current efforts for 2003.

  19. Evaluation of the feasibility of using the data collection system to operate a network of hydrological and climatological stations at sites remote from normal communication links

    NASA Technical Reports Server (NTRS)

    Perrier, R. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. The General Electric DCP has proven to be a versatile, rugged piece of hardware and has surpassed original expectations; it is very simple to use and does not require skilled staff for installation and operation. It is well suited for use in remote sites where no power is available. From this experience, it is concluded that the data collection system will be very useful in operating a network of hydrometeorological stations situated in sites remote from normal communication links.

  20. Dementia and art: neuronal intermediate filament inclusion disease and dissolution of artistic creativity.

    PubMed

    Budrys, Valmantas; Skullerud, Kari; Petroska, Donatas; Lengveniene, Jurate; Kaubrys, Gintaras

    2007-01-01

    The paper presents a new case of neuronal intermediate filament inclusion disease (NIFID), a recently described variant of early-onset frontotemporal dementia. Documented with repeated brain images, morphologically proven cases further refine the evolving clinical and pathological phenotype of NIFID. For the first time, the paper describes the probable influence of NIFID on the creativity of an accomplished artist, showing rapid dissolution of artistic talent. Copyright (c) 2007 S. Karger AG, Basel.

  1. Improvements In AF Ablation Outcome Will Be Based More On Technological Advancement Versus Mechanistic Understanding.

    PubMed

    Jiang Md, Chen-Yang; Jiang Ms, Ru-Hong

    2014-01-01

    Atrial fibrillation (AF) is one of the most common cardiac arrhythmias. Catheter ablation has proven more effective than antiarrhythmic drugs in preventing clinical recurrence of AF; however, long-term outcomes remain unsatisfactory. Ablation strategies have evolved based on progress in mechanistic understanding, and technologies have advanced continuously. This article reviews current mechanistic concepts and technological advancements in AF treatment, and summarizes their impact on improving AF ablation outcomes.

  2. Discovery radiomics via evolutionary deep radiomic sequencer discovery for pathologically proven lung cancer detection.

    PubMed

    Shafiee, Mohammad Javad; Chung, Audrey G; Khalvati, Farzad; Haider, Masoom A; Wong, Alexander

    2017-10-01

    Lung cancer is the second most commonly diagnosed form of cancer in men and women, and a sufficiently early diagnosis can be pivotal to patient survival. Imaging-based, or radiomics-driven, detection methods have been developed to aid diagnosticians, but they largely rely on hand-crafted features that may not fully encapsulate the differences between cancerous and healthy tissue. Recently, the concept of discovery radiomics was introduced, where custom abstract features are discovered from readily available imaging data. We propose an evolutionary deep radiomic sequencer discovery approach based on evolutionary deep intelligence. Motivated by patient privacy concerns and the idea of operational artificial intelligence, the approach organically evolves increasingly efficient deep radiomic sequencers that produce significantly more compact yet similarly descriptive radiomic sequences over multiple generations. As a result, the framework improves operational efficiency and enables diagnosis to run locally at the radiologist's computer while maintaining detection accuracy. We evaluated the evolved deep radiomic sequencer (EDRS) discovered via the proposed framework against state-of-the-art radiomics-driven and discovery radiomics methods using clinical lung CT data with pathologically proven diagnostic data from the LIDC-IDRI dataset. The EDRS shows improved sensitivity (93.42%), specificity (82.39%), and diagnostic accuracy (88.78%) relative to previous radiomics approaches.
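
    As a rough intuition for how a network might become "significantly more compact over multiple generations," the toy sketch below stochastically prunes a weight matrix so that stronger synapses are more likely to survive into the offspring network. The survival rule, constants, and omitted retraining step are our own illustrative assumptions, not the authors' evolutionary deep intelligence implementation.

        import numpy as np

        rng = np.random.default_rng(7)
        W = rng.normal(size=(256, 256))   # weights of one layer of a 'sequencer'

        for gen in range(5):
            # Synapses survive with probability increasing in their magnitude
            # (the 0.05 floor and 0.9 scale are arbitrary choices for this toy).
            p_survive = 0.05 + 0.9 * np.abs(W) / (np.abs(W).max() + 1e-12)
            mask = rng.random(W.shape) < p_survive
            W = W * mask                  # offspring inherits a sparser subset
            print(f"generation {gen}: {np.count_nonzero(W) / W.size:.1%} synapses remain")
            # ...the offspring network would be retrained on the task here...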

  3. History of remote operations and robotics in nuclear facilities. Robotics and Intelligent Systems Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herndon, J.N.

    1992-05-01

    The field of remote technology is continuing to evolve to support man's efforts to perform tasks in hostile environments. Remote technology has roots which reach into the early history of man. Fireplace pokers, blacksmith's tongs, and periscopes are examples of the beginnings of remote technology. The technology which we recognize today has evolved over the last 45-plus years to support human operations in hostile environments such as nuclear fission and fusion, space, underwater, hazardous chemical, and hazardous manufacturing. The four major categories of approach to remote technology have been (1) protective clothing and equipment for direct human entry, (2) extended reach tools using distance for safety, (3) telemanipulators with barriers for safety, and (4) teleoperators incorporating mobility with distance and/or barriers for safety. The government and commercial nuclear industry has driven the development of the majority of the actual teleoperator hardware available today. This hardware has been developed due to the unsatisfactory performance of the protective-clothing approach in many hostile applications. Systems which have been developed include crane/impact wrench systems, unilateral power manipulators, mechanical master/slaves, and servomanipulators. Work for space applications has been primarily research oriented with few successful space applications, although the shuttle's remote manipulator system has been successful. In the last decade, underwater applications have moved forward significantly, with the offshore oil industry and military applications providing the primary impetus. This document consists of viewgraphs and subtitled figures.

  4. History of remote operations and robotics in nuclear facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herndon, J.N.

    1992-01-01

    The field of remote technology is continuing to evolve to support man's efforts to perform tasks in hostile environments. Remote technology has roots which reach into the early history of man. Fireplace pokers, blacksmith's tongs, and periscopes are examples of the beginnings of remote technology. The technology which we recognize today has evolved over the last 45-plus years to support human operations in hostile environments such as nuclear fission and fusion, space, underwater, hazardous chemical, and hazardous manufacturing. The four major categories of approach to remote technology have been (1) protective clothing and equipment for direct human entry, (2) extended reach tools using distance for safety, (3) telemanipulators with barriers for safety, and (4) teleoperators incorporating mobility with distance and/or barriers for safety. The government and commercial nuclear industry has driven the development of the majority of the actual teleoperator hardware available today. This hardware has been developed due to the unsatisfactory performance of the protective-clothing approach in many hostile applications. Systems which have been developed include crane/impact wrench systems, unilateral power manipulators, mechanical master/slaves, and servomanipulators. Work for space applications has been primarily research oriented with few successful space applications, although the shuttle's remote manipulator system has been successful. In the last decade, underwater applications have moved forward significantly, with the offshore oil industry and military applications providing the primary impetus. This document consists of viewgraphs and subtitled figures.

  5. An MBSE Approach to Space Suit Development

    NASA Technical Reports Server (NTRS)

    Cordova, Lauren; Kovich, Christine; Sargusingh, Miriam

    2012-01-01

    The EVA/Space Suit Development Office (ESSD) Systems Engineering and Integration (SE&I) team has utilized MBSE in multiple programs. After developing operational and architectural models, the MBSE framework was expanded to link the requirements space to the system models through functional analysis and interfaces definitions. By documenting all the connections within the technical baseline, ESSD experienced significant efficiency improvements in analysis and identification of change impacts. One of the biggest challenges presented to the MBSE structure was a program transition and restructuring effort, which was completed successfully in 4 months culminating in the approval of a new EVA Technical Baseline. During this time three requirements sets spanning multiple DRMs were streamlined into one NASA-owned Systems Requirement Document (SRD) that successfully identified requirements relevant to the current hardware development effort while remaining extensible to support future hardware developments. A capability-based hierarchy was established to provide a more flexible framework for future space suit development that can support multiple programs with minimal rework of basic EVA/Space Suit requirements. This MBSE approach was most recently applied for generation of an EMU Demonstrator technical baseline being developed for an ISS DTO. The relatively quick turnaround of operational concepts, architecture definition, and requirements for this new suit development has allowed us to test and evolve the MBSE process and framework in an extremely different setting while still offering extensibility and traceability throughout ESSD projects. The ESSD MBSE framework continues to be evolved in order to support integration of all products associated with the SE&I engine.

  6. Evolution of the Space Station Robotic Manipulator

    NASA Technical Reports Server (NTRS)

    Razvi, Shakeel; Burns, Susan H.

    2007-01-01

    The Space Station Remote Manipulator System (SSRMS), Canadarm2, was launched in 2001 and deployed on the International Space Station (ISS). The Canadarm2 has been instrumental in ISS assembly and maintenance. Canadarm2 shares its heritage with the Space Shuttle Arm (Canadarm). This article explores the evolution from the Shuttle Canadarm to the Space Station Canadarm2 design, which incorporates a 7-degree-of-freedom design, larger joints, and a changeable operating base. This article also addresses phased design, redundancy, life, and maintainability requirements. The design of Canadarm2 meets unique ISS requirements, including expanded handling capability and the ability to be maintained on orbit. The size of ISS necessitated a mobile manipulator, resulting in the unique capability of Canadarm2 to relocate by performing a walk-off to base points located along the Station, interchanging the tip and base of the manipulator. This provides the manipulator with reach and access to a large part of the Station, enabling on-orbit assembly of the Station and providing support to Extra-Vehicular Activity (EVA). Canadarm2 is evolving based on on-orbit operational experience and new functionality requirements. SSRMS functionality is being developed in phases to support evolving ISS assembly and operation as modules are added and the Station becomes more complex. Changes to sustaining software, hardware architecture, and operations have significantly enhanced SSRMS capability to support ISS mission requirements. As a result of operational experience, SSRMS changes have been implemented for Degraded Joint Operations, Force Moment Sensor Thermal Protection, Enabling Ground Controlled Operations, and Software Commutation. Planned Canadarm2 design modifications include: Force Moment Accommodation, Smart Safing, Separate Safing, and Hot Backup. In summary, Canadarm2 continues to evolve in support of new ISS requirements and improved operations. It is a tribute to the design that this evolution can be accomplished while conducting critical on-orbit operations with minimal hardware changes.

  7. Teleoperated Modular Robots for Lunar Operations

    NASA Technical Reports Server (NTRS)

    Globus, Al; Hornby, Greg; Larchev, Greg; Hancher, Matt; Cannon, Howard; Lohn, Jason

    2004-01-01

    Solar system exploration is currently carried out by special purpose robots exquisitely designed for the anticipated tasks. However, all contingencies for in situ resource utilization (ISRU), human habitat preparation, and exploration will be difficult to anticipate. Furthermore, developing the necessary special purpose mechanisms for deployment and other capabilities is difficult and error prone. For example, the Galileo high gain antenna never opened, severely restricting the quantity of data returned by the spacecraft. Also, deployment hardware is used only once. To address these problems, we are developing teleoperated modular robots for lunar missions, including operations in transit from Earth. Teleoperation of lunar systems from Earth involves a three-second speed-of-light delay, but experiments suggest that interactive operations are feasible. Modular robots typically consist of many identical modules that pass power and data between them and can be reconfigured for different tasks, providing great flexibility, inherent redundancy, and graceful degradation as modules fail. Our design features a number of different hub, link, and joint modules to simplify the individual modules, lower structure cost, and provide specialized capabilities. Modular robots are well suited for space applications because of their extreme flexibility, inherent redundancy, high-density packing, and opportunities for mass production. Simple structural modules can be manufactured from lunar regolith in situ using molds or directed solar sintering. Software to direct and control modular robots is difficult to develop. We have used genetic algorithms to evolve both the morphology and control system for walking modular robots. We are currently using evolvable system technology to evolve controllers for modular robots in the ISS glove box. Development of lunar modular robots will require software and physical simulators, including regolith simulation, to enable design and test of robot software and hardware, particularly automation software. Ready access to these simulators could provide opportunities for contest-driven development à la RoboCup (http://www.robocup.org/). Licensing of module designs could provide opportunities in the toy market and for spin-off applications.

  8. Sensor Open System Architecture (SOSA) evolution for collaborative standards development

    NASA Astrophysics Data System (ADS)

    Collier, Charles Patrick; Lipkin, Ilya; Davidson, Steven A.; Baldwin, Rusty; Orlovsky, Michael C.; Ibrahim, Tim

    2017-04-01

    The Sensor Open System Architecture (SOSA) is a C4ISR-focused technical and economic collaborative effort between the Air Force, Navy, Army, the Department of Defense (DoD), Industry, and other Governmental agencies to develop (and incorporate) a technical Open Systems Architecture standard in order to maximize C4ISR sub-system, system, and platform affordability, re-configurability, and hardware/software/firmware re-use. The SOSA effort will effectively create an operational and technical framework for the integration of disparate payloads into C4ISR systems; with a focus on the development of a modular decomposition (defining functions and behaviors) and associated key interfaces (physical and logical) for common multi-purpose architecture for radar, EO/IR, SIGINT, EW, and Communications. SOSA addresses hardware, software, and mechanical/electrical interfaces. The modular decomposition will produce a set of re-useable components, interfaces, and sub-systems that engender reusable capabilities. This, in effect, creates a realistic and affordable ecosystem enabling mission effectiveness through systematic re-use of all available re-composed hardware, software, and electrical/mechanical base components and interfaces. To this end, SOSA will leverage existing standards as much as possible and evolve the SOSA architecture through modification, reuse, and enhancements to achieve C4ISR goals. This paper will present accomplishments over the first year of SOSA initiative.

  9. The Materials Commons: A Collaboration Platform and Information Repository for the Global Materials Community

    NASA Astrophysics Data System (ADS)

    Puchala, Brian; Tarcea, Glenn; Marquis, Emmanuelle. A.; Hedstrom, Margaret; Jagadish, H. V.; Allison, John E.

    2016-08-01

    Accelerating the pace of materials discovery and development requires new approaches and means of collaborating and sharing information. To address this need, we are developing the Materials Commons, a collaboration platform and information repository for use by the structural materials community. The Materials Commons has been designed to be a continuous, seamless part of the scientific workflow process. Researchers upload the results of experiments and computations as they are performed, automatically where possible, along with the provenance information describing the experimental and computational processes. The Materials Commons website provides an easy-to-use interface for uploading and downloading data and data provenance, as well as for searching and sharing data. This paper provides an overview of the Materials Commons. Concepts are also outlined for integrating the Materials Commons with the broader Materials Information Infrastructure that is evolving to support the Materials Genome Initiative.

  10. Fast interactive registration tool for reproducible multi-spectral imaging for wound healing and treatment evaluation

    NASA Astrophysics Data System (ADS)

    Noordmans, Herke J.; de Roode, Rowland; Verdaasdonk, Rudolf

    2007-02-01

    Multi-spectral images of human tissue taken in vivo often contain alignment problems, as patients have difficulty holding their posture during the roughly 20-second acquisition. Previous attempts to correct motion errors with image registration software developed for MR or CT data proved too slow and error-prone for practical use with multi-spectral images. A new software package has been developed that allows the user to play a decisive role in the registration process: the user can monitor the progress of the registration continuously and force it in the right direction when it starts to fail. The software efficiently exploits video-card hardware to gain speed and to provide a perfect subvoxel correspondence between the registration field and the display. An 8-bit graphics card was used to efficiently register and resample 12-bit images using the hardware interpolation modes present on the card. To show the feasibility of this new registration process, the software was applied in clinical practice to evaluate dosimetry for psoriasis and KTP laser treatment. The microscopic differences between images of normal skin and skin exposed to UV light proved that an affine registration step, including zooming and slanting, is critical for a subsequent elastic match to succeed. The combination of user-interactive registration software with optimal use of the potential of PC video-card hardware greatly improves the speed of multi-spectral image registration.
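
    For readers who want the gist of the affine step, the sketch below recovers a small rotation/shift/zoom misalignment between two bands by minimizing negative normalized cross-correlation. It is a CPU stand-in (SciPy) for the paper's video-card implementation, and all function names and parameter choices are ours.

        import numpy as np
        from scipy import ndimage, optimize

        def warp(image, params):
            # Apply a small affine warp: params = (angle_rad, shift_y, shift_x, zoom),
            # rotating and zooming about the image centre.
            angle, ty, tx, zoom = params
            c, s = np.cos(angle), np.sin(angle)
            matrix = zoom * np.array([[c, -s], [s, c]])
            center = np.array(image.shape) / 2.0
            offset = center - matrix @ center + np.array([ty, tx])
            return ndimage.affine_transform(image, matrix, offset=offset, order=1)

        def cost(params, fixed, moving):
            # Negative normalized cross-correlation between fixed and warped moving.
            warped = warp(moving, params)
            f = fixed - fixed.mean()
            m = warped - warped.mean()
            return -np.sum(f * m) / (np.linalg.norm(f) * np.linalg.norm(m) + 1e-12)

        fixed = np.random.rand(128, 128)              # stand-ins for two spectral bands
        moving = ndimage.shift(fixed, (2.5, -1.0))    # simulated patient motion
        res = optimize.minimize(cost, x0=[0.0, 0.0, 0.0, 1.0],
                                args=(fixed, moving), method='Powell')
        print("recovered (angle, ty, tx, zoom):", res.x)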

  11. Orion FSW V and V and Kedalion Engineering Lab Insight

    NASA Technical Reports Server (NTRS)

    Mangieri, Mark L.

    2010-01-01

    NASA, along with its prime Orion contractor and subcontractors, is adapting an avionics system paradigm borrowed from the manned commercial aircraft industry for use in manned space flight systems. Integrated Modular Avionics (IMA) techniques have been proven as a robust avionics solution for manned commercial aircraft (B737/777/787, MD 10/90). This presentation will outline current approaches to adapting IMA, along with its heritage FSW V&V paradigms, into NASA's manned space flight program for Orion. NASA's Kedalion engineering analysis lab is on the forefront of validating many of these contemporary IMA-based techniques. Kedalion has already validated many of the proposed Orion FSW V&V paradigms using Orion's precursory Flight Test Article (FTA) Pad Abort 1 (PA-1) program. The Kedalion lab will evolve its architectures, tools, and techniques in parallel with the evolving Orion program.

  12. Early evolution of efficient enzymes and genome organization

    PubMed Central

    2012-01-01

    Background: Cellular life with complex metabolism probably evolved during the reign of RNA, when it served as both information carrier and enzyme. Jensen proposed that enzymes of primordial cells possessed broad specificities: they were generalists. When and under what conditions could primordial metabolism run by generalist enzymes evolve into contemporary-type metabolism run by specific enzymes? Results: Here we show by numerical simulation of an enzyme-catalyzed reaction chain that specialist enzymes spread after the invention of the chromosome, because protocells harbouring unlinked genes maintain largely non-specific enzymes to reduce their assortment load. When genes are linked on chromosomes, high enzyme specificity evolves because it increases biomass production, also by reducing taxation by side reactions. Conclusion: The constitution of the genetic system has a profound influence on the limits of metabolic efficiency. The major evolutionary transition to chromosomes is thus proven to be a prerequisite for a complex metabolism. Furthermore, the appearance of specific enzymes opens the door for the evolution of their regulation. Reviewers: This article was reviewed by Sándor Pongor, Gáspár Jékely, and Rob Knight. PMID:23114029
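
    The assortment-load argument can be checked with a few lines of Monte Carlo: when unlinked gene copies for two enzyme functions are partitioned randomly at division, a predictable fraction of daughters lacks one function entirely, whereas linkage on a chromosome guarantees both. The copy numbers below are arbitrary illustrative choices of ours, not the paper's model parameters.

        import numpy as np

        rng = np.random.default_rng(3)
        copies = 5           # copies of each gene (A and B) per protocell
        trials = 100_000

        # Unlinked genes: each of the 2*copies gene copies independently goes to a
        # given daughter with probability 1/2 at division.
        inherited = rng.random((trials, 2 * copies)) < 0.5
        has_A = inherited[:, :copies].any(axis=1)
        has_B = inherited[:, copies:].any(axis=1)
        print("daughters missing a function (unlinked):",
              1.0 - (has_A & has_B).mean())   # analytically 2*(1/2)**5 - (1/2)**10

        # Linked genes: each chromosome carries one copy of A and one of B, so any
        # daughter receiving at least one chromosome retains both functions.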

  13. VHDL simulation with access to transistor models

    NASA Technical Reports Server (NTRS)

    Gibson, J.

    1991-01-01

    Hardware description languages such as VHDL have evolved to aid in the design of systems with large numbers of elements and a wide range of electronic and logical abstractions. For high performance circuits, behavioral models may not be able to efficiently include enough detail to give designers confidence in a simulation's accuracy. One option is to provide a link between the VHDL environment and a transistor level simulation environment. The coupling of the Vantage Analysis Systems VHDL simulator and the NOVA simulator provides the combination of VHDL modeling and transistor modeling.

  14. Motivational contracting in space programs - Government and industry prospectives

    NASA Technical Reports Server (NTRS)

    Clough, D. R.

    1985-01-01

    NASA's Marshall Space Flight Center has used incentive-free policies in contracting for Apollo's Saturn Launch vehicle hardware, as well as award-fee contracts for major development and early production programs in the case of the Space Shuttle Program. These programs have evolved to a point at which multiple incentive fees are useful in motivating cost reductions and assuring timely achievement of delivery requirements and flight mission goals. An examination is presently conducted of the relative success of these motivation-oriented techniques, drawing on the comments of both government and industry personnel.

  15. DIY 3D printing of custom orthopaedic implants: a proof of concept study.

    PubMed

    Frame, Mark; Leach, William

    2014-03-01

    3D printing is an emerging technology that is primarily used for aiding the design and prototyping of implants. As this technology has evolved it has now become possible to produce functional and definitive implants manufactured using a 3D printing process. This process, however, previously required a large financial investment in complex machinery and professionals skilled in 3D product design. Our pilot study's aim was to design and create a 3D printed custom orthopaedic implant using only freely available consumer hardware and software.

  16. Computers in health care for the 21st century.

    PubMed

    O'Desky, R I; Ball, M J; Ball, E E

    1990-03-01

    As the world enters the last decade of the 20th Century, there is a great deal of speculation about the effect of computers on the future delivery of health care. In this article, the authors attempt to identify some of the evolving computer technologies and anticipate what effect they will have by the year 2000. Rather than listing potential accomplishments, each of the affected areas (hardware, software, health care systems, and communications) is presented in an evolutionary manner so the reader can better appreciate where we have been and where we are going.

  17. Optical Bench Interferometer - From LISA Pathfinder to NGO/eLISA

    NASA Astrophysics Data System (ADS)

    Taylor, A.; d'Arcio, L.; Bogenstahl, J.; Danzmann, K.; Diekmann, C.; Fitzsimons, E. D.; Gerberding, O.; Heinzel, G.; Hennig, J.-S.; Hogenhuis, H.; Killow, C. J.; Lieser, M.; Lucarelli, S.; Nikolov, S.; Perreur-Lloyd, M.; Pijnenburg, J.; Robertson, D. I.; Sohmer, A.; Tröbs, M.; Ward, H.; Weise, D.

    2013-01-01

    We present a short summary of some optical bench construction and alignment developments that build on experience gained during the LISA Pathfinder optical bench assembly. These include evolved fibre injectors, a new beam vector measurement system, and thermally stable mounting hardware. The beam vector measurement techniques allow the alignment of beams to targets with absolute accuracy of a few microns and 20 microradians. We also describe a newly designed ultra-low-return beam dump that is expected to be a crucial element in the control of ghost beams on the optical benches.

  18. Space Shuttle Abort Evolution

    NASA Technical Reports Server (NTRS)

    Henderson, Edward M.; Nguyen, Tri X.

    2011-01-01

    This paper documents some of the evolutionary steps in developing a rigorous Space Shuttle launch abort capability. The paper addresses the abort strategy during the design and development and how it evolved during Shuttle flight operations. The Space Shuttle Program made numerous adjustments in both the flight hardware and software as the knowledge of the actual flight environment grew. When failures occurred, corrections and improvements were made to avoid a reoccurrence and to provide added capability for crew survival. Finally some lessons learned are summarized for future human launch vehicle designers to consider.

  19. Production of a small-circulation medical journal using desktop publishing methods.

    PubMed

    Peters, B A

    1994-07-01

    Since its inception in January 1988, the Baylor University Medical Center Proceedings, a quarterly medical journal, has been published by the small staff of the Scientific Publications Office (Baylor Research Institute, Dallas, Texas, USA) using microcomputers and page-makeup software in conjunction with a commercial printing company. This article outlines the establishment of the journal; the steps used in the publication process; the software and hardware used; and the changes in design, content, and circulation that have taken place as the journal and the technology used to create it have evolved.

  20. The evolving trend in spacecraft health analysis

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, Russell L.

    1993-01-01

    The Space Flight Operations Center inaugurated the concept of a central data repository for spacecraft data and the distribution of computing power to the end users for that data's analysis at the Jet Propulsion Laboratory. The Advanced Multimission Operations System is continuing the evolution of this concept as new technologies emerge. Constant improvements in data management tools, data visualization, and hardware lead to ever expanding ideas for improving the analysis of spacecraft health in an era of budget constrained mission operations systems. The foundation of this evolution, its history, and its current plans will be discussed.

  1. An evolutionary solution to anesthesia automated record keeping.

    PubMed

    Bicker, A A; Gage, J S; Poppers, P J

    1998-08-01

    Over the course of five years, an automated anesthesia record keeper has evolved through nearly a dozen development stages, each marked by new features and sophistication. Commodity PC hardware and software minimized development costs. Object-oriented analysis, design, and programming supported the process of change. In addition, we developed an evolutionary strategy that optimized motivation and risk management and maximized return on investment. Besides providing record-keeping services, the system supports educational and research activities and, through a flexible plotting paradigm, supports each anesthesiologist's focus on physiological data during and after anesthesia.

  2. The 4.5 inch diameter IPV Ni-H2 cell development program

    NASA Technical Reports Server (NTRS)

    Miller, L.

    1986-01-01

    Interest in larger capacity Ni-H2 battery cells for space applications has resulted in the initiation of a development/qualification/production program. Cell component design was completed and component hardware fabricated and/or delivered. Finished cell design projections demonstrate favorable specific energies in the range of 70 to 75 Wh/kg (32 to 34 Wh/lb) for capacities of 100 to 250 Ah. It is further planned during this effort to evaluate the advanced cell design technology which has evolved from the work conducted at the NASA/Lewis Research Center.

  3. The 4.5 inch diameter IPV Ni-H2 cell development program

    NASA Astrophysics Data System (ADS)

    Miller, L.

    1986-09-01

    Interest in larger capacity Ni-H2 battery cells for space applications has resulted in the initiation of a development/qualification/production program. Cell component design was completed and component hardware fabricated and/or delivered. Finished cell design projections demonstrate favorable specific energies in the range of 70 to 75 Wh/kg (32 to 34 Wh/lb) for capacities of 100 to 250 Ah. It is further planned during this effort to evaluate the advanced cell design technology which has evolved from the work conducted at the NASA/Lewis Research Center.

  4. The Living With a Star Space Environment Testbed Experiments

    NASA Technical Reports Server (NTRS)

    Xapsos, Michael A.

    2014-01-01

    The focus of the Living With a Star (LWS) Space Environment Testbed (SET) program is to improve the performance of hardware in the space radiation environment. The program has developed a payload for the Air Force Research Laboratory (AFRL) Demonstration and Science Experiments (DSX) spacecraft that is scheduled for launch in August 2015 on the SpaceX Falcon Heavy rocket. The primary structure of DSX is an Evolved Expendable Launch Vehicle (EELV) Secondary Payload Adapter (ESPA) ring. DSX will be in a Medium Earth Orbit (MEO). This oral presentation will describe the SET payload.

  5. A Hardware-in-the-Loop Testbed for Spacecraft Formation Flying Applications

    NASA Technical Reports Server (NTRS)

    Leitner, Jesse; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    The Formation Flying Test Bed (FFTB) at NASA Goddard Space Flight Center (GSFC) is being developed as a modular, hybrid dynamic simulation facility employed for end-to-end guidance, navigation, and control (GN&C) analysis and design for formation flying clusters and constellations of satellites. The FFTB will support critical hardware and software technology development to enable current and future missions for NASA, other government agencies, and external customers for a wide range of missions, particularly those involving distributed spacecraft operations. The initial capabilities of the FFTB are based upon an integration of high-fidelity hardware and software simulation, emulation, and test platforms developed at GSFC in recent years, including a high-fidelity GPS simulator which has been a fundamental component of the Guidance, Navigation, and Control Center's GPS Test Facility. The FFTB will evolve continuously over the next several years from a tool with initial capabilities in GPS navigation hardware/software-in-the-loop analysis and closed-loop GPS-based orbit control algorithm assessment to one with cross-link communications and relative navigation analysis and simulation capability. Eventually the FFTB will provide full capability to support all aspects of multi-sensor, absolute and relative position determination and control, in all (attitude and orbit) degrees of freedom, as well as information management for satellite clusters and constellations. In this paper we focus on the architecture of the FFTB as a general GN&C analysis environment for the spacecraft formation flying community inside and outside of NASA GSFC, and we briefly reference some current and future activities which will drive the requirements and development.

  6. PsychoPy--Psychophysics software in Python.

    PubMed

    Peirce, Jonathan W

    2007-05-15

    The vast majority of studies into visual processing are conducted using computer display technology. The current paper describes a new free suite of software tools designed to make this task easier, using the latest advances in hardware and software. PsychoPy is a platform-independent experimental control system written in the Python interpreted language using entirely free libraries. PsychoPy scripts are designed to be extremely easy to read and write, while retaining complete power for the user to customize the stimuli and environment. Tools are provided within the package to allow everything from stimulus presentation and response collection (from a wide range of devices) to simple data analysis such as psychometric function fitting. Most importantly, PsychoPy is highly extensible and the whole system can evolve via user contributions. If a user wants to add support for a particular stimulus, analysis or hardware device they can look at the code for existing examples, modify them and submit the modifications back into the package so that the whole community benefits.
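
    A minimal script conveys the "easy to read and write" claim. The example below is our own, with arbitrary stimulus parameters; it draws a drifting grating for about two seconds and then waits for a keypress, assuming a current PsychoPy installation and a configured monitor profile.

        from psychopy import visual, core, event

        # Open a window and create a sinusoidal grating with a Gaussian mask.
        win = visual.Window(size=(800, 600), color='grey', units='deg',
                            monitor='testMonitor')
        grating = visual.GratingStim(win, tex='sin', mask='gauss', sf=2.0, size=4.0)

        for frame in range(120):          # ~2 s at a 60 Hz refresh rate
            grating.phase += 1.0 / 60.0   # advance the phase each frame to drift
            grating.draw()
            win.flip()

        keys = event.waitKeys(keyList=['left', 'right', 'escape'])  # response
        win.close()
        core.quit()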

  7. AdaNET Dynamic Software Inventory (DSI) prototype component acquisition plan

    NASA Technical Reports Server (NTRS)

    Hanley, Lionel

    1989-01-01

    A component acquisition plan contains the information needed to evaluate, select, and acquire software and hardware components necessary for successful completion of the AdaNET Dynamic Software Inventory (DSI) Management System Prototype. This plan will evolve and be applicable to all phases of the DSI prototype development. Resources, budgets, schedules, and organizations related to component acquisition activities are provided. A purpose and description of each software or hardware component to be acquired would normally be presented; since this is a plan for acquisition of all components, that section is not applicable. The procurement activities and events conducted by the acquirer are described, identifying who is responsible, where the activity will be performed, and when the activities will occur for each planned procurement. Acquisition requirements describe the specific requirements and standards to be followed during component acquisition. The activities which will take place during component acquisition are described. A list of abbreviations and acronyms and a glossary are included.

  8. PsychoPy—Psychophysics software in Python

    PubMed Central

    Peirce, Jonathan W.

    2007-01-01

    The vast majority of studies into visual processing are conducted using computer display technology. The current paper describes a new free suite of software tools designed to make this task easier, using the latest advances in hardware and software. PsychoPy is a platform-independent experimental control system written in the Python interpreted language using entirely free libraries. PsychoPy scripts are designed to be extremely easy to read and write, while retaining complete power for the user to customize the stimuli and environment. Tools are provided within the package to allow everything from stimulus presentation and response collection (from a wide range of devices) to simple data analysis such as psychometric function fitting. Most importantly, PsychoPy is highly extensible and the whole system can evolve via user contributions. If a user wants to add support for a particular stimulus, analysis or hardware device they can look at the code for existing examples, modify them and submit the modifications back into the package so that the whole community benefits. PMID:17254636

  9. A History of Space Shuttle Main Engine (SSME) Redline Limits Management

    NASA Technical Reports Server (NTRS)

    Arnold, Thomas M.

    2011-01-01

    The Space Shuttle Main Engine (SSME) has several "redlines", which are operational limits designated to preclude a catastrophic shutdown of the SSME. The Space Shuttle Orbiter utilizes a combination of hardware and software to enable or disable the automated redline shutdown capability. The Space Shuttle is launched with the automated SSME redline limits enabled, but there are many scenarios which may result in the manual disabling of the software by the onboard crew. The operational philosophy for manually enabling and disabling the redline limits software has evolved continuously throughout the history of the Space Shuttle Program, due to events such as SSME hardware changes and updates to Space Shuttle contingency abort software. In this paper, the evolution of SSME redline limits management will be fully reviewed, including the operational scenarios which call for manual intervention, and the events that triggered changes to the philosophy. Following this review, improvements to the management of redline limits for future spacecraft will be proposed.

  10. Stochastic DT-MRI connectivity mapping on the GPU.

    PubMed

    McGraw, Tim; Nadar, Mariappan

    2007-01-01

    We present a method for stochastic fiber tract mapping from diffusion tensor MRI (DT-MRI) implemented on graphics hardware. From the simulated fibers we compute a connectivity map that gives an indication of the probability that two points in the dataset are connected by a neuronal fiber path. A Bayesian formulation of the fiber model is given and it is shown that the inversion method can be used to construct plausible connectivity. An implementation of this fiber model on the graphics processing unit (GPU) is presented. Since the fiber paths can be stochastically generated independently of one another, the algorithm is highly parallelizable. This allows us to exploit the data-parallel nature of the GPU fragment processors. We also present a framework for the connectivity computation on the GPU. Our implementation allows the user to interactively select regions of interest and observe the evolving connectivity results during computation. Results are presented from the stochastic generation of over 250,000 fiber steps per iteration at interactive frame rates on consumer-grade graphics hardware.
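
    The key point, that independently generated stochastic fibers make the problem embarrassingly parallel, can be seen in a stripped-down 2D CPU sketch: each walker follows the principal eigenvector of the local tensor plus noise, and visit counts over many walkers approximate a connectivity map. This is our simplification, omitting the paper's Bayesian fiber model and GPU fragment-processor mapping.

        import numpy as np

        rng = np.random.default_rng(1)
        shape = (64, 64)
        # Synthetic tensor field: diffusion strongly preferring the x direction.
        tensors = np.tile(np.diag([1.0, 0.2]), (shape[0], shape[1], 1, 1))

        def track(seed, steps=200, step_len=0.5, noise=0.2):
            pos, prev_dir = np.array(seed, float), np.array([1.0, 0.0])
            path = [pos.copy()]
            for _ in range(steps):
                i, j = np.clip(pos.astype(int), 0, np.array(shape) - 1)
                w, v = np.linalg.eigh(tensors[i, j])
                d = v[:, np.argmax(w)]              # principal diffusion direction
                if d @ prev_dir < 0:                # keep a consistent orientation
                    d = -d
                d = d + noise * rng.normal(size=2)  # stochastic perturbation
                d /= np.linalg.norm(d)
                pos, prev_dir = pos + step_len * d, d
                if not (0 <= pos[0] < shape[0] and 0 <= pos[1] < shape[1]):
                    break
                path.append(pos.copy())
            return np.array(path)

        visits = np.zeros(shape)
        for _ in range(500):                 # walkers are independent, hence the
            for p in track((32.0, 5.0)):     # trivial data-parallelism on a GPU
                visits[tuple(p.astype(int))] += 1
        connectivity = visits / visits.max() # crude connectivity estimate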

  11. Color postprocessing for 3-dimensional finite element mesh quality evaluation and evolving graphical workstation

    NASA Technical Reports Server (NTRS)

    Panthaki, Malcolm J.

    1987-01-01

    Three general tasks in general-purpose, interactive color graphics postprocessing for three-dimensional computational mechanics were accomplished. First, the existing program (POSTPRO3D) was ported to a high-resolution device. In the course of this transfer, numerous enhancements were implemented in the program. The performance of the hardware was evaluated from the point of view of engineering postprocessing, and the characteristics of future hardware were discussed. Second, interactive graphical tools were implemented to facilitate qualitative mesh evaluation from a single analysis. The literature was surveyed and a bibliography compiled. Qualitative mesh sensors were examined, and the use of two-dimensional plots of unaveraged responses on the surface of three-dimensional continua was emphasized in an interactive color raster graphics environment. Finally, a postprocessing environment was designed for state-of-the-art workstation technology. Modularity, personalization of the environment, integration of the engineering design processes, and the development and use of high-level graphics tools are some of the features of the intended environment.

  12. Computer-automated evolution of an X-band antenna for NASA's Space Technology 5 mission.

    PubMed

    Hornby, Gregory S; Lohn, Jason D; Linden, Derek S

    2011-01-01

    Whereas the current practice of designing antennas by hand is severely limited because it is both time and labor intensive and requires a significant amount of domain knowledge, evolutionary algorithms can be used to search the design space and automatically find novel antenna designs that are more effective than would otherwise be developed. Here we present our work in using evolutionary algorithms to automatically design an X-band antenna for NASA's Space Technology 5 (ST5) spacecraft. Two evolutionary algorithms were used: the first uses a vector of real-valued parameters and the second uses a tree-structured generative representation for constructing the antenna. The highest-performance antennas from both algorithms were fabricated and tested and both outperformed a hand-designed antenna produced by the antenna contractor for the mission. Subsequent changes to the spacecraft orbit resulted in a change in requirements for the spacecraft antenna. By adjusting our fitness function we were able to rapidly evolve a new set of antennas for this mission in less than a month. One of these new antenna designs was built, tested, and approved for deployment on the three ST5 spacecraft, which were successfully launched into space on March 22, 2006. This evolved antenna design is the first computer-evolved antenna to be deployed for any application and is the first computer-evolved hardware in space.
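
    The first of the two algorithms, evolution over a vector of real-valued parameters, reduces to a few lines once the fitness evaluation is abstracted away. In practice the fitness call would invoke an electromagnetic simulation of the candidate antenna geometry; it is replaced here by a placeholder objective of our own, and the population sizes and mutation scale are likewise illustrative.

        import numpy as np

        rng = np.random.default_rng(42)

        def fitness(params):
            # Placeholder standing in for simulated gain/pattern quality.
            return -np.sum((params - 0.7) ** 2)

        pop = rng.uniform(0, 1, size=(50, 8))       # 50 designs, 8 geometry params
        for gen in range(100):
            scores = np.array([fitness(p) for p in pop])
            parents = pop[np.argsort(scores)[-10:]]  # truncation selection
            children = parents[rng.integers(0, 10, size=40)] \
                       + rng.normal(0, 0.05, size=(40, 8))  # Gaussian mutation
            pop = np.vstack([parents, np.clip(children, 0, 1)])
        print("best design:", pop[np.argmax([fitness(p) for p in pop])])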

  13. Current Status of Percutaneous Transhepatic Biliary Drainage in Palliation of Malignant Obstructive Jaundice: A Review

    PubMed Central

    Chandrashekhara, SH; Gamanagatti, S; Singh, Anuradha; Bhatnagar, Sushma

    2016-01-01

    Malignancies leading to obstructive jaundice often present too late to perform surgery with curative intent. Due to inexorably progressing hyperbilirubinemia and its deleterious effects, drainage needs to be established even in advanced cases. Percutaneous transhepatic biliary drainage (PTBD) and endoscopic retrograde cholangiopancreatography (ERCP) are widely used palliative procedures, each with its own merits and lacunae. With procedural and hardware improvements, the current state-of-the-art PTBD technique equals ERCP in technical success and complication rates. In addition, there is a reduction in immediate procedure-related mortality, with proven survival benefit. Nonetheless, PTBD is the only imminently lifesaving procedure in cholangitis and sepsis. PMID:27803558

  14. A Planetarium Inside Your Office: Virtual Reality in the Dome Production Pipeline

    NASA Astrophysics Data System (ADS)

    Summers, Frank

    2018-01-01

    Producing astronomy visualization sequences for a planetarium without ready access to a dome is a distorted geometric challenge. Fortunately, one can now use virtual reality (VR) to simulate a dome environment without ever leaving one's office chair. The VR dome experience has proven to be a more than suitable pre-visualization method that requires only modest amounts of processing beyond the standard production pipeline. It also provides a crucial testbed for identifying, testing, and fixing the visual constraints and artifacts that arise in a spherical presentation environment. Topics addressed here include rendering, geometric projection, movie encoding, software playback, and hardware setup for a virtual dome using VR headsets.

  15. Operational Concept for the NASA Constellation Program's Ares I Crew Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Best, Joel; Chavers, Greg; Richardson, Lea; Cruzen, Craig

    2008-01-01

    The Ares I design brings together innovation and new technologies with established infrastructure and proven heritage hardware to achieve safe, reliable, and affordable human access to space. NASA draws on 50 years of experience from Apollo and the Space Shuttle. The Marshall Space Flight Center's Mission Operations Laboratory is leading an operability benchmarking effort to compile operations and supportability lessons learned from large launch vehicle systems, both domestic and international. Ares V will be maturing as the Shuttle is retired and the Ares I design enters the production phase. More details on the Ares I and Ares V will be presented at SpaceOps 2010 in Huntsville, Alabama, U.S.A., in April 2010.

  16. Aerial Radiological Measuring System (ARMS): systems, procedures and sensitivity (1976)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyns, P K

    1976-07-01

    This report describes the Aerial Radiological Measuring System (ARMS) designed and operated by EG and G, Inc., for the Energy Research and Development Administration's (ERDA) Division of Operational Safety with the cooperation of the Nuclear Regulatory Commission. Designed to rapidly survey large areas for low-level man-made radiation, the ARMS has also proven extremely useful in locating lost radioactive sources of relatively low activity. The system consists of sodium iodide scintillation detectors, data formatting and recording equipment, positioning equipment, meteorological instruments, direct readout hardware, and data analysis equipment. The instrumentation, operational procedures, data reduction techniques and system sensitivities are described, together with their applications and sample results.

  17. Integrated Vehicle Ground Vibration Testing in Support of Launch Vehicle Loads and Controls Analysis

    NASA Technical Reports Server (NTRS)

    Tuma, Margaret L.; Chenevert, Donald J.

    2009-01-01

    NASA has conducted dynamic tests on each major launch vehicle during the past 45 years. Each test provided invaluable data to correlate and correct analytical models. Ground vibration tests (GVTs) resulted in hardware changes to the Saturn and Space Shuttle vehicles, ensuring crew and vehicle safety. The Ares I integrated vehicle ground vibration test (IVGT) will provide test data such as natural frequencies, mode shapes, and damping to support successful Ares I flights, and will support controls analysis by providing data to reduce model uncertainty. The value of such testing has been proven by past launch vehicle successes and failures. Performing dynamic testing on Ares vehicles will provide confidence that the launch vehicles will be safe and successful in their missions.

  18. Efficient Radiative Transfer for Dynamically Evolving Stratified Atmospheres

    NASA Astrophysics Data System (ADS)

    Judge, Philip G.

    2017-12-01

    We present a fast multi-level and multi-atom non-local thermodynamic equilibrium radiative transfer method for dynamically evolving stratified atmospheres, such as the solar atmosphere. The preconditioning method of Rybicki & Hummer (RH92) is adopted; however, pressed by the need for speed and stability, a “second-order escape probability” scheme is implemented within the framework of the RH92 method, in which frequency- and angle-integrals are carried out analytically. This minimizes the computational work needed, at the expense of some numerical accuracy. The iteration scheme is local; the formal solutions for the intensities are the only non-local component. At present the methods have been coded for vertical transport, applicable to atmospheres that are highly stratified. The probabilistic method seems adequately fast, stable, and sufficiently accurate for exploring dynamical interactions between the evolving MHD atmosphere and radiation using current computer hardware. Current 2D and 3D dynamics codes do not include this interaction as consistently as the current method does. The solutions generated may ultimately serve as initial conditions for dynamical calculations including full 3D radiative transfer. The National Center for Atmospheric Research is sponsored by the National Science Foundation.
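
    In escape-probability form, the frequency- and angle-integrals over the line profile are replaced by a local closure between the mean intensity and the source function. A common first-order version of this closure, shown here for orientation only (the paper implements a second-order scheme within the RH92 preconditioning), can be written as:

      \bar{J} \;\approx\; \bigl(1 - \beta(\tau)\bigr)\, S,
      \qquad
      \beta(\tau) \;=\; \frac{1 - e^{-\tau}}{\tau},

    so the net radiative bracket 1 - \bar{J}/S \approx \beta(\tau) depends only on the local optical depth \tau, which is what makes the iteration scheme local.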

  19. Generation of Silicic Melts in the Early Izu-Bonin Arc Recorded by Detrital Zircons in Proximal Arc Volcaniclastic Rocks From the Philippine Sea

    NASA Astrophysics Data System (ADS)

    Barth, A. P.; Tani, K.; Meffre, S.; Wooden, J. L.; Coble, M. A.; Arculus, R. J.; Ishizuka, O.; Shukle, J. T.

    2017-10-01

    A 1.2 km thick Paleogene volcaniclastic section at International Ocean Discovery Program Site 351-U1438 preserves the deep-marine, proximal record of Izu-Bonin oceanic arc initiation, and volcano evolution along the Kyushu-Palau Ridge (KPR). Pb/U ages and trace element compositions of zircons recovered from volcaniclastic sandstones preserve a remarkable temporal record of juvenile island arc evolution. Pb/U ages ranging from 43 to 27 Ma are compatible with provenance in one or more active arc edifices of the northern KPR. The abundances of selected trace elements with high concentrations provide insight into the genesis of U1438 detrital zircon host melts, and represent useful indicators of both short and long-term variations in melt compositions in arc settings. The Site U1438 zircons span the compositional range between zircons from mid-ocean ridge gabbros and zircons from relatively enriched continental arcs, as predicted for melts in a primitive oceanic arc setting derived from a highly depleted mantle source. Melt zircon saturation temperatures and Ti-in-zircon thermometry suggest a provenance in relatively cool and silicic melts that evolved toward more Th and U-rich compositions with time. Th, U, and light rare earth element enrichments beginning about 35 Ma are consistent with detrital zircons recording development of regional arc asymmetry and selective trace element-enriched rear arc silicic melts as the juvenile Izu-Bonin arc evolved.

  20. SU-F-BRD-13: Quantum Annealing Applied to IMRT Beamlet Intensity Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazareth, D; Spaans, J

    Purpose: We report on the first application of quantum annealing (QA) to the process of beamlet intensity optimization for IMRT. QA is a new technology, which employs novel hardware and software techniques to address various discrete optimization problems in many fields. Methods: We apply the D-Wave Inc. proprietary hardware, which natively exploits quantum mechanical effects for improved optimization. The new QA algorithm, running on this hardware, is most similar to simulated annealing, but relies on natural processes to directly minimize the free energy of a system. A simple quantum system is slowly evolved into a classical system, representing the objective function. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets were employed, due to the current QA hardware limitation of ∼500 binary variables. The beamlet dose matrices were computed using CERR, and an objective function was defined based on typical clinical constraints, including dose-volume objectives. The objective function was discretized, and the QA method was compared to two standard optimization methods, simulated annealing (SA) and Tabu search, run on a conventional computing cluster. Results: Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the SA. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu, and 22.9 for the SA. The QA algorithm required 27–38% of the time required by the other two methods. Conclusion: In terms of objective function value, the QA performance was similar to Tabu but less effective than the SA. However, its speed was 3–4 times faster than the other two methods. This initial experiment suggests that QA-based heuristics may offer significant speedup over conventional clinical optimization methods, as quantum annealing hardware scales to larger sizes.
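
    For comparison with the QA results, the simulated-annealing baseline over discretized beamlet intensities can be sketched as below. The quadratic objective and all problem constants are illustrative stand-ins; the clinical objective also carries dose-volume terms.

      import math, random

      def objective(weights, dose_matrix, target):
          # Quadratic penalty on deviation from the prescribed dose per voxel;
          # real IMRT objectives also include dose-volume constraints.
          dose = [sum(w * d for w, d in zip(weights, row)) for row in dose_matrix]
          return sum((d - t) ** 2 for d, t in zip(dose, target))

      def anneal(dose_matrix, target, levels=8, steps=20000, t0=1.0):
          n = len(dose_matrix[0])
          w = [random.randrange(levels) for _ in range(n)]
          cost = objective(w, dose_matrix, target)
          best, best_cost = list(w), cost
          for k in range(steps):
              t = t0 * (1 - k / steps)          # linear cooling schedule
              i = random.randrange(n)
              old = w[i]
              w[i] = random.randrange(levels)   # move to another intensity level
              new_cost = objective(w, dose_matrix, target)
              if new_cost < cost or random.random() < math.exp(
                      (cost - new_cost) / max(t, 1e-9)):
                  cost = new_cost
                  if cost < best_cost:
                      best, best_cost = list(w), cost
              else:
                  w[i] = old                    # reject the move
          return best, best_cost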

  1. The Mishin mission, December 1962 - December 1993

    NASA Astrophysics Data System (ADS)

    Vick, Charles P.

    1994-09-01

    Despite the large amount of information that has now emerged on the Soviet manned Lunar effort, many unanswered questions remain about the hardware, along with poorly defined historical questions. Only about 45% of the full program story had been told and officially released as of the end of 1993. Much of the program still remains secret, but thanks to seven direct meetings with retired General Chief Designer Acad. V. P. Mishin, many hardware questions have been answered. It has now become possible to define the L3 spacecraft hardware details precisely. Three of the drawings were, by request, signed by Acad. V. P. Mishin even though they were still undergoing revisions; he specified changes, which have since been completed, and kept registered copies for himself. Subsequent changes were made in the light of actual photographs of the hardware and discussions with the individual component designers to reach a precise working understanding of the details. This proved critical in understanding the docking system design configuration. The result is a series of drawings on the Soviet manned Lunar program hardware reflecting many years of research, though only part of the total series of drawings that has been developed on the programme's physical layout. Detail diagrams for the following systems are presented: (1) the Soviet Manned Lunar Landing Spacecraft L-3 and Manned Circumnavigation Spacecraft Zond 7K-L1; (2) the N1 Soviet Manned Lunar Program booster systems layout for the L3 payload; (3) the N1-L3 competitors UR-700, UR-700M (UR-900) and R-56, with the 1969 design variant of the N1-L3 featured for comparison, along with the Proton LK-1 that evolved into the Proton Zond SL-12; and (4) the N1-L3 and N1-L3M compared with the Saturn-V and the G-1-e design concept.

  2. Fast maximum intensity projections of large medical data sets by exploiting hierarchical memory architectures.

    PubMed

    Kiefer, Gundolf; Lehmann, Helko; Weese, Jürgen

    2006-04-01

    Maximum intensity projections (MIPs) are an important visualization technique for angiographic data sets. Efficient data inspection requires frame rates of at least five frames per second at preserved image quality. Despite the advances in computer technology, this task remains a challenge. On the one hand, the sizes of computed tomography and magnetic resonance images are increasing rapidly. On the other hand, rendering algorithms do not automatically benefit from the advances in processor technology, especially for large data sets. This is due to the faster evolving processing power and the slower evolving memory access speed, which is bridged by hierarchical cache memory architectures. In this paper, we investigate memory access optimization methods and use them for generating MIPs on general-purpose central processing units (CPUs) and graphics processing units (GPUs), respectively. These methods can work on any level of the memory hierarchy, and we show that properly combined methods can optimize memory access on multiple levels of the hierarchy at the same time. We present performance measurements to compare different algorithm variants and illustrate the influence of the respective techniques. On current hardware, the efficient handling of the memory hierarchy for CPUs improves the rendering performance by a factor of 3 to 4. On GPUs, we observed that the effect is even larger, especially for large data sets. The methods can easily be adjusted to different hardware specifics, although their impact can vary considerably. They can also be used for other rendering techniques than MIPs, and their use for more general image processing task could be investigated in the future.
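
    The core of the optimization idea is to keep the working set resident in a fast level of the memory hierarchy. The slab-wise MIP below illustrates this in NumPy; the block size and traversal order are illustrative, not the paper's tuned parameters.

      import numpy as np

      def mip_blocked(volume, axis=0, block=64):
          # Maximum intensity projection along one axis, processed in slabs
          # so each slab stays resident in cache (or GPU memory) while its
          # partial maximum is folded into the running projection.
          out = None
          for start in range(0, volume.shape[axis], block):
              idx = np.arange(start, min(start + block, volume.shape[axis]))
              slab = np.take(volume, idx, axis=axis)
              partial = slab.max(axis=axis)
              out = partial if out is None else np.maximum(out, partial)
          return out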

  3. Internal model control for industrial wireless plant using WirelessHART hardware-in-the-loop simulator.

    PubMed

    Tran, Chung Duc; Ibrahim, Rosdiazli; Asirvadam, Vijanth Sagayan; Saad, Nordin; Sabo Miya, Hassan

    2018-04-01

    The emergence of wireless technologies such as WirelessHART and ISA100 Wireless for deployment at industrial process plants has urged the need for research and development in wireless control. This is in view of the fact that recent applications are mainly in the monitoring domain, due to a lack of confidence in the control aspect. WirelessHART has an edge over its counterpart as it is based on the successful wired HART protocol, with over 30 million devices as of 2009. Recent works on control have primarily focused on maintaining the traditional PID control structure, which has proven inadequate for the wireless environment. In contrast, Internal Model Control (IMC), a promising technique for delay compensation, disturbance rejection and setpoint tracking, has not been investigated in the context of WirelessHART. Therefore, this paper discusses the control design using the IMC approach with a focus on wireless processes. The simulation and experimental results using a real-time WirelessHART hardware-in-the-loop simulator (WH-HILS) indicate that the proposed approach is more robust to delay variation of the network than the PID. Copyright © 2017. Published by Elsevier Ltd.
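
    A minimal discrete-time sketch of the IMC structure for a first-order-plus-delay process is given below. The plant constants, delay, and filter pole are hypothetical, not the WH-HILS testbed values; with a perfect internal model the mismatch feedback cancels and the loop remains stable despite the transport delay.

      # Hypothetical first-order-plus-delay plant (illustrative constants).
      a, b, delay = 0.9, 0.1, 5          # plant: y[k+1] = a*y[k] + b*u[k-delay]
      am, bm = a, b                      # internal model, assumed perfect here
      lam = 0.8                          # IMC filter pole: larger = more robust

      r = 1.0                            # setpoint
      u_buf = [0.0] * delay              # transport-delay line for u
      y = ym = u_prev = e_prev = 0.0
      for k in range(100):
          e = r - (y - ym)               # setpoint minus model-plant mismatch
          # Q(z) = filter * inverse of the delay-free model part:
          # u[k] = lam*u[k-1] + ((1 - lam)/bm) * (e[k] - am*e[k-1])
          u = lam * u_prev + ((1 - lam) / bm) * (e - am * e_prev)
          u_prev, e_prev = u, e
          u_buf.append(u)
          u_delayed = u_buf.pop(0)       # u[k - delay]
          y = a * y + b * u_delayed      # plant update
          ym = am * ym + bm * u_delayed  # model update (shares the delay line)
      print("final output:", y)          # converges to the setpoint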

  4. Security screening via computational imaging using frequency-diverse metasurface apertures

    NASA Astrophysics Data System (ADS)

    Smith, David R.; Reynolds, Matthew S.; Gollub, Jonah N.; Marks, Daniel L.; Imani, Mohammadreza F.; Yurduseven, Okan; Arnitz, Daniel; Pedross-Engel, Andreas; Sleasman, Timothy; Trofatter, Parker; Boyarsky, Michael; Rose, Alec; Odabasi, Hayrettin; Lipworth, Guy

    2017-05-01

    Computational imaging is a proven strategy for obtaining high-quality images with fast acquisition rates and simpler hardware. Metasurfaces provide exquisite control over electromagnetic fields, enabling the radiated field to be molded into unique patterns. The fusion of these two concepts can bring about revolutionary advances in the design of imaging systems for security screening. In the context of computational imaging, each field pattern serves as a single measurement of a scene; imaging a scene can then be interpreted as estimating the reflectivity distribution of a target from a set of measurements. As with any computational imaging system, the key challenge is to arrive at a minimal set of measurements from which a diffraction-limited image can be resolved. Here, we show that the information content of a frequency-diverse metasurface aperture can be maximized by design, and used to construct a complete millimeter-wave imaging system spanning a 2 m by 2 m area, consisting of 96 metasurfaces, capable of producing diffraction-limited images of human-scale targets. The metasurface-based frequency-diverse system presented in this work represents an inexpensive, but tremendously flexible alternative to traditional hardware paradigms, offering the possibility of low-cost, real-time, and ubiquitous screening platforms.
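
    In the linear forward model this implies, each radiated field pattern contributes one row of a measurement matrix H, and imaging reduces to estimating the scene reflectivity x from measurements y = Hx + n. A regularized least-squares sketch with synthetic stand-in data (all sizes and values illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      n_pix, n_meas = 256, 180                 # unknowns vs measurements
      H = rng.standard_normal((n_meas, n_pix)) # stand-in for field patterns
      x_true = np.zeros(n_pix)
      x_true[rng.choice(n_pix, 10)] = 1.0      # sparse synthetic scene
      y = H @ x_true + 0.01 * rng.standard_normal(n_meas)  # y = Hx + noise

      # Tikhonov-regularized least squares: x = (H^T H + alpha I)^-1 H^T y
      alpha = 0.1
      x_hat = np.linalg.solve(H.T @ H + alpha * np.eye(n_pix), H.T @ y)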

  5. Software Defined Radios - Architectures, Systems and Functions

    NASA Technical Reports Server (NTRS)

    Sims, William H.

    2017-01-01

    Software Defined Radio is an industry term describing a method of utilizing a minimum amount of Radio Frequency (RF)/analog electronics before digitization takes place. Upon digitization all other functions are performed in software/firmware. There are as many different types of SDRs as there are data systems. Software Defined Radio (SDR) technology has been proven in the commercial sector since the early 90's. Today's rapid advancement in mobile telephone reliability and power management capabilities exemplifies the effectiveness of the SDR technology for the modern communications market. In contrast, the foundations of transponder technology presently qualified for satellite applications were developed during the early space program of the 1960's. SDR technology offers the potential to revolutionize satellite transponder technology by increasing science data through-put capability by at least an order of magnitude. While the SDR is adaptive in nature and is "one-size-fits-all" by design, conventional transponders are built to a specific platform and must be redesigned for every new bus. The SDR uses a minimum amount of analog/Radio Frequency components to up/down-convert the RF signal to/from a digital format. Once analog data is digitized, all processing is performed using hardware logic. Typical SDR processes include filtering, modulation, up/down conversion and demodulation. This presentation will show how the emerging SDR market has leveraged the existing commercial sector to provide a path to a radiation tolerant SDR transponder. These innovations will reduce the cost of transceivers, decrease power requirements, and commensurately reduce volume. A second pay-off is the increased flexibility of the SDR: the same hardware can implement multiple transponder types by altering hardware logic - no change of analog hardware is required - all of which can ultimately be accomplished in orbit. This in turn would provide a high-capability, low-cost transponder to programs of all sizes.
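
    The receive side of such a digitized chain (mix against a numerically controlled oscillator, low-pass filter, decimate) can be sketched as follows; the sample rates, filter length, and cutoff are illustrative:

      import numpy as np

      def digital_downconvert(rf, fs, f_lo, decim=8, ntaps=129):
          # Mix the digitized RF against a complex local oscillator,
          # low-pass filter, and decimate: the core SDR receive chain.
          n = np.arange(len(rf))
          baseband = rf * np.exp(-2j * np.pi * f_lo / fs * n)  # NCO mixing
          cutoff = 0.5 / decim                 # normalized cycles/sample
          taps = np.sinc(2 * cutoff * (np.arange(ntaps) - (ntaps - 1) / 2))
          taps *= np.hamming(ntaps)            # windowed-sinc low-pass filter
          taps /= taps.sum()
          filtered = np.convolve(baseband, taps, mode='same')
          return filtered[::decim]             # decimation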

  6. Software Defined Radios - Architectures, Systems and Functions

    NASA Technical Reports Server (NTRS)

    Sims, Herb

    2017-01-01

    Software Defined Radio is an industry term describing a method of utilizing a minimum amount of Radio Frequency (RF)/analog electronics before digitization takes place. Upon digitization all other functions are performed in software/firmware. There are as many different types of SDRs as there are data systems. Software Defined Radio (SDR) technology has been proven in the commercial sector since the early 90's. Today's rapid advancement in mobile telephone reliability and power management capabilities exemplifies the effectiveness of the SDR technology for the modern communications market. In contrast, the foundations of transponder technology presently qualified for satellite applications were developed during the early space program of the 1960's. SDR technology offers the potential to revolutionize satellite transponder technology by increasing science data through-put capability by at least an order of magnitude. While the SDR is adaptive in nature and is "one-size-fits-all" by design, conventional transponders are built to a specific platform and must be redesigned for every new bus. The SDR uses a minimum amount of analog/Radio Frequency components to up/down-convert the RF signal to/from a digital format. Once analog data is digitized, all processing is performed using hardware logic. Typical SDR processes include filtering, modulation, up/down conversion and demodulation. This presentation will show how the emerging SDR market has leveraged the existing commercial sector to provide a path to a radiation tolerant SDR transponder. These innovations will reduce the cost of transceivers, decrease power requirements, and commensurately reduce volume. A second pay-off is the increased flexibility of the SDR: the same hardware can implement multiple transponder types by altering hardware logic - no change of analog hardware is required - all of which can ultimately be accomplished in orbit. This in turn would provide a high-capability, low-cost transponder to programs of all sizes.

  7. Testing to Transition the J-2X from Paper to Hardware

    NASA Technical Reports Server (NTRS)

    Byrd, Tom

    2010-01-01

    The J-2X Upper Stage Engine (USE) will be the first new human-rated upper stage engine since the Apollo program of the 1960s. It will power the upper stages of the Ares I and Ares V launch vehicles and send the Ares V Earth departure stage toward the Moon as part of NASA's Constellation Program. This paper will provide an overview of progress on the design, testing, and manufacturing of this new engine in 2009 and 2010. The J-2X embodies the program goals of basing the design on proven technology and experience and seeking commonality between the Ares vehicles as a way to minimize risk, shorten development times, and live within current budget constraints. It is based on the proven J-2 engine used on the Saturn IB and Saturn V launch vehicles. The prime contractor for the J-2X is Pratt & Whitney Rocketdyne (PWR), which is under a design, development, test, and engineering (DDT&E) contract covering the period from June 2006 through September 2014. For Ares I, the J-2X will provide engine start at approximately 190,000 feet, operate roughly 500 seconds, and shut down. For Ares V, the J-2X will start at roughly 190,000 feet to place the Earth departure stage (EDS) in orbit, shut down and loiter for up to five days, re-start on command and operate for roughly 300 seconds at its secondary power level to perform trans-lunar injection (TLI), followed by final engine shutdown. The J-2X development effort focuses on four key areas: early risk mitigation, design risk mitigation, component and subassembly testing, and engine system testing. Following that plan, the J-2X successfully completed its critical design review (CDR) in 2008, and it has made significant progress in 2009 and 2010 in moving from the drawing board to the machine shop and test stand. Post-CDR manufacturing is well under way, including PWR in-house and vendor hardware. In addition, a wide range of component and sub-component tests have been completed, and more component tests are planned. Testing includes heritage powerpack testing, turbopump inducer water flow, turbine air flow, turbopump seal testing, main injector and gas generator injector testing, augmented spark igniter testing, nozzle side loads cold flow testing, nozzle extension film cooling flow testing, control system testing with hardware in the loop, and nozzle extension emissivity coating tests. In parallel with hardware manufacturing, work is progressing on the new A-3 test stand to support full duration altitude testing. The Stennis A-2 test stand is scheduled to be turned over to the Constellation Program in September 2010 to be modified for J-2X testing as well. As the structural steel was rising on the A-3 stand, work was under way in the nearby E complex on the chemical steam generator and subscale diffuser concepts to be used to evacuate the A-3 test cell and simulate altitude conditions.

  8. An Open-Source Hardware and Software System for Acquisition and Real-Time Processing of Electrophysiology during High Field MRI

    PubMed Central

    Purdon, Patrick L.; Millan, Hernan; Fuller, Peter L.; Bonmassar, Giorgio

    2008-01-01

    Simultaneous recording of electrophysiology and functional magnetic resonance imaging (fMRI) is a technique of growing importance in neuroscience. Rapidly evolving clinical and scientific requirements have created a need for hardware and software that can be customized for specific applications. Hardware may require customization to enable a variety of recording types (e.g., electroencephalogram, local field potentials, or multi-unit activity) while meeting the stringent and costly requirements of MRI safety and compatibility. Real-time signal processing tools are an enabling technology for studies of learning, attention, sleep, epilepsy, neurofeedback, and neuropharmacology, yet real-time signal processing tools are difficult to develop. We describe an open source system for simultaneous electrophysiology and fMRI featuring low-noise (< 0.6 uV p-p input noise), electromagnetic compatibility for MRI (tested up to 7 Tesla), and user-programmable real-time signal processing. The hardware distribution provides the complete specifications required to build an MRI-compatible electrophysiological data acquisition system, including circuit schematics, printed circuit board (PCB) layouts, Gerber files for PCB fabrication and robotic assembly, a bill of materials with part numbers, data sheets, and vendor information, and test procedures. The software facilitates rapid implementation of real-time signal processing algorithms. This system has been used in human EEG/fMRI studies at 3 and 7 Tesla examining the auditory system, visual system, sleep physiology, and anesthesia, as well as in intracranial electrophysiological studies of the non-human primate visual system during 3 Tesla fMRI, and in human hyperbaric physiology studies at depths of up to 300 feet below sea level. PMID:18761038

  9. An open-source hardware and software system for acquisition and real-time processing of electrophysiology during high field MRI.

    PubMed

    Purdon, Patrick L; Millan, Hernan; Fuller, Peter L; Bonmassar, Giorgio

    2008-11-15

    Simultaneous recording of electrophysiology and functional magnetic resonance imaging (fMRI) is a technique of growing importance in neuroscience. Rapidly evolving clinical and scientific requirements have created a need for hardware and software that can be customized for specific applications. Hardware may require customization to enable a variety of recording types (e.g., electroencephalogram, local field potentials, or multi-unit activity) while meeting the stringent and costly requirements of MRI safety and compatibility. Real-time signal processing tools are an enabling technology for studies of learning, attention, sleep, epilepsy, neurofeedback, and neuropharmacology, yet real-time signal processing tools are difficult to develop. We describe an open-source system for simultaneous electrophysiology and fMRI featuring low-noise (<0.6microV p-p input noise), electromagnetic compatibility for MRI (tested up to 7T), and user-programmable real-time signal processing. The hardware distribution provides the complete specifications required to build an MRI-compatible electrophysiological data acquisition system, including circuit schematics, printed circuit board (PCB) layouts, Gerber files for PCB fabrication and robotic assembly, a bill of materials with part numbers, data sheets, and vendor information, and test procedures. The software facilitates rapid implementation of real-time signal processing algorithms. This system has been used in human EEG/fMRI studies at 3 and 7T examining the auditory system, visual system, sleep physiology, and anesthesia, as well as in intracranial electrophysiological studies of the non-human primate visual system during 3T fMRI, and in human hyperbaric physiology studies at depths of up to 300 feet below sea level.

  10. Quantum Heterogeneous Computing for Satellite Positioning Optimization

    NASA Astrophysics Data System (ADS)

    Bass, G.; Kumar, V.; Dulny, J., III

    2016-12-01

    Hard optimization problems occur in many fields of academic study and practical situations. We present results in which quantum heterogeneous computing is used to solve a real-world optimization problem: satellite positioning. Optimization problems like this can scale very rapidly with problem size, and become unsolvable with traditional brute-force methods. Typically, such problems have been approximately solved with heuristic approaches; however, these methods can take a long time to calculate and are not guaranteed to find optimal solutions. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. There are now commercially available quantum annealing (QA) devices that are designed to solve difficult optimization problems. These devices have 1000+ quantum bits, but they have significant hardware size and connectivity limitations. We present a novel heterogeneous computing stack that combines QA and classical machine learning and allows the use of QA on problems larger than the quantum hardware could solve in isolation. We begin by analyzing the satellite positioning problem with a heuristic solver, the genetic algorithm. The classical computer's comparatively large available memory can explore the full problem space and converge to a solution relatively close to the true optimum. The QA device can then evolve directly to the optimal solution within this more limited space. Preliminary experiments, using the Quantum Monte Carlo (QMC) algorithm to simulate QA hardware, have produced promising results. Working with problem instances with known global minima, we find a solution within 8% in a matter of seconds, and within 5% in a few minutes. Future studies include replacing QMC with commercially available quantum hardware and exploring more problem sets and model parameters. Our results have important implications for how heterogeneous quantum computing can be used to solve difficult optimization problems in any field.
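
    The classical first stage can be illustrated with a small genetic algorithm over a real-valued placement vector. The cost function is a placeholder for the actual satellite-positioning objective, and all constants are illustrative:

      import random

      def cost(candidate):
          # Placeholder objective; a real satellite-positioning cost would
          # score coverage/geometry for a set of orbital slot assignments.
          return sum((c - 0.3) ** 2 for c in candidate)

      def genetic_search(n=12, pop_size=40, generations=200, p_mut=0.1):
          pop = [[random.random() for _ in range(n)] for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=cost)
              elite = pop[:pop_size // 4]                 # selection
              children = []
              while len(elite) + len(children) < pop_size:
                  a, b = random.sample(elite, 2)
                  cut = random.randrange(1, n)            # one-point crossover
                  child = a[:cut] + b[cut:]
                  children.append([c + random.gauss(0, 0.05)
                                   if random.random() < p_mut else c
                                   for c in child])       # mutation
              pop = elite + children
          return min(pop, key=cost)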

  11. Scaling Retro-Commissioning to Small Commercial Buildings: A Turnkey Automated Hardware-Software Solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Guanjing; Granderson, J.; Brambley, Michael R.

    2015-07-01

    In the United States, small commercial buildings represent 51% of total floor space of all commercial buildings and consume nearly 3 quadrillion Btu (3.2 quintillion joule) of site energy annually, presenting an enormous opportunity for energy savings. Retro-commissioning (RCx), the process through which professional energy service providers identify and correct operational problems, has proven to be a cost-effective means to achieve median energy savings of 16%. However, retro-commissioning is not typically conducted at scale throughout the commercial stock. Very few small commercial buildings are retro-commissioned because utility expenses are relatively modest, margins are tighter, and capital for improvements is limited. In addition, small buildings do not have in-house staff with the expertise to identify improvement opportunities. In response, a turnkey hardware-software solution was developed to enable cost-effective, monitoring-based RCx of small commercial buildings. This highly tailored solution enables non-commissioning providers to identify energy and comfort problems, as well as associated cost impacts and remedies. It also facilitates scale by offering energy service providers the means to streamline their existing processes and reduce costs by more than half. The turnkey RCx sensor suitcase consists of two primary components: a suitcase of sensors for short-term building data collection that guides users through the process of deploying and retrieving their data and a software application that automates analysis of sensor data, identifies problems and generates recommendations. This paper presents the design and testing of prototype models, including descriptions of the hardware design, analysis algorithms, performance testing, and plans for dissemination.

  12. A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method.

    PubMed

    Tuta, Jure; Juric, Matjaz B

    2016-12-06

    This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments—some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive, and thus maintenance-free, and is based on Wi-Fi only. We have employed two well-known propagation models—free space path loss and ITU models—which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal, without requiring any additional hardware besides Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi-only self-adaptive approaches that do not require the mobile terminal to be in access-point mode. The only input requirements of the method are Wi-Fi access point positions, and the positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean errors of 2–3 and 3–4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method, which relies on simple hardware and software requirements.
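
    For the free-space path loss component of such a model, an access-point distance can be inferred from a received signal strength reading by inverting FSPL(dB) = 20 log10(d_km) + 20 log10(f_MHz) + 32.44. The wall-loss parameter below is a hypothetical stand-in for the extra attenuation terms the method fits:

      import math

      def fspl_db(distance_m, freq_mhz):
          # Free-space path loss with d in km and f in MHz.
          return (20 * math.log10(distance_m / 1000.0)
                  + 20 * math.log10(freq_mhz) + 32.44)

      def distance_from_rssi(tx_power_dbm, rssi_dbm, freq_mhz=2412.0,
                             wall_loss_db=0.0):
          # Invert the model to estimate distance from a Wi-Fi RSSI reading;
          # wall_loss_db is a hypothetical per-wall attenuation term.
          path_loss = tx_power_dbm - rssi_dbm - wall_loss_db
          d_km = 10 ** ((path_loss - 32.44 - 20 * math.log10(freq_mhz)) / 20.0)
          return d_km * 1000.0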

  13. A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method

    PubMed Central

    Tuta, Jure; Juric, Matjaz B.

    2016-01-01

    This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments—some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive, and thus maintenance-free, and is based on Wi-Fi only. We have employed two well-known propagation models—free space path loss and ITU models—which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal, without requiring any additional hardware besides Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi-only self-adaptive approaches that do not require the mobile terminal to be in access-point mode. The only input requirements of the method are Wi-Fi access point positions, and the positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean errors of 2–3 and 3–4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method, which relies on simple hardware and software requirements. PMID:27929453

  14. Best Practices for the Security of Radioactive Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coulter, D.T.; Musolino, S.

    2009-05-01

    This work is funded under a grant provided by the US Department of Health and Human Services, Centers for Disease Control. The Department of Health and Mental Hygiene (DOHMH) awarded a contract to Brookhaven National Laboratory (BNL) to develop best practices guidance for Office of Radiological Health (ORH) licensees to increase on-site security to deter and prevent theft of radioactive materials (RAM). The purpose of this document is to describe best practices available to manage the security of radioactive materials in medical centers, hospitals, and research facilities. There are thousands of such facilities in the United States, and recent studies suggest that these materials may be vulnerable to theft or sabotage. Their malevolent use in a radiological-dispersion device (RDD), viz., a dirty bomb, can have severe environmental and economic impacts, with associated area denial and potentially large cleanup costs, as well as other effects on the licensees and the public. These issues are important to all Nuclear Regulatory Commission and Agreement State licensees, and to the general public. This document outlines approaches for licensees possessing these materials to undertake security audits to identify vulnerabilities in how these materials are stored or used, and describes best practices to upgrade or enhance their security. Best practices can be described as the most efficient (least amount of effort/cost) and effective (best results) way of accomplishing a task and meeting an objective, based on repeatable procedures that have proven themselves over time for many people and circumstances. Best practices within the security industry include information security, personnel security, administrative security, and physical security. Each discipline within the security industry has its own 'best practices' that have evolved over time into common ones. With respect to radiological devices and radioactive-materials security, industry best practices encompass both physical security (hardware and engineering) and administrative procedures. Security regimes for these devices and materials typically use a defense-in-depth or layered-security approach to eliminate single points of failure. The Department of Energy, the Department of Homeland Security, the Department of Defense, the American Society of Industrial Security (ASIS), the Security Industry Association (SIA) and Underwriters Laboratory (UL) all provide design guidance and hardware specifications. With a graded approach, a physical-security specialist can tailor an integrated security-management system in the most appropriate cost-effective manner to meet the regulatory and non-regulatory requirements of the licensee or client.

  15. Polymeric reaction between aldehyde group in furfural and phenolic derivatives from liquefaction of oil palm empty fruit bunch fiber as phenol-furfural resin

    NASA Astrophysics Data System (ADS)

    Masli, M. Z.; Zakaria, S.; Chia, C. H.; Roslan, R.

    2016-11-01

    Resinification of liquefied empty fruit bunch with furfural (LEFB-Fu) was performed. During the resinification process, samples were taken every hour for up to 4 hours, and FTIR analysis of the samples was conducted to follow the progress of the reaction. The analysis showed the bands at 1512 cm^-1 and 1692 cm^-1 evolving and diminishing, respectively, indicating the consumption of furfural. The postulated polymerization was also supported by the observed increase in the extent of substitution of the aromatic ring.

  16. Adenoid cystic carcinoma of the nasopharynx after previous adenoid irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sofferman, R.A.; Heisse, J.W. Jr.

    1985-04-01

    In 1978, Pratt challenged the otolaryngology community to identify an incidence of malignancy in individuals who had previously received radium therapy to the nasopharyngeal lymphoid tissues. This case report is a direct response to that quest and presents a well-documented adenoid cystic carcinoma evolving 23 years after radium applicator treatment of the fossa of Rosenmuller. Although a cause-and-effect relationship cannot be scientifically proven, the case history raises several important questions concerning the stimulating effects of radiation on the later onset of frank malignancy.

  17. Polyimide composites: Application histories

    NASA Technical Reports Server (NTRS)

    Poveromo, L. M.

    1985-01-01

    Advanced composite hardware exposed to thermal environments above 127 C (260 F) must be fabricated from materials having resin matrices whose thermal/moisture resistance is superior to that of conventional epoxy-matrix systems. A family of polyimide resins has evolved in the last 10 years that exhibits the thermal-oxidative stability required for high-temperature technology applications. The weight and structural benefits of organic-matrix composites can now be extended by designers and materials engineers to include structures exposed to 316 C (600 F). Polyimide composite materials are now commercially available that can replace metallic or epoxy composite structures in a wide range of aerospace applications.

  18. Extending the boundaries of reverse engineering

    NASA Astrophysics Data System (ADS)

    Lawrie, Chris

    2002-04-01

    In today's marketplace, the potential of Reverse Engineering as a time-compression tool is commonly lost under its traditional definition. The term Reverse Engineering was coined at the advent of CMM machines and 3D CAD systems to describe the process of fitting surfaces to captured point data. Since those early beginnings, downstream hardware scanning and digitising systems have evolved in parallel with upstream demand, greatly increasing the potential of a point-cloud data set within engineering design and manufacturing processes. This paper discusses the issues surrounding Reverse Engineering at the turn of the millennium.

  19. Atmospheric, Magnetospheric and Plasmas in Space (AMPS) spacelab payload definition study - program analysis and planning for phase C/D document - Volume 7

    NASA Technical Reports Server (NTRS)

    Keeley, J. T.

    1976-01-01

    Typical missions identified for AMPS flights in the early 1980's are described. Experiment objectives and the typical scientific instruments selected to accomplish them are discussed, along with mission requirements; shuttle and Spacelab capabilities are assessed to determine any AMPS-unique requirements. Preliminary design concepts for the first two AMPS flights form the basis for the Phase C/D program plan. This plan implements flights 1 and 2 and indicates how both the scientific and flight support hardware can be systematically evolved for future AMPS flights.

  20. HACC: Extreme Scaling and Performance Across Diverse Architectures

    NASA Astrophysics Data System (ADS)

    Habib, Salman; Morozov, Vitali; Frontiere, Nicholas; Finkel, Hal; Pope, Adrian; Heitmann, Katrin

    2013-11-01

    Supercomputing is evolving towards hybrid and accelerator-based architectures with millions of cores. The HACC (Hardware/Hybrid Accelerated Cosmology Code) framework exploits this diverse landscape at the largest scales of problem size, obtaining high scalability and sustained performance. Developed to satisfy the science requirements of cosmological surveys, HACC melds particle and grid methods using a novel algorithmic structure that flexibly maps across architectures, including CPU/GPU, multi/many-core, and Blue Gene systems. We demonstrate the success of HACC on two very different machines, the CPU/GPU system Titan and the BG/Q systems Sequoia and Mira, attaining unprecedented levels of scalable performance. We demonstrate strong and weak scaling on Titan, obtaining up to 99.2% parallel efficiency, evolving 1.1 trillion particles. On Sequoia, we reach 13.94 PFlops (69.2% of peak) and 90% parallel efficiency on 1,572,864 cores, with 3.6 trillion particles, the largest cosmological benchmark yet performed. HACC design concepts are applicable to several other supercomputer applications.

  1. The evolvability of programmable hardware.

    PubMed

    Raman, Karthik; Wagner, Andreas

    2011-02-06

    In biological systems, individual phenotypes are typically adopted by multiple genotypes. Examples include protein structure phenotypes, where each structure can be adopted by a myriad individual amino acid sequence genotypes. These genotypes form vast connected 'neutral networks' in genotype space. The size of such neutral networks endows biological systems not only with robustness to genetic change, but also with the ability to evolve a vast number of novel phenotypes that occur near any one neutral network. Whether technological systems can be designed to have similar properties is poorly understood. Here we ask this question for a class of programmable electronic circuits that compute digital logic functions. The functional flexibility of such circuits is important in many applications, including applications of evolutionary principles to circuit design. The functions they compute are at the heart of all digital computation. We explore a vast space of 10^45 logic circuits ('genotypes') and 10^19 logic functions ('phenotypes'). We demonstrate that circuits that compute the same logic function are connected in large neutral networks that span circuit space. Their robustness or fault-tolerance varies very widely. The vicinity of each neutral network contains circuits with a broad range of novel functions. Two circuits computing different functions can usually be converted into one another via few changes in their architecture. These observations show that properties important for the evolvability of biological systems exist in a commercially important class of electronic circuitry. They also point to generic ways to generate fault-tolerant, adaptable and evolvable electronic circuitry.
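
    The genotype/phenotype framing can be made concrete with a toy feed-forward gate-list circuit: the genotype is a list of gates and wiring choices, the phenotype is the truth table it computes, and a mutation is neutral when the truth table is unchanged. This is an illustrative reconstruction, not the authors' exact circuit model:

      import random
      from itertools import product

      GATES = {'AND': lambda a, b: a & b, 'OR': lambda a, b: a | b,
               'NAND': lambda a, b: 1 - (a & b), 'XOR': lambda a, b: a ^ b}

      def phenotype(genome, n_inputs=4):
          # Genotype: list of (gate, src1, src2); sources index the inputs or
          # earlier gate outputs (feed-forward). Phenotype: the truth table.
          table = []
          for bits in product([0, 1], repeat=n_inputs):
              vals = list(bits)
              for gate, i, j in genome:
                  vals.append(GATES[gate](vals[i], vals[j]))
              table.append(vals[-1])
          return tuple(table)

      def random_genome(n_inputs=4, n_gates=6):
          return [(random.choice(list(GATES)), random.randrange(n_inputs + k),
                   random.randrange(n_inputs + k)) for k in range(n_gates)]

      # A mutation is "neutral" if it leaves the computed function unchanged.
      g = random_genome()
      mutant = list(g)
      k = random.randrange(len(mutant))
      gate, i, j = mutant[k]
      mutant[k] = (random.choice(list(GATES)), i, j)  # swap the gate type only
      print("neutral mutation:", phenotype(g) == phenotype(mutant))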

  2. The evolvability of programmable hardware

    PubMed Central

    Raman, Karthik; Wagner, Andreas

    2011-01-01

    In biological systems, individual phenotypes are typically adopted by multiple genotypes. Examples include protein structure phenotypes, where each structure can be adopted by a myriad individual amino acid sequence genotypes. These genotypes form vast connected ‘neutral networks’ in genotype space. The size of such neutral networks endows biological systems not only with robustness to genetic change, but also with the ability to evolve a vast number of novel phenotypes that occur near any one neutral network. Whether technological systems can be designed to have similar properties is poorly understood. Here we ask this question for a class of programmable electronic circuits that compute digital logic functions. The functional flexibility of such circuits is important in many applications, including applications of evolutionary principles to circuit design. The functions they compute are at the heart of all digital computation. We explore a vast space of 10^45 logic circuits (‘genotypes’) and 10^19 logic functions (‘phenotypes’). We demonstrate that circuits that compute the same logic function are connected in large neutral networks that span circuit space. Their robustness or fault-tolerance varies very widely. The vicinity of each neutral network contains circuits with a broad range of novel functions. Two circuits computing different functions can usually be converted into one another via few changes in their architecture. These observations show that properties important for the evolvability of biological systems exist in a commercially important class of electronic circuitry. They also point to generic ways to generate fault-tolerant, adaptable and evolvable electronic circuitry. PMID:20534598

  3. The Core Avionics System for the DLR Compact-Satellite Series

    NASA Astrophysics Data System (ADS)

    Montenegro, S.; Dittrich, L.

    2008-08-01

    The Standard Satellite Bus's core avionics system is a further step in the development line of the software and hardware architecture first used in the bispectral infrared detector (BIRD) mission. The next step improves the dependability, flexibility and simplicity of the whole core avionics system. Important aspects of this concept were already implemented, simulated and tested in other ESA and industrial projects, so the basic concept can be considered proven. This paper deals with different aspects of core avionics development and proposes an extension to the existing BIRD core avionics system to meet current and future requirements regarding the flexibility, availability and reliability of small satellites and the continuously increasing demand for mass memory and computational power.

  4. Effect of data compression on diagnostic accuracy in digital hand and chest radiography

    NASA Astrophysics Data System (ADS)

    Sayre, James W.; Aberle, Denise R.; Boechat, Maria I.; Hall, Theodore R.; Huang, H. K.; Ho, Bruce K. T.; Kashfian, Payam; Rahbar, Guita

    1992-05-01

    Image compression is essential to handle a large volume of digital images including CT, MR, CR, and digitized films in a digital radiology operation. The full-frame bit allocation using the cosine transform technique developed during the last few years has been proven to be an excellent irreversible image compression method. This paper describes the effect of using the hardware compression module on diagnostic accuracy in hand radiographs with subperiosteal resorption and chest radiographs with interstitial disease. Receiver operating characteristic analysis using 71 hand radiographs and 52 chest radiographs with five observers each demonstrates that there is no statistically significant difference in diagnostic accuracy between the original films and the compressed images with a compression ratio as high as 20:1.
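
    A sketch of the full-frame cosine-transform idea, with simple coefficient thresholding standing in for the bit-allocation quantizer used in the actual method:

      import numpy as np
      from scipy.fft import dctn, idctn

      def compress_fullframe(image, ratio=20):
          # Transform the whole image at once (full-frame, not 8x8 blocks),
          # keep roughly the largest 1/ratio of coefficients, and invert.
          coeffs = dctn(image, norm='ortho')
          keep = max(1, coeffs.size // ratio)
          thresh = np.sort(np.abs(coeffs), axis=None)[-keep]
          coeffs[np.abs(coeffs) < thresh] = 0.0  # zero the small coefficients
          return idctn(coeffs, norm='ortho')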

  5. Mars One; creating a human settlement on Mars

    NASA Astrophysics Data System (ADS)

    Wielders, A.; Lansdorp, B.; Flinkenflögel, S.; Versteeg, B.; Kraft, N.; Vaandrager, E.; Wagensveld, M.; Dogra, A.; Casagrande, B.; Aziz, N.

    2013-09-01

    Mars One will take humanity to Mars in 2023, to establish a permanent settlement from which humankind will prosper, learn, and grow. Before the first crew lands, Mars One will have established a habitable, sustainable outpost designed to receive new astronauts every two years. To accomplish this, Mars One has developed a precise, realistic plan based entirely upon proven technologies. It is both economically and logistically feasible, and already underway with the aggregation and appointment of hardware suppliers and experts in space exploration. In this paper Mars One discusses the benefits of the mission for planetary science in general and Mars studies in particular. Furthermore, potential contributions from the planetary community to the Mars One project are identified.

  6. Prototyping machine vision software on the World Wide Web

    NASA Astrophysics Data System (ADS)

    Karantalis, George; Batchelor, Bruce G.

    1998-10-01

    Interactive image processing is a proven technique for analyzing industrial vision applications and building prototype systems. Several previous implementations have used dedicated hardware to perform the image processing, with a top layer of software providing a convenient user interface. More recently, self-contained software packages have been devised that run on a standard computer. The advent of the Java programming language has made it possible to write platform-independent software operating over the Internet or a company-wide Intranet. Thus arises the possibility of designing at least some shop-floor inspection/control systems without the vision engineer ever entering the factories where they will be used. If successful, this project will have a major impact on the productivity of vision systems designers.

  7. Study of an astronomical extreme ultraviolet rocket spectrometer for use on shuttle missions

    NASA Technical Reports Server (NTRS)

    Bowyer, C. S.

    1977-01-01

    The adaptation of an extreme ultraviolet astronomy rocket payload for flight on the shuttle was studied. A sample payload for determining integration and flight procedures for experiments which may typically be flown on shuttle missions was provided. The electrical, mechanical, thermal, and operational interface requirements between the payload and the orbiter were examined. Of particular concern was establishing a baseline payload accommodation which utilizes proven common hardware for electrical, data, command, and possibly real time monitoring functions. The instrument integration and checkout procedures necessary to assure satisfactory in-orbit instrument performance were defined and those procedures which can be implemented in such a way as to minimize their impact on orbiter integration schedules were identified.

  8. Launch and Commissioning of the Deep Space Climate Observatory

    NASA Technical Reports Server (NTRS)

    Frey, Nicholas P.; Davis, Edward P.

    2016-01-01

    The Deep Space Climate Observatory (DSCOVR), formerly known as Triana, successfully launched on February 11th, 2015. To date, each of the five spacecraft attitude control system (ACS) modes has been operating as expected and meeting all guidance, navigation, and control (GN&C) requirements, although several anomalies have been encountered since launch. While unplanned, these anomalies have proven to be invaluable in developing a deeper understanding of the ACS, and drove the design of three alterations to the ACS task of the flight software (FSW). An overview of the GN&C subsystem hardware, including refurbishment, and the ACS architecture are introduced, followed by a chronological discussion of key events, flight performance, and anomalies encountered by the GN&C team.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbara Chapman

    OpenMP was not well recognized at the beginning of the project, around 2003, because of its limited use in DoE production applications and the immature hardware support for an efficient implementation. Yet in recent years it has been gradually adopted, both in HPC applications, mostly in the form of MPI+OpenMP hybrid code, and in mid-scale desktop applications for scientific and experimental studies. We have observed this trend and worked diligently to improve our OpenMP compiler and runtimes, as well as to work with the OpenMP standard organization to make sure OpenMP evolves in a direction close to DoE missions. In the Center for Programming Models for Scalable Parallel Computing project, the HPCTools team at the University of Houston (UH), directed by Dr. Barbara Chapman, has been working with project partners, external collaborators and hardware vendors to increase the scalability and applicability of OpenMP for multi-core (and future manycore) platforms and for distributed memory systems by exploring different programming models, language extensions, compiler optimizations, as well as runtime library support.

  10. History and Benefits of Engine Level Testing Throughout the Space Shuttle Main Engine Program

    NASA Technical Reports Server (NTRS)

    VanHooser, Katherine; Kan, Kenneth; Maddux, Lewis; Runkle, Everett

    2010-01-01

    Rocket engine testing is important throughout a program's life and is essential to the overall success of the program. Space Shuttle Main Engine (SSME) testing can be divided into three phases: development, certification, and operational. Development tests are conducted on the basic design and are used to develop safe start and shutdown transients and to demonstrate mainstage operation. This phase helps form the foundation of the program, demands navigation of a very steep learning curve, and yields results that shape the final engine design. Certification testing involves multiple engine samples and more aggressive test profiles that explore the boundaries of the engine to vehicle interface requirements. The hardware being tested may have evolved slightly from that in the development phase. Operational testing is conducted with mature hardware and includes acceptance testing of flight assets, resolving anomalies that occur in flight, continuing to expand the performance envelope, and implementing design upgrades. This paper will examine these phases of testing and their importance to the SSME program. Examples of tests conducted in each phase will also be presented.

  11. Comparing the OpenMP, MPI, and Hybrid Programming Paradigm on an SMP Cluster

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Jin, Haoqiang; anMey, Dieter; Hatay, Ferhat F.

    2003-01-01

    With the advent of parallel hardware and software technologies, users are faced with the challenge of choosing a programming paradigm best suited for the underlying computer architecture. With the current trend in parallel computer architectures towards clusters of shared memory symmetric multi-processors (SMP), parallel programming techniques have evolved to support parallelism beyond a single level. Which programming paradigm is best depends on the nature of the given problem, the hardware architecture, and the available software. In this study we compare different programming paradigms for the parallelization of a selected benchmark application on a cluster of SMP nodes. We compare the timings of different implementations of the same CFD benchmark application employing the same numerical algorithm on a cluster of Sun Fire SMP nodes. The rest of the paper is structured as follows: in section 2 we briefly discuss the programming models under consideration; we describe our compute platform in section 3; the different implementations of our benchmark code are described in section 4; and the performance results are presented in section 5. We conclude our study in section 6.

  12. A historical perspective of remote operations and robotics in nuclear facilities. Robotics and Intelligent Systems Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herndon, J.N.

    1992-12-31

    The field of remote technology is continuing to evolve to support man's efforts to perform tasks in hostile environments. The technology which we recognize today as remote technology has evolved over the last 45 years to support human operations in hostile environments such as nuclear fission and fusion, space, underwater, hazardous chemical, and hazardous manufacturing. The four major categories of approach to remote technology have been (1) protective clothing and equipment for direct human entry, (2) extended reach tools using distance for safety, (3) telemanipulators with barriers for safety, and (4) teleoperators incorporating mobility with distance and/or barriers for safety. The government and commercial nuclear industry has driven the development of the majority of the actual teleoperator hardware available today. This hardware has been developed largely due to the unsatisfactory performance of the protective-clothing approach in many hostile applications. Manipulation systems which have been developed include crane/impact wrench systems, unilateral power manipulators, mechanical master/slaves, and servomanipulators. Viewing systems have included periscopes, shield windows, and television systems. Experience over the past 45 years indicates that maintenance system flexibility is essential to typical repair tasks because they are usually not repetitive, structured, or planned. Fully remote design (manipulation, task provisions, remote tooling, and facility synergy) is essential to work task efficiency. Work for space applications has been primarily research oriented with relatively few successful space applications, although the shuttle's remote manipulator system has been quite successful. In the last decade, underwater applications have moved forward significantly, with the offshore oil industry and military applications providing the primary impetus.

  13. High Speed, Low Cost Telemetry Access from Space Development Update on Programmable Ultra Lightweight System Adaptable Radio (PULSAR)

    NASA Technical Reports Server (NTRS)

    Simms, William Herbert, III; Varnavas, Kosta; Eberly, Eric

    2014-01-01

    Software Defined Radio (SDR) technology has been proven in the commercial sector since the early 1990s. Today's rapid advancement in mobile telephone reliability and power management capabilities exemplifies the effectiveness of SDR technology for the modern communications market. In contrast, the foundations of transponder technology presently qualified for satellite applications were developed during the early space program of the 1960s. Conventional transponders are built to a specific platform and must be redesigned for every new bus, while an SDR is adaptive in nature and can fit numerous applications with no hardware modifications. An SDR uses a minimum amount of analog / Radio Frequency (RF) components to up/down-convert the RF signal to/from a digital format. Once the signal is digitized, all processing is performed using hardware or software logic. Typical SDR digital processes include filtering, modulation, up/down-conversion, and demodulation. NASA Marshall Space Flight Center (MSFC) Programmable Ultra Lightweight System Adaptable Radio (PULSAR) leverages existing MSFC SDR designs and commercial-sector enhanced capabilities to provide a path to a radiation-tolerant SDR transponder. These innovations (1) reduce the cost of NASA Low Earth Orbit (LEO) and Deep Space standard transponders, (2) decrease power requirements, and (3) commensurately reduce volume. A second payoff is increased SDR flexibility: the same hardware can implement multiple transponder types simply by altering the hardware logic - no change of hardware is required - all of which will ultimately be accomplished in orbit. Development of SDR technology for space applications will provide a highly capable, low-cost transponder to programs of all sizes. The MSFC PULSAR Project results in a Technology Readiness Level (TRL) 7 low-cost telemetry system available to SmallSat and CubeSat missions, as well as other platforms. This paper documents the continued development and verification/validation of the MSFC SDR, called PULSAR, which contributes to advancing the state of the art in transponder design - directly applicable to the SmallSat and CubeSat communities. This paper focuses on lessons learned on the first sub-orbital flight (high-altitude balloon) and the follow-on steps taken to validate PULSAR. A sounding rocket launch, currently planned for 03/2015, will further expose PULSAR to the high dynamics of sub-orbital flights. Future opportunities for orbiting satellite incorporation reside in small satellite missions (FASTSat, CubeSat, etc.).
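
    To illustrate the digital processing chain described above, the sketch below implements a generic down-conversion stage in C: the digitized signal is mixed to baseband with a complex local oscillator and low-pass filtered. This is a textbook SDR building block under assumed parameters, not the PULSAR firmware, whose internal logic the paper does not detail.

      /* Illustrative SDR down-conversion stage: mix a real input signal to
       * baseband with a complex local oscillator, then apply a simple
       * moving-average low-pass filter. All parameters are assumed. */
      #include <math.h>
      #include <stdio.h>

      #ifndef M_PI
      #define M_PI 3.14159265358979323846
      #endif

      #define FS    48000.0   /* sample rate, Hz (assumed) */
      #define FC     6000.0   /* carrier to remove, Hz (assumed) */
      #define NSAMP  1024
      #define NTAP   16       /* boxcar low-pass / decimation length */

      int main(void)
      {
          static double i_bb[NSAMP], q_bb[NSAMP]; /* baseband I/Q */

          for (int n = 0; n < NSAMP; n++) {
              /* Test input: a tone at the carrier frequency. */
              double x = cos(2.0 * M_PI * FC * n / FS);
              /* Mix with complex LO exp(-j*2*pi*FC*n/FS). */
              i_bb[n] =  x * cos(2.0 * M_PI * FC * n / FS);
              q_bb[n] = -x * sin(2.0 * M_PI * FC * n / FS);
          }

          /* Moving-average low-pass keeps the baseband term and
           * suppresses the double-frequency mixing product. */
          for (int n = NTAP - 1; n < NSAMP; n += NTAP) {
              double i_avg = 0.0, q_avg = 0.0;
              for (int k = 0; k < NTAP; k++) {
                  i_avg += i_bb[n - k];
                  q_avg += q_bb[n - k];
              }
              printf("%f %f\n", i_avg / NTAP, q_avg / NTAP);
          }
          return 0;
      }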

  14. Neural Networks for Flight Control

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1996-01-01

    Neural networks are being developed at NASA Ames Research Center to permit real-time adaptive control of time varying nonlinear systems, enhance the fault-tolerance of mission hardware, and permit online system reconfiguration. In general, the problem of controlling time varying nonlinear systems with unknown structures has not been solved. Adaptive neural control techniques show considerable promise and are being applied to technical challenges including automated docking of spacecraft, dynamic balancing of the space station centrifuge, online reconfiguration of damaged aircraft, and reducing cost of new air and spacecraft designs. Our experiences have shown that neural network algorithms solved certain problems that conventional control methods have been unable to effectively address. These include damage mitigation in nonlinear reconfiguration flight control, early performance estimation of new aircraft designs, compensation for damaged planetary mission hardware by using redundant manipulator capability, and space sensor platform stabilization. This presentation explored these developments in the context of neural network control theory. The discussion began with an overview of why neural control has proven attractive for NASA application domains. The more important issues in control system development were then discussed with references to significant technical advances in the literature. Examples of how these methods have been applied were given, followed by projections of emerging application needs and directions.

  15. Flying U.S. science on the U.S.S.R. Cosmos biosatellites

    NASA Technical Reports Server (NTRS)

    Ballard, R. W.; Rossberg Walker, K.

    1992-01-01

    The USSR Cosmos Biosatellites are unmanned missions with durations of approximately 14 days. They are capable of carrying a wide variety of biological specimens such as cells, tissues, plants, and animals, including rodents and rhesus monkeys. The absence of a crew is an advantage with respect to the use of radioisotopes or other toxic materials and contaminants, but a disadvantage with respect to the performance of inflight procedures or repair of hardware failures. Thus, experiment hardware and procedures must be either completely automated or remotely controlled from the ground. A serious limiting factor for experiments is the amount of electrical power available, so, when possible, experiments should be self-contained with their own batteries and data recording devices. Late loading is restricted to approximately 48 hours before launch, and access time upon recovery is not precise since there is a ballistic reentry and the capsule must first be located and recovery vehicles dispatched to the site. Launches are quite reliable and there is a proven track record of nine previous Biosatellite flights. This paper will present data and experience from the seven previous Cosmos flights in which the US has participated, as well as the key areas of consideration in planning a flight investigation aboard this Biosatellite platform.

  16. Programmable Ultra Lightweight System Adaptable Radio (PULSAR) Low Cost Telemetry - Access from Space Advanced Technologies or Down the Middle

    NASA Technical Reports Server (NTRS)

    Sims, Herb; Varnavas, Kosta; Eberly, Eric

    2013-01-01

    Software Defined Radio (SDR) technology has been proven in the commercial sector since the early 1990s. Today's rapid advancement in mobile telephone reliability and power management capabilities exemplifies the effectiveness of SDR technology for the modern communications market. In contrast, presently qualified satellite transponder applications were developed during the space program of the early 1960s. Programmable Ultra Lightweight System Adaptable Radio (PULSAR, the NASA-MSFC SDR) technology revolutionizes satellite transponder technology by increasing data throughput capability by at least an order of magnitude. PULSAR leverages existing Marshall Space Flight Center SDR designs and commercially enhanced capabilities to provide a path to a radiation-tolerant SDR transponder. These innovations will (1) reduce the cost of NASA Low Earth Orbit (LEO) and Deep Space transponders, (2) decrease power requirements, and (3) commensurately reduce volume. Also, PULSAR increases flexibility to implement multiple transponder types by utilizing the same hardware with altered logic - no analog hardware change is required - all of which can be accomplished in orbit. This provides high-capability, low-cost transponders to programs of all sizes. The final project outcome would be the introduction of a Technology Readiness Level (TRL) 7 low-cost CubeSat to SmallSat telemetry system into the NASA portfolio.

  17. Large angle magnetic suspension test fixture

    NASA Technical Reports Server (NTRS)

    Britcher, Colin P.

    1994-01-01

    Over the past few decades, research has proven the feasibility of the concept of noncontacting magnetic bearing systems which operate with no wear or vibration. As a result, magnetic bearing systems are beginning to emerge as integral parts of modern industrial and aerospace systems. Further applications research is still required. NASA has loaned an existing magnetic bearing device, the Annular Suspension and Pointing System (ASPS), to ODU to permit student design teams the opportunity to pursue some of these studies. The ASPS is a prototype for a high-accuracy space payload pointing and vibration isolation system. The project objectives are to carry out modifications and improvements to the ASPS hardware to permit recommissioning in a 1-g (ground-based) environment. Following recommissioning, new applications will be studied and demonstrated, including a rotary joint for solar panels. The first teams designed and manufactured pole shims to reduce the air-gaps and raise the vertical force capability, and also worked on control system studies. The most recent team concentrated on the operation of a single bearing station, which was successfully accomplished with a PC-based digital controller. The paper will review the history and technical background of the ASPS hardware, followed by presentation of the progress made and the current status of the project.

  18. The Stretched Lens Array (SLA): A Low-Risk, Cost-Effective Array Offering Wing-Level Performance of 180 W/KG and 300 W/M2 at 300 VDC

    NASA Technical Reports Server (NTRS)

    O'Neill, Mark; Piszczor, Michael F.; Eskenazi, Michael I.; McDanal, A. J.; George, Patrick J.; Botke, Matthew M.; Brandhorst, Henry W.; Edwards, David L.; Jaster, Paul A.; Lyons, Valerie J. (Technical Monitor)

    2002-01-01

    At IECEC 2001, our team presented a paper on the new stretched lens array (SLA), including its evolution from the successful SCARLET array on the NASA/JPL Deep Space 1 spacecraft. Since that conference, the SLA team has made significant advances in the SLA technology, including component-level improvements, array-level optimization, space environment exposure testing, and prototype hardware fabrication and evaluation. This paper describes the evolved version of the SLA, highlighting recent improvements in the lens, solar cell, photovoltaic receiver, rigid panel structure, and complete solar array wing.

  19. Apollo experience report: Power generation system

    NASA Technical Reports Server (NTRS)

    Bell, D., III; Plauche, F. M.

    1973-01-01

    A comprehensive review of the design philosophy and experience of the Apollo electrical power generation system is presented. The review of the system covers a period of 8 years, from conception through the Apollo 12 lunar-landing mission. The program progressed from the definition phase to hardware design, system development and qualification, and, ultimately, to the flight phase. Several problems were encountered; however, a technology evolved that enabled resolution of the problems and resulted in a fully man-rated power generation system. These problems are defined and examined, and the corrective action taken is discussed. Several recommendations are made to preclude similar occurrences and to provide a more reliable fuel-cell power system.

  20. Continuing Development for Free-Piston Stirling Space Power Systems

    NASA Astrophysics Data System (ADS)

    Peterson, Allen A.; Qiu, Songgang; Redinger, Darin L.; Augenblick, John E.; Petersen, Stephen L.

    2004-02-01

    Long-life radioisotope power generators based on free-piston Stirling engines are an energy-conversion solution for future space applications. The high efficiency of Stirling machines makes them more attractive than the thermoelectric generators currently used in space. Stirling Technology Company (STC) has been developing free-piston Stirling machines for over 30 years, and its family of Stirling generators is ideally suited for reliable, maintenance-free operation. This paper describes recent progress and status of the STC RemoteGen™ 55 W-class Stirling generator (RG-55), presents an overview of recent testing, and discusses how the technology demonstration design has evolved toward space-qualified hardware.

  1. Tailoring lumazine synthase assemblies for bionanotechnology.

    PubMed

    Azuma, Yusuke; Edwardson, Thomas G W; Hilvert, Donald

    2018-05-21

    Nanoscale compartments formed by hierarchical protein self-assembly are valuable platforms for nanotechnology development. The well-defined structure and broad chemical functionality of protein cages, as well as their amenability to genetic and chemical modification, have enabled their repurposing for diverse applications. In this review, we summarize progress in the engineering of the cage-forming enzyme lumazine synthase. This bacterial nanocompartment has proven to be a malleable scaffold. The natural protein has been diversified to afford a family of unique proteinaceous capsules that have been modified, evolved and assembled with other components to produce nanoreactors, artificial organelles, delivery vehicles and virus mimics.

  2. Interstellar Grains: 50 Years On

    NASA Astrophysics Data System (ADS)

    Wickramasinghe, N. Chandra

    2011-12-01

    Our understanding of the nature of interstellar grains has evolved considerably over the past half century with the present author and Fred Hoyle being intimately involved at several key stages of progress. The currently fashionable graphite-silicate-organic grain model has all its essential aspects unequivocally traceable to original peer-reviewed publications by the author and/or Fred Hoyle. The prevailing reluctance to accept these clear-cut priorities may be linked to our further work that argued for interstellar grains and organics to have a biological provenance - a position perceived as heretical. The biological model, however, continues to provide a powerful unifying hypothesis for a vast amount of otherwise disconnected and disparate astronomical data.

  3. On acquisition of programming knowledge

    NASA Technical Reports Server (NTRS)

    Amin, Ashok T.

    1987-01-01

    For the evolving discipline of programming, acquisition of programming knowledge is a difficult issue. Common knowledge results from the acceptance of proven techniques based on results of formal inquiries into the nature of the programming process. This is a rather slow process. In addition, the vast body of common knowledge needs to be explicated to a low enough level of detail for it to be represented in machine-processable form. It is felt that this is an impediment to the progress of automatic programming. The importance of formal approaches cannot be overstated, since their contributions lead to quantum leaps in the state of the art.

  4. An alternative way to track the hot money in turbulent times

    NASA Astrophysics Data System (ADS)

    Sensoy, Ahmet

    2015-02-01

    During recent years, networks have proven to be an efficient way to characterize and investigate a wide range of complex financial systems. In this study, we first obtain the dynamic conditional correlations between filtered exchange rates (against the US dollar) of several countries and introduce a time-varying threshold correlation level to define dynamic strong correlations between these exchange rates. Then, using evolving networks obtained from strong correlations, we propose an alternative approach to track the hot money in turbulent times. The approach is demonstrated for the time period including the financial turmoil of 2008. Other applications are also discussed.
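
    A minimal sketch of the network-construction step described above: currencies are linked whenever their pairwise correlation exceeds a threshold. The correlation values and the fixed threshold below are illustrative assumptions; in the study the correlations come from a dynamic conditional correlation model and the threshold is time-varying.

      /* Build an unweighted network from a correlation matrix: an edge
       * links two series whose correlation exceeds a threshold. */
      #include <stdio.h>

      #define N 4  /* number of exchange-rate series (illustrative) */

      int main(void)
      {
          /* Assumed correlation matrix for four currencies. */
          double corr[N][N] = {
              {1.00, 0.82, 0.35, 0.71},
              {0.82, 1.00, 0.28, 0.64},
              {0.35, 0.28, 1.00, 0.30},
              {0.71, 0.64, 0.30, 1.00},
          };
          double threshold = 0.60; /* fixed here; time-varying in the paper */
          int adj[N][N] = {{0}};

          for (int i = 0; i < N; i++)
              for (int j = i + 1; j < N; j++)
                  if (corr[i][j] > threshold)
                      adj[i][j] = adj[j][i] = 1; /* strong-correlation edge */

          /* Print the adjacency matrix of the resulting network. */
          for (int i = 0; i < N; i++) {
              for (int j = 0; j < N; j++)
                  printf("%d ", adj[i][j]);
              printf("\n");
          }
          return 0;
      }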

  5. Proven Innovations and New Initiatives in Ground System Development

    NASA Technical Reports Server (NTRS)

    Gunn, Jody M.

    2006-01-01

    The state-of-the-practice for engineering and development of Ground Systems has evolved significantly over the past half decade. Missions that challenge ground system developers with significantly reduced budgets, in spite of requirements for greater and previously unimagined functionality, are now the norm. Making the right trades early in the mission lifecycle is one of the key factors in minimizing ground system costs. The Mission Operations Strategic Leadership Team at the Jet Propulsion Laboratory has spent the last year collecting and working through successes and failures in ground systems for application to future missions.

  6. Proven Innovations and New Initiatives in Ground System Development: Reducing Costs in the Ground System

    NASA Technical Reports Server (NTRS)

    Gunn, Jody M.

    2006-01-01

    The state-of-the-practice for engineering and development of Ground Systems has evolved significantly over the past half decade. Missions that challenge ground system developers with significantly reduced budgets, in spite of requirements for greater and previously unimagined functionality, are now the norm. Making the right trades early in the mission lifecycle is one of the key factors in minimizing ground system costs. The Mission Operations Strategic Leadership Team at the Jet Propulsion Laboratory has spent the last year collecting and working through successes and failures in ground systems for application to future missions.

  7. Overview of GX launch services by GALEX

    NASA Astrophysics Data System (ADS)

    Sato, Koji; Kondou, Yoshirou

    2006-07-01

    Galaxy Express Corporation (GALEX) is a launch service company in Japan formed to develop a medium-size rocket, the GX rocket, and to provide commercial launch services for medium/small low Earth orbit (LEO) and Sun-synchronous orbit (SSO) payloads, with future potential for small geostationary transfer orbit (GTO) payloads. It is GALEX's view that small/medium LEO/SSO payloads compose a medium-scale but stable launch market due to the nature of the missions. The GX rocket is a two-stage rocket with a well flight-proven liquid oxygen (LOX)/kerosene booster and a LOX/liquid natural gas (LNG) upper stage. This LOX/LNG propulsion system, under development by the Japan Aerospace Exploration Agency (JAXA), is robust, offers performance comparable to other propulsion systems, and has future potential for wider applications such as exploration programs. The GX rocket is being developed through joint work between the industries, and a business-oriented approach is being applied in order to realize competitive launch services, for which well flight-proven hardware and necessary new technology are to be introduced as much as possible. It is GALEX's goal to offer “Easy Access to Space”: highly reliable and user-friendly launch services at a competitive price. GX commercial launches will start in Japanese fiscal year (JFY) 2007-2008.

  8. Real-time distributed video coding for 1K-pixel visual sensor networks

    NASA Astrophysics Data System (ADS)

    Hanca, Jan; Deligiannis, Nikos; Munteanu, Adrian

    2016-07-01

    Many applications in visual sensor networks (VSNs) demand the low-cost wireless transmission of video data. In this context, distributed video coding (DVC) has proven its potential to achieve state-of-the-art compression performance while maintaining low computational complexity of the encoder. Despite their proven capabilities, current DVC solutions overlook hardware constraints, and this renders them unsuitable for practical implementations. This paper introduces a DVC architecture that offers highly efficient wireless communication in real-world VSNs. The design takes into account the severe computational and memory constraints imposed by practical implementations on low-resolution visual sensors. We study performance-complexity trade-offs for feedback-channel removal, propose learning-based techniques for rate allocation, and investigate various simplifications of side information generation yielding real-time decoding. The proposed system is evaluated against H.264/AVC intra, Motion-JPEG, and our previously designed DVC prototype for low-resolution visual sensors. Extensive experimental results on various data show significant improvements in multiple configurations. The proposed encoder achieves real-time performance on a 1k-pixel visual sensor mote. Real-time decoding is performed on a Raspberry Pi single-board computer or a low-end notebook PC. To the best of our knowledge, the proposed codec is the first practical DVC deployment on low-resolution VSNs.

  9. Study of sampling systems for comets and Mars

    NASA Technical Reports Server (NTRS)

    Amundsen, R. J.; Clark, B. C.

    1987-01-01

    Several aspects of the techniques that can be applied to acquisition and preservation of samples from Mars and a cometary nucleus were examined. Scientific approaches to sampling, grounded in proven engineering methods, are the key to achieving the maximum science value from the sample return mission. If development of these approaches for collecting and preserving samples does not precede mission definition, it is likely that only suboptimal techniques will be available because of the constraints of formal schedule timelines and the normal pressure to select only the most conservative and least sophisticated approaches when development has lagged the mission milestones. With a reasonable investment now, before the final mission definition, the sampling approach can become highly developed, ready for implementation, and mature enough to help set the requirements for the mission hardware and its performance.

  10. Astrionics system designers handbook, volume 1

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Hardware elements in new and advanced astrionics system designs are discussed. This cost effective approach has as its goal the reduction of R&D and testing costs through the application of proven and tested astrionics components. The ready availability to the designer of data facts for applicable system components is highly desirable. The astrionics System Designers Handbook has as its objective this documenting of data facts to serve the anticipated requirements of the astrionics system designer. Eleven NASA programs were selected as the reference base for the document. These programs are: ATS-F, ERTS-B, HEAO-A, OSO-I, Viking Orbiter, OAO-C, Skylab AM/MDA, Skylab ATM, Apollo 17 CSM, Apollo 17 LM and Mariner Mars 71. Four subsystems were chosen for documentation: communications, data management, electrical power and guidance, navigation and control.

  11. A verified design of a fault-tolerant clock synchronization circuit: Preliminary investigations

    NASA Technical Reports Server (NTRS)

    Miner, Paul S.

    1992-01-01

    Schneider demonstrates that many fault tolerant clock synchronization algorithms can be represented as refinements of a single proven correct paradigm. Shankar provides a mechanical proof that Schneider's schema achieves Byzantine fault tolerant clock synchronization provided that 11 constraints are satisfied. Some of the constraints are assumptions about physical properties of the system and cannot be established formally. Proofs are given that the fault tolerant midpoint convergence function satisfies three of the constraints. A hardware design is presented, implementing the fault tolerant midpoint function, which is shown to satisfy the remaining constraints. The synchronization circuit will recover completely from transient faults provided the maximum fault assumption is not violated. The initialization protocol for the circuit also provides a recovery mechanism from total system failure caused by correlated transient faults.
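
    For context, the fault tolerant midpoint convergence function has a compact standard form: with at most f faulty clocks among n readings (n > 2f), the readings are sorted, the f lowest and f highest are discarded, and the corrected value is the midpoint of the remaining extremes. The C sketch below illustrates that computation only; it is not the verified hardware design presented in the paper.

      /* Fault tolerant midpoint convergence function (illustrative):
       * sort the n clock readings, drop the f lowest and f highest
       * (which may be corrupted by faulty clocks), and return the
       * midpoint of the remaining extremes. Requires n > 2f. */
      #include <stdio.h>
      #include <stdlib.h>

      static int cmp(const void *a, const void *b)
      {
          long x = *(const long *)a, y = *(const long *)b;
          return (x > y) - (x < y);
      }

      static long ft_midpoint(long *readings, int n, int f)
      {
          qsort(readings, n, sizeof readings[0], cmp);
          /* After sorting, readings[f] and readings[n-1-f] bound the
           * values that some non-faulty clock could have reported. */
          return (readings[f] + readings[n - 1 - f]) / 2;
      }

      int main(void)
      {
          /* Four clocks, one of which (the last) is wildly faulty. */
          long readings[] = {1002, 998, 1005, 400000};
          printf("corrected time = %ld\n", ft_midpoint(readings, 4, 1));
          return 0;
      }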

  12. Character recognition from trajectory by recurrent spiking neural networks.

    PubMed

    Jiangrong Shen; Kang Lin; Yueming Wang; Gang Pan

    2017-07-01

    Spiking neural networks are biologically plausible and power-efficient on neuromorphic hardware, while recurrent neural networks have been proven to be efficient on time series data. However, how to use the recurrent property to improve the performance of spiking neural networks remains an open problem. This paper proposes a recurrent spiking neural network for character recognition using trajectories. In the network, a new encoding method is designed in which varying time ranges of input streams are used in different recurrent layers. This improves the generalization ability of our model compared with general encoding methods. The experiments are conducted on four groups of the character data set from the University of Edinburgh. The results show that our method achieves a higher average recognition accuracy than existing methods.
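
    For background, the sketch below shows a leaky integrate-and-fire neuron, a common building block of spiking networks: the membrane potential integrates input current, leaks toward rest, and emits a spike on crossing a threshold. All parameters are assumed for illustration; the paper's specific recurrent architecture and encoding scheme are not reproduced here.

      /* Leaky integrate-and-fire (LIF) neuron, a common spiking-network
       * building block (illustrative; not the paper's architecture). */
      #include <stdio.h>

      int main(void)
      {
          const double dt = 1.0;      /* time step, ms (assumed) */
          const double tau = 20.0;    /* membrane time constant, ms */
          const double v_rest = 0.0, v_thresh = 1.0, v_reset = 0.0;
          double v = v_rest;

          for (int t = 0; t < 100; t++) {
              double input = 0.06;    /* constant input current (assumed) */
              /* Euler step of dv/dt = (-(v - v_rest) + input*tau) / tau */
              v += dt * (-(v - v_rest) + input * tau) / tau;
              if (v >= v_thresh) {
                  printf("spike at t = %d ms\n", t);
                  v = v_reset;        /* reset after firing */
              }
          }
          return 0;
      }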

  13. Nuclear Molecular and Theranostic Imaging for Differentiated Thyroid Cancer

    PubMed Central

    Sheikh, Arif; Polack, Berna; Rodriguez, Yvette; Kuker, Russ

    2017-01-01

    Traditional nuclear medicine is rapidly being transformed by the evolving concepts in molecular imaging and theranostics. The utility of new approaches in differentiated thyroid cancer (DTC) diagnostics and therapy has not been fully appreciated. The clinical information, relevant to disease management and patient care, obtained by scintigraphy is still being underestimated. There has been a trend towards moving away from the use of radioactive iodine (RAI) imaging in the management of the disease. This paradigm shift is supported by the 2015 American Thyroid Association Guidelines (1). A more systematic and comprehensive understanding of disease pathophysiology and imaging methodologies is needed for optimal utilization of different imaging modalities in the management of DTC. There have been significant developments in radiotracer and imaging technology, clinically proven to contribute to the understanding of tumor biology and the clinical assessment of patients with DTC. The research and development in the field continues to evolve, with expected emergence of many novel diagnostic and therapeutic techniques. The role for nuclear imaging applications will continue to evolve and be reconfigured in the changing paradigm. This article aims to review the clinical uses and controversies surrounding the use of scintigraphy, and the information it can provide in assisting in the management and treatment of DTC. PMID:28117289

  14. Next Generation Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Lee, Jimmy; Spencer, Susan; Bryan, Tom; Johnson, Jimmie; Robertson, Bryan

    2008-01-01

    The first autonomous rendezvous and docking in the history of the U.S. Space Program was successfully accomplished by Orbital Express, using the Advanced Video Guidance Sensor (AVGS) as the primary docking sensor. The United States now has a mature and flight-proven sensor technology for supporting Crew Exploration Vehicle (CEV) and Commercial Orbital Transportation Services (COTS) Automated Rendezvous and Docking (AR&D). AVGS has a proven pedigree, based on extensive ground testing and flight demonstrations. The AVGS on the Demonstration of Autonomous Rendezvous Technology (DART) mission operated successfully in "spot mode" out to 2 km. The first-generation rendezvous and docking sensor, the Video Guidance Sensor (VGS), was developed and successfully flown on Space Shuttle flights in 1997 and 1998. Parts obsolescence issues prevent the construction of more AVGS units, and the next-generation sensor must be updated to support the CEV and COTS programs. The flight-proven AR&D sensor is being redesigned to update parts and add additional capabilities for CEV and COTS with the development of the Next Generation AVGS (NGAVGS) at the Marshall Space Flight Center. The obsolete imager and processor are being replaced with new radiation-tolerant parts. In addition, new capabilities might include greater sensor range, auto ranging, and real-time video output. This paper presents an approach to sensor hardware trades, use of highly integrated laser components, and addresses the needs of future vehicles that may rendezvous and dock with the International Space Station (ISS) and other Constellation vehicles. It will also discuss approaches for upgrading AVGS to address parts obsolescence, and concepts for minimizing the sensor footprint, weight, and power requirements. In addition, parts selection and test plans for the NGAVGS will be addressed to provide a highly reliable flight-qualified sensor. Expanded capabilities through innovative use of existing capabilities will also be discussed.

  15. Body donations today and tomorrow: What is best practice and why?

    PubMed

    Riederer, Beat M

    2016-01-01

    There is considerable agreement that the use of human bodies for teaching and research remains important, yet not all universities use dissection to teach human gross anatomy. The concept of body donation has evolved over centuries and there are still considerable discrepancies among countries regarding the means by which human bodies are acquired and used for education and research. Many countries have well-established donation programs and use body dissection to teach most if not all human gross anatomy. In contrast, there are countries without donation programs that use unclaimed bodies or perhaps a few donated bodies instead. In several countries, use of cadavers for dissection is unthinkable for cultural or religious reasons. Against this background, successful donation programs are highlighted in the present review, emphasizing those aspects of the programs that make them successful. Looking to the future, we consider what best practice could look like and how the use of unclaimed bodies for anatomy teaching could be replaced. From an ethical point of view, countries that depend upon unclaimed bodies of dubious provenance are encouraged to use these reports and adopt strategies for developing successful donation programs. In many countries, the act of body donation has been guided by laws and ethical frameworks and has evolved alongside the needs for medical knowledge and for improved teaching of human anatomy. There will also be a future need for human bodies to ensure optimal pre- and post-graduate training and for use in biomedical research. Good body donation practice should be adopted wherever possible, moving away from the use of unclaimed bodies of dubious provenance and adopting strategies to favor the establishment of successful donation programs. © 2015 Wiley Periodicals, Inc.

  16. Enabling Future Robotic Missions with Multicore Processors

    NASA Technical Reports Server (NTRS)

    Powell, Wesley A.; Johnson, Michael A.; Wilmot, Jonathan; Some, Raphael; Gostelow, Kim P.; Reeves, Glenn; Doyle, Richard J.

    2011-01-01

    Recent commercial developments in multicore processors (e.g. Tilera, Clearspeed, HyperX) have provided an option for high-performance embedded computing that rivals the performance attainable with FPGA-based reconfigurable computing architectures. Furthermore, these processors offer more straightforward and streamlined application development by allowing the use of conventional programming languages and software tools in lieu of hardware design languages such as VHDL and Verilog. With these advantages, multicore processors can significantly enhance the capabilities of future robotic space missions. This paper will discuss these benefits, along with onboard processing applications where multicore processing can offer advantages over existing or competing approaches. This paper will also discuss the key architectural features of current commercial multicore processors. In comparison to the current art, the features and advancements necessary for spaceflight multicore processors will be identified. These include power reduction, radiation hardening, inherent fault tolerance, and support for common spacecraft bus interfaces. Lastly, this paper will explore how multicore processors might evolve with advances in electronics technology and how avionics architectures might evolve once multicore processors are inserted into NASA robotic spacecraft.

  17. Application-driven strategies for efficient transfer of medical images over very high speed networks

    NASA Astrophysics Data System (ADS)

    Alsafadi, Yasser H.; McNeill, Kevin M.; Martinez, Ralph

    1993-09-01

    The American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) in 1982 formed the ACR-NEMA committee to develop a standard to enable equipment from different vendors to communicate and participate in a picture archiving and communications system (PACS). The standard focused mostly on interconnectivity issues and communication needs of PACS. It was patterned after the International Standards Organization Open Systems Interconnection (ISO/OSI) reference model. Three versions of the standard appeared, evolving from a simple point-to-point specification of the connection between two medical devices to a complex standard for a network environment. However, fast changes in network software and hardware technologies make it difficult for the standard to keep pace. This paper compares two versions of the ACR-NEMA standard and then describes a system that is used at the University of Arizona Intensive Care Unit. In this system, the application specifies the interface to network services and the grade of service required. These provisions are suggested to make the application independent of evolving network technology and support true open systems.

  18. Raytheon's next generation compact inline cryocooler architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaefer, B. R.; Bellis, L.; Ellis, M. J.

    2014-01-29

    Since the 1970s, Raytheon has developed, built, tested and integrated high performance cryocoolers. Our versatile designs for single and multi-stage cryocoolers provide reliable operation for temperatures from 10 to 200 Kelvin with power levels ranging from 50 W to nearly 600 W. These advanced cryocoolers incorporate clearance seals, flexure suspensions, hermetic housings and dynamic balancing to provide long service life and reliable operation in all relevant environments. Today, sensors face a multitude of cryocooler integration challenges such as exported disturbance, efficiency, scalability, maturity, and cost. As a result, cryocooler selection is application dependent, oftentimes requiring extensive trade studies to determine the most suitable architecture. To optimally meet the needs of next generation passive IR sensors, the Compact Inline Raytheon Stirling 1-Stage (CI-RS1), Compact Inline Raytheon Single Stage Pulse Tube (CI-RP1) and Compact Inline Raytheon Hybrid Stirling/Pulse Tube 2-Stage (CI-RSP2) cryocoolers are being developed to satisfy this suite of requirements. This lightweight, compact, efficient, low vibration cryocooler combines proven 1-stage (RS1 or RP1) and 2-stage (RSP2) cold-head architectures with an inventive set of warm-end mechanisms into a single cooler module, allowing the moving mechanisms for the compressor and the Stirling displacer to be consolidated onto a common axis and in a common working volume. The CI cryocooler is a significant departure from the current Stirling cryocoolers in which the compressor mechanisms are remote from the Stirling displacer mechanism. Placing all of the mechanisms in a single volume and on a single axis provides benefits in terms of package size (30% reduction), mass (30% reduction), thermodynamic efficiency (>20% improvement) and exported vibration performance (≤25 mN peak in all three orthogonal axes at frequencies from 1 to 500 Hz). The main benefit of axial symmetry is that proven balancing techniques and hardware can be utilized to null all motion along the common axis. Low vibration translates to better sensor performance resulting in simpler, more direct mechanical mounting configurations, eliminating the need for convoluted, expensive, massive, long lead damping hardware.

  19. Raytheon's next generation compact inline cryocooler architecture

    NASA Astrophysics Data System (ADS)

    Schaefer, B. R.; Bellis, L.; Ellis, M. J.; Conrad, T.

    2014-01-01

    Since the 1970s, Raytheon has developed, built, tested and integrated high performance cryocoolers. Our versatile designs for single and multi-stage cryocoolers provide reliable operation for temperatures from 10 to 200 Kelvin with power levels ranging from 50 W to nearly 600 W. These advanced cryocoolers incorporate clearance seals, flexure suspensions, hermetic housings and dynamic balancing to provide long service life and reliable operation in all relevant environments. Today, sensors face a multitude of cryocooler integration challenges such as exported disturbance, efficiency, scalability, maturity, and cost. As a result, cryocooler selection is application dependent, oftentimes requiring extensive trade studies to determine the most suitable architecture. To optimally meet the needs of next generation passive IR sensors, the Compact Inline Raytheon Stirling 1-Stage (CI-RS1), Compact Inline Raytheon Single Stage Pulse Tube (CI-RP1) and Compact Inline Raytheon Hybrid Stirling/Pulse Tube 2-Stage (CI-RSP2) cryocoolers are being developed to satisfy this suite of requirements. This lightweight, compact, efficient, low vibration cryocooler combines proven 1-stage (RS1 or RP1) and 2-stage (RSP2) cold-head architectures with an inventive set of warm-end mechanisms into a single cooler module, allowing the moving mechanisms for the compressor and the Stirling displacer to be consolidated onto a common axis and in a common working volume. The CI cryocooler is a significant departure from the current Stirling cryocoolers in which the compressor mechanisms are remote from the Stirling displacer mechanism. Placing all of the mechanisms in a single volume and on a single axis provides benefits in terms of package size (30% reduction), mass (30% reduction), thermodynamic efficiency (>20% improvement) and exported vibration performance (≤25 mN peak in all three orthogonal axes at frequencies from 1 to 500 Hz). The main benefit of axial symmetry is that proven balancing techniques and hardware can be utilized to null all motion along the common axis. Low vibration translates to better sensor performance resulting in simpler, more direct mechanical mounting configurations, eliminating the need for convoluted, expensive, massive, long lead damping hardware.

  20. Review of hardware-in-the-loop simulation and its prospects in the automotive area

    NASA Astrophysics Data System (ADS)

    Fathy, Hosam K.; Filipi, Zoran S.; Hagena, Jonathan; Stein, Jeffrey L.

    2006-05-01

    Hardware-in-the-loop (HIL) simulation is rapidly evolving from a control prototyping tool to a system modeling, simulation, and synthesis paradigm synergistically combining many advantages of both physical and virtual prototyping. This paper provides a brief overview of the key enablers and numerous applications of HIL simulation, focusing on its metamorphosis from a control validation tool into a system development paradigm. It then describes a state-of-the-art engine-in-the-loop (EIL) simulation facility that highlights the use of HIL simulation for the system-level experimental evaluation of powertrain interactions and development of strategies for clean and efficient propulsion. The facility comprises a real diesel engine coupled to accurate real-time driver, driveline, and vehicle models through a highly responsive dynamometer. This enables the verification of both performance and fuel economy predictions of different conventional and hybrid powertrains. Furthermore, the facility can both replicate the highly dynamic interactions occurring within a real powertrain and measure their influence on transient emissions and visual signature through state-of-the-art instruments. The viability of this facility for integrated powertrain system development is demonstrated through a case study exploring the development of advanced High Mobility Multipurpose Wheeled Vehicle (HMMWV) powertrains.

  1. Lessons learned about spaceflight and cell biology experiments

    NASA Technical Reports Server (NTRS)

    Hughes-Fulford, Millie

    2004-01-01

    Conducting cell biology experiments in microgravity can be among the most technically challenging events in a biologist's life. Conflicting events of spaceflight include waiting to get manifested, delays in manifest schedules, training astronauts to not shake your cultures and to add reagents slowly, as shaking or quick injection can activate signaling cascades and give you erroneous results. It is important to select good hardware that is reliable. Possible conflicting environments in flight include g-force and vibration of launch, exposure of cells to microgravity for extended periods until hardware is turned on, changes in cabin gases and cosmic radiation. One should have an on-board 1-g control centrifuge in order to eliminate environmental differences. Other obstacles include getting your funding in a timely manner (it is not uncommon for two to three years to pass between notification of grant approval for funding and actually getting funded). That said, it is important to note that microgravity research is worthwhile since all terrestrial life evolved in a gravity field and secrets of biological function may only be answered by removing the constant of gravity. Finally, spaceflight experiments are rewarding and worth your effort and patience.

  2. Open control/display system for a telerobotics work station

    NASA Technical Reports Server (NTRS)

    Keslowitz, Saul

    1987-01-01

    A working Advanced Space Cockpit was developed that integrated advanced control and display devices into a state-of-the-art multimicroprocessor hardware configuration, using window graphics and running under an object-oriented, multitasking real-time operating system environment. This Open Control/Display System supports the idea that the operator should be able to interactively monitor, select, control, and display information about many payloads aboard the Space Station using sets of I/O devices with a single, software-reconfigurable workstation. This is done while maintaining system consistency, yet the system is completely open to accept new additions and advances in hardware and software. The Advanced Space Cockpit, linked to Grumman's Hybrid Computing Facility and Large Amplitude Space Simulator (LASS), was used to test the Open Control/Display System via full-scale simulation of the following tasks: telerobotic truss assembly, RCS and thermal bus servicing, CMG changeout, RMS constrained motion and space constructible radiator assembly, HPA coordinated control, and OMV docking and tumbling satellite retrieval. The proposed man-machine interface standard discussed has evolved through many iterations of the tasks, and is based on feedback from NASA and Air Force personnel who performed those tasks in the LASS.

  3. Architecture for hospital information integration

    NASA Astrophysics Data System (ADS)

    Chimiak, William J.; Janariz, Daniel L.; Martinez, Ralph

    1999-07-01

    The integration of hospital information systems (HIS) is ongoing. Data storage systems, data networks, and computers improve; databases grow; and health-care applications increase. Some computer operating systems continue to evolve and some fade. Health care delivery now depends on this computer-assisted environment. As a result, the critical harmonization of the various hospital information systems becomes increasingly difficult. The purpose of this paper is to present an architecture for HIS integration that is computer-language-neutral and computer-hardware-neutral for the informatics applications. The proposed architecture builds upon the work done at the University of Arizona on middleware, the work of the National Electrical Manufacturers Association, and the American College of Radiology. It is a fresh approach that allows applications engineers to access medical data easily and thus concentrate on the application techniques in which they are expert, without struggling with medical information syntaxes. The HIS can be modeled using a hierarchy of information sub-systems, thus facilitating its understanding. The architecture includes the resulting information model along with a strict but intuitive application programming interface, managed by CORBA. The CORBA requirement facilitates interoperability. It should also reduce software and hardware development times.

  4. Finding a roadmap to achieve large neuromorphic hardware systems

    PubMed Central

    Hasler, Jennifer; Marr, Bo

    2013-01-01

    Neuromorphic systems are gaining increasing importance in an era where CMOS digital computing techniques are reaching physical limits. These silicon systems mimic extremely energy efficient neural computing structures, potentially both for solving engineering applications as well as understanding neural computation. Toward this end, the authors provide a glimpse at what the technology evolution roadmap looks like for these systems so that Neuromorphic engineers may gain the same benefit of anticipation and foresight that IC designers gained from Moore's law many years ago. Scaling of energy efficiency, performance, and size will be discussed as well as how the implementation and application space of Neuromorphic systems are expected to evolve over time. PMID:24058330

  5. The origins of informatics.

    PubMed Central

    Collen, M F

    1994-01-01

    This article summarizes the origins of informatics, which is based on the science, engineering, and technology of computer hardware, software, and communications. In just four decades, from the 1950s to the 1990s, computer technology has progressed from slow, first-generation vacuum tubes, through the invention of the transistor and its incorporation into microprocessor chips, and ultimately, to fast, fourth-generation very-large-scale-integrated silicon chips. Programming has undergone a parallel transformation, from cumbersome, first-generation, machine languages to efficient, fourth-generation application-oriented languages. Communication has evolved from simple copper wires to complex fiberoptic cables in computer-linked networks. The digital computer has profound implications for the development and practice of clinical medicine. PMID:7719803

  6. Development of the Ultra-Light Stretched Lens Array

    NASA Technical Reports Server (NTRS)

    O'Neill, M. J.; McDanal, A. J.; George, P. J.; Piszczor, M. F.; Edwards, D. L.; Botke, M. M.; Jaster, P. A.; Brandhorst, H. W.; Eskenazi, M.I.; Munafo, Paul M. (Technical Monitor)

    2002-01-01

    At the last IEEE (Institute of Electrical and Electronics Engineers) PVSC (Photovoltaic Specialists Conference), the new stretched lens array (SLA) concept was introduced. Since that conference, the SLA team has made significant advances in the SLA technology, including component level improvements, array level optimization, space environment exposure testing, and prototype hardware fabrication and evaluation. This paper will describe the evolved version of the SLA, highlighting the improvements in the lens, solar cell, rigid panel structure, and complete solar array wing. The near term SLA will provide outstanding wing level performance: greater than 180 W/kg specific power, greater than 300 W/sq m power density, greater than 300 V operational voltage, and excellent durability in the space environment.

  7. Control - Demands mushroom as station grows

    NASA Technical Reports Server (NTRS)

    Szirmay, S. Z.; Blair, J.

    1983-01-01

    The NASA space station, which is presently in the planning stage, is to be composed of both rigid and nonrigid modules, rotating elements, and flexible appendages subjected to environmental disturbances from the earth's atmosphere, gravity gradient, and magnetic field, as well as solar radiation and self-generated disturbances. Control functions, which will originally include attitude control, docking and berthing control, and system monitoring and management, will with evolving mission objectives come to encompass such control functions as articulation control, autonomous navigation, space traffic control, and large space structure control. Attention is given to the advancements in modular, distributed, and adaptive control methods, as well as system identification and hardware fault tolerance techniques, which will be required.

  8. I-deas TMG to NX Space Systems Thermal Model Conversion and Computational Performance Comparison

    NASA Technical Reports Server (NTRS)

    Somawardhana, Ruwan

    2011-01-01

    CAD/CAE packages change on a continuous basis as the power of the tools increases to meet demands. End-users must adapt to new products as they come to market and replace legacy packages. CAE modeling has continued to evolve and is constantly becoming more detailed and complex, though this comes at the cost of increased computing requirements. Parallel processing coupled with appropriate hardware can minimize computation time. Users of Maya Thermal Model Generator (TMG) are faced with transitioning from NX I-deas to NX Space Systems Thermal (SST). It is important to understand what differences there are when changing software packages; we are looking for consistency in results.

  9. Engineering hurdles in contact and intraocular lens lathe design: the view ahead

    NASA Astrophysics Data System (ADS)

    Bradley, Norman D.; Keller, John R.; Ball, Gary A.

    1994-05-01

    Current trends in contact and intraocular lens design suggest ever-increasing demand for aspheric lens geometries - multisurface and/or toric surfaces - in a variety of new materials. As computer numeric control (CNC) lathes and mills continue to evolve with the ophthalmic market, engineering hurdles present themselves to designers: Can hardware based upon single-point diamond turning accommodate the demands of software-driven designs? What are the limits of CNC resolution and repeatability in high-throughput production? What are the controlling factors in lathed, polish-free surface production? Emerging technologies in the lathed biomedical optics field are discussed along with their limitations, including refined diamond tooling, vibrational control, automation, and advanced motion control systems.

  10. General Principles for Brain Design

    NASA Astrophysics Data System (ADS)

    Josephson, Brian D.

    2006-06-01

    The task of understanding how the brain works has met with only limited success since important design concepts are not as yet incorporated in the analysis. Relevant concepts can be uncovered by studying the powerful methodologies that have evolved in the context of computer programming, raising the question of how the concepts involved there can be realised in neural hardware. Insights can be gained in regard to such issues through the study of the role played by models and representation. These insights lead on to an appreciation of the mechanisms underlying subtle capacities such as those concerned with the use of language. A precise, essentially mathematical account of such capacities is in prospect for the future.

  11. Experience with custom processors in space flight applications

    NASA Technical Reports Server (NTRS)

    Fraeman, M. E.; Hayes, J. R.; Lohr, D. A.; Ballard, B. W.; Williams, R. L.; Henshaw, R. M.

    1991-01-01

    The Applied Physics Laboratory (APL) has developed a magnetometer instrument for a Swedish satellite named Freja, with launch scheduled for August 1992 on a Chinese Long March rocket. The magnetometer controller utilized a custom microprocessor designed at APL with the Genesil silicon compiler. The processor evolved from our experience with an older bit-slice design and two prior single-chip efforts. The architecture of our microprocessor greatly lowered software development costs because it was optimized to provide an interactive and extensible programming environment hosted by the target hardware. Radiation tolerance of the microprocessor was also tested and was adequate for Freja's mission: 20 kRad(Si) total dose and very infrequent latch-up and single-event upset events.

  12. Interpreting forest and grassland biome productivity utilizing nested scales of image resolution and biogeographical analysis

    NASA Technical Reports Server (NTRS)

    Iverson, L. R.; Cook, E. A.; Graham, R. L.; Olson, J. S.; Frank, T.; Ke, Y.; Treworgy, C.; Risser, P. G.

    1986-01-01

    Several hardware, software, and data collection problems encountered were overcome. The Geographic Information System (GIS) data from other systems were converted to ERDAS format for incorporation with the image data. Statistical analysis of the relationship between spectral values and productivity is being pursued. Several project sites, including Jackson, Pope, Boulder, Smokies, and Huntington Forest, are evolving as the most intensively studied areas, primarily due to availability of data and time. Progress with data acquisition and quality checking, more details on experimental sites, and brief summarizations of research results and future plans are discussed. Material on personnel, collaborators, facilities, site background, and meetings and publications of the investigators is included.

  13. ISS ECLSS Technology Evolution for Exploration

    NASA Technical Reports Server (NTRS)

    Carrasquillo, Robyn

    2005-01-01

    The baseline environmental control and life support systems (ECLSS) currently deployed on the International Space Station (ISS), including the regenerative oxygen generation and water recovery systems, were designed in the early 1990s. While they are generally meeting or exceeding requirements for supporting the ISS crew, lessons learned from hardware development and on-orbit experience, together with advances in the technology state of the art and the unique requirements for future manned exploration missions, prompt consideration of the next steps to be taken to evolve these technologies to improve robustness and reliability, enhance performance, and reduce resource requirements such as power and logistics upmass. This paper discusses the current state of ISS ECLSS technology and identifies possible areas for evolutionary enhancement or improvement.

  14. Interstellar Grains: 50 Years on

    NASA Astrophysics Data System (ADS)

    Wickramasinghe, N. C.

    Our understanding of the nature of interstellar grains has evolved considerably over the past half century with the present author and Fred Hoyle being intimately involved at several key stages of progress. The currently fashionable graphite-silicate-organic grain model has all its essential aspects unequivocally traceable to original peer-reviewed publications by the author and/or Fred Hoyle. The prevailing reluctance to accept these clear-cut priorities may be linked to our further work that argued for interstellar grains and organics to have a biological provenance -- a position perceived as heretical. The biological model, however, continues to provide a powerful unifying hypothesis for a vast amount of otherwise disconnected and disparate astronomical data.

  15. Brain-computer interface after nervous system injury.

    PubMed

    Burns, Alexis; Adeli, Hojjat; Buford, John A

    2014-12-01

    Brain-computer interface (BCI) has proven to be a useful tool for providing alternative communication and mobility to patients suffering from nervous system injury. BCI has been and will continue to be implemented into rehabilitation practices for more interactive and speedy neurological recovery. The most exciting BCI technology is evolving to provide therapeutic benefits by inducing cortical reorganization via neuronal plasticity. This article presents a state-of-the-art review of BCI technology used after nervous system injuries, specifically: amyotrophic lateral sclerosis, Parkinson's disease, spinal cord injury, stroke, and disorders of consciousness. Also presented is transcending, innovative research involving new treatment of neurological disorders. © The Author(s) 2014.

  16. Using dogs for tiger conservation and research.

    PubMed

    Kerley, Linda L

    2010-12-01

    This paper is a review of the history, development, and efficacy of using dogs in wildlife studies and considers the use of dogs in the research and conservation of wild tigers (Panthera tigris Linnaeus, 1758). Scat detection dogs, scent-matching dogs, law enforcement detection dogs, and protection dogs are proven methods that can be effectively used on tigers. These methods all take advantage of the dog's highly evolved sense of smell, which allows them to detect animals or animal byproducts (often the focus of tiger studies). Dogs can be trained to communicate this information to their handlers. © 2010 ISZS, Blackwell Publishing and IOZ/CAS.

  17. Applied evolutionary theories for engineering of secondary metabolic pathways.

    PubMed

    Bachmann, Brian O

    2016-12-01

    An expanded definition of 'secondary metabolism' is emerging. Once the exclusive provenance of naturally occurring organisms, evolved over geological time scales, secondary metabolism increasingly encompasses molecules generated via human engineered biocatalysts and biosynthetic pathways. Many of the tools and strategies for enzyme and pathway engineering can find origins in evolutionary theories. This perspective presents an overview of selected proposed evolutionary strategies in the context of engineering secondary metabolism. In addition to the wealth of biocatalysts provided via secondary metabolic pathways, improving the understanding of biosynthetic pathway evolution will provide rich resources for methods to adapt to applied laboratory evolution. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Demonstration of a Pyrotechnic Bolt-Retractor System

    NASA Technical Reports Server (NTRS)

    Johnston, Nick; Ahmed, Rafiq; Garrison, Craig; Gaines, Joseph; Waggoner, Jason

    2004-01-01

    A paper describes a demonstration of the X-38 bolt-retractor system (BRS) on a spacecraft-simulating apparatus, called the Large Mobility Base, in NASA's Flight Robotics Laboratory (FRL). The BRS design was proven safe by testing in NASA's Pyrotechnic Shock Facility (PSF) before being demonstrated in the FRL. The paper describes the BRS, FRL, PSF, and interface hardware. Information on the bolt-retraction time and spacecraft-simulator acceleration, and an analysis of forces, are presented. The purpose of the demonstration was to show the capability of the FRL for testing of the use of pyrotechnics to separate stages of a spacecraft. Although a formal test was not performed because of schedule and budget constraints, the data in the report show that the BRS is a successful design concept and the FRL is suitable for future separation tests.

  19. Antenna Technology Shuttle Experiment (ATSE)

    NASA Technical Reports Server (NTRS)

    Freeland, R. E.; Mettler, E.; Miller, L. J.; Rahmet-Samii, Y.; Weber, W. J., III

    1987-01-01

    Numerous space applications of the future will require mesh deployable antennas of 15 m in diameter or greater for frequencies up to 20 GHz. These applications include mobile communications satellites, orbiting very long baseline interferometry (VLBI) astrophysics missions, and Earth remote sensing missions. A Lockheed wrap-rib antenna was used as the test article. The experiments covered a broad range of structural, control, and RF discipline objectives which, if fulfilled in total, would greatly reduce the risk of employing these antenna systems in future space applications. It was concluded that a flight experiment of a relatively large mesh deployable reflector is achievable with no major technological or cost drivers. The test articles and the instrumentation are all within the state of the art and in most cases rely on proven flight hardware. Every effort was made to design the experiments for low cost.

  20. Update on CMH-17 Volume 5: Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    David, Kaia; Pierce, Jennifer; Kiser, James; Keith, William P.; Wilson, Gregory S.

    2015-01-01

    CMC components are projected to enter service in commercial aircraft in 2016. A wide range of issues must be addressed prior to certification of this hardware. The Composite Materials Handbook-17, Volume 5 on ceramic matrix composites is being revised to support FAA certification of CMCs for hot structure and other elevated temperature applications. The handbook supports the development and use of CMCs through publishing and maintaining proven, reliable engineering information and standards that have been thoroughly reviewed. Volume 5 will contain detailed sections describing CMC materials processing, design analysis guidelines, testing procedures, and data analysis and acceptance. A review of the status of and plans for two of these areas, which are being addressed by the M and P Working Group and the Testing Working Group, will be presented along with a timeline for the preparation of CMH-17, Volume 5.

  1. Temperature-time issues in bioburden control for planetary protection

    NASA Astrophysics Data System (ADS)

    Clark, Benton C.

    2004-01-01

    Heat energy, administered in the form of an elevated temperature heat soak over a specific interval of time, is a well-known method for inactivating organisms. Sterilization protocols, from commercial pasteurization to laboratory autoclaving, specify both temperature and time, as well as water activity, for treatments to achieve either acceptable reduction of bioburden or complete sterilization. In practical applications of planetary protection, whether to reduce spore load in forward or roundtrip contamination, or to exterminate putative organisms in returned samples from bodies suspected of possible life, avoidance of expensive or potentially damaging treatments of hardware (or samples) could be accomplished if reciprocal relationships between time duration and soak temperature could be established. Conservative rules can be developed from consideration of empirical test data, derived relationships, current standards and various theoretical or proven mechanisms for thermal damage to biological systems.
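
    The reciprocal time-temperature trade described here is conventionally captured with D-values and z-values from thermal microbiology: raising the soak temperature by one z-value cuts the required time tenfold. As a rough Python illustration only - the constants D_ref, z, and T_ref below are placeholders, not planetary protection specification values:

        def equivalent_time(t_minutes, T, T_ref=110.0, z=21.0):
            """Minutes at T_ref biologically equivalent to t_minutes at temperature T."""
            return t_minutes * 10.0 ** ((T - T_ref) / z)

        def log_reduction(t_minutes, T, D_ref=30.0, T_ref=110.0, z=21.0):
            """Decimal (factor-of-10) reductions in viable spore count for a soak at T."""
            return equivalent_time(t_minutes, T, T_ref, z) / D_ref

        # The reciprocal trade: a soak hotter by one z-value needs ten times less time.
        print(log_reduction(300.0, 110.0))  # 10.0 log reductions
        print(log_reduction(30.0, 131.0))   # also 10.0 log reductions

    Conservative rules of the kind the author proposes would pick D and z from worst-case organism and substrate data before applying such a trade.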

  2. Hardware-in-the-Loop emulator for a hydrokinetic turbine

    NASA Astrophysics Data System (ADS)

    Rat, C. L.; Prostean, O.; Filip, I.

    2018-01-01

    Hydroelectric power has proven to be an efficient and reliable form of renewable energy, but its impact on the environment has long been a source of concern. Hydrokinetic turbines are an emerging class of renewable energy technology designed for deployment in small rivers and streams with minimal environmental impact on the local ecosystem. Hydrokinetic technology represents a truly clean source of energy, having the potential to become a highly efficient method of harvesting renewable energy. However, in order to achieve this goal, extensive research is necessary. This paper presents a Hardware-in-the-Loop (HIL) emulator for a run-of-the-river type hydrokinetic turbine. The HIL system uses an ABB ACS800 drive to control an induction machine as the means of replicating the behavior of the real turbine. The induction machine is coupled to a permanent magnet synchronous generator and the corresponding load. The ACS800 drive is controlled through the software system, which comprises the real-time simulation of the hydrokinetic turbine through mathematical modeling in the LabVIEW programming environment running on an NI CompactRIO (cRIO) platform. The advantage of this method is that it provides a means for testing many control configurations without requiring the presence of the real turbine. This paper contains the basic principles of a hydrokinetic turbine, particularly the run-of-the-river configuration, along with the experimental results obtained from the HIL system.
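
    To make the emulator concept concrete (this is a sketch, not the authors' LabVIEW/cRIO implementation), the following Python fragment shows the kind of rotor torque model an HIL loop evaluates each tick before commanding the drive; all parameter values and the Cp curve are invented for illustration:

        import math

        RHO = 1000.0      # water density, kg/m^3
        RADIUS = 0.5      # rotor radius, m (illustrative)
        AREA = math.pi * RADIUS ** 2

        def power_coefficient(tsr):
            """Toy Cp curve peaking near a tip-speed ratio of ~2 (illustrative)."""
            return max(0.0, 0.35 - 0.1 * (tsr - 2.0) ** 2)

        def rotor_torque(water_speed, rotor_speed):
            """Hydrodynamic torque from P = 0.5*rho*A*Cp*v^3 and T = P/omega."""
            if rotor_speed <= 0.0:
                rotor_speed = 1e-3                     # avoid division by zero at standstill
            tsr = rotor_speed * RADIUS / water_speed   # tip-speed ratio
            power = 0.5 * RHO * AREA * power_coefficient(tsr) * water_speed ** 3
            return power / rotor_speed

        # In the HIL loop this torque would be written to the drive each tick, e.g.
        # torque_ref = rotor_torque(v_measured_river, omega_measured_shaft)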

  3. High Resolution Imaging with MUSTANG-2 on the GBT

    NASA Astrophysics Data System (ADS)

    Stanchfield, Sara; Ade, Peter; Aguirre, James; Brevik, Justus A.; Cho, Hsiao-Mei; Datta, Rahul; Devlin, Mark; Dicker, Simon R.; Dober, Bradley; Duff, Shannon M.; Egan, Dennis; Ford, Pam; Hilton, Gene; Hubmayr, Johannes; Irwin, Kent; Knowles, Kenda; Marganian, Paul; Mason, Brian Scott; Mates, John A. B.; McMahon, Jeff; Mello, Melinda; Mroczkowski, Tony; Romero, Charles; Sievers, Jonathon; Tucker, Carole; Vale, Leila R.; Vissers, Michael; White, Steven; Whitehead, Mark; Ullom, Joel; Young, Alexander

    2018-01-01

    We present early science results from MUSTANG-2, a 90 GHz feedhorn-coupled, microwave SQUID-multiplexed TES bolometer array operating on the Robert C. Byrd Green Bank Telescope (GBT). The feedhorn and waveguide-probe-coupled detector technology is a mature technology, which has been used on instruments such as the South Pole Telescope, the Atacama Cosmology Telescope, and the Atacama B-mode Search telescope. The microwave SQUID multiplexer-based readout system developed for MUSTANG-2 currently reads out 66 detectors with a single coaxial cable and will eventually allow thousands of detectors to be multiplexed. This microwave SQUID multiplexer combines the proven abilities of millimeter wave TES detectors with the multiplexing capabilities of KIDs with no degradation in noise performance of the detectors. Each multiplexing device is read out using warm electronics consisting of a commercially available ROACH board, a DAC/ADC card, and an Intermediate Frequency mixer circuit. The hardware was originally developed by the Collaboration for Astronomy Signal Processing and Electronic Research (CASPER) group, whose primary goal is to develop scalable FPGA-based hardware with the flexibility to be used in a wide range of radio signal processing applications. MUSTANG-2 is the first on-sky instrument to use microwave SQUID multiplexing and is available as a shared-risk/PI instrument on the GBT. In MUSTANG-2’s first season 7 separate proposals were awarded a total of 230 hours of telescope time.

  4. Born semantic: linking data from sensors to users and balancing hardware limitations with data standards

    NASA Astrophysics Data System (ADS)

    Buck, Justin; Leadbetter, Adam

    2015-04-01

    New users of the growing volume of ocean data, for purposes such as 'big data' data products and operational data assimilation/ingestion, require data to be readily ingestible. This can be achieved via the application of World Wide Web Consortium (W3C) Linked Data and Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) standards to data management. As part of several Horizon 2020 European projects (SenseOCEAN, ODIP, AtlantOS), the British Oceanographic Data Centre (BODC) is working on combining existing data centre architecture and SWE software, such as Sensor Observation Services, with a Linked Data front end. The standards to enable data delivery are proven and well documented [1, 2]. There are practical difficulties when SWE standards are applied to real-time data because of internal hardware bandwidth restrictions and a requirement to constrain data transmission costs. A pragmatic approach is proposed where sensor metadata and data output in OGC standards are implemented "shore-side", with sensors and instruments transmitting unique resolvable web linkages to persistent OGC SensorML records published at the BODC. References: 1. World Wide Web Consortium. (2013). Linked Data. Available: http://www.w3.org/standards/semanticweb/data. Last accessed 8th October 2014. 2. Open Geospatial Consortium. (2014). Sensor Web Enablement (SWE). Available: http://www.opengeospatial.org/ogc/markets-technologies/swe. Last accessed 8th October 2014.
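
    The "born semantic" pattern can be illustrated with a minimal sketch: the instrument transmits only a compact resolvable identifier, and the consumer dereferences it shore-side to obtain the persistent SensorML record. The endpoint URL below is a hypothetical placeholder, not a real BODC address:

        import urllib.request

        def resolve_sensor_metadata(sensor_uri):
            """Dereference the URI the sensor transmitted; expect SensorML (XML)."""
            request = urllib.request.Request(sensor_uri,
                                             headers={"Accept": "application/xml"})
            with urllib.request.urlopen(request) as response:
                return response.read().decode("utf-8")

        # A data message might then carry just a short link, e.g.
        # sensorml_xml = resolve_sensor_metadata("http://example.org/sensors/ctd-001")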

  5. Robotic and Human-Tended Collaborative Drilling Automation for Subsurface Exploration

    NASA Technical Reports Server (NTRS)

    Glass, Brian; Cannon, Howard; Stoker, Carol; Davis, Kiel

    2005-01-01

    Future in-situ lunar/martian resource utilization and characterization, as well as the scientific search for life on Mars, will require access to the subsurface and hence drilling. Drilling on Earth is hard - an art form more than an engineering discipline. Human operators listen and feel drill string vibrations coming from kilometers underground. Abundant mass and energy make it possible for terrestrial drilling to employ brute-force approaches to failure recovery and system performance issues. Space drilling will require intelligent and autonomous systems for robotic exploration and to support human exploration. Eventual in-situ resource utilization will require deep drilling with probable human-tended operation of large-bore drills, but initial lunar subsurface exploration and near-term ISRU will be accomplished with lightweight, rover-deployable or standalone drills capable of penetrating a few tens of meters in depth. These lightweight exploration drills have a direct counterpart in terrestrial prospecting and ore-body location, and will be designed to operate either human-tended or automated. NASA and industry now are acquiring experience in developing and building low-mass automated planetary prototype drills to design and build a pre-flight lunar prototype targeted for 2011-12 flight opportunities. A successful system will include development of drilling hardware and automated control software to operate it safely and effectively. This includes control of the drilling hardware, state estimation of both the hardware and the lithology being drilled and the state of the hole, and potentially planning and scheduling software suitable for uncertain situations such as drilling. Given that humans on the Moon or Mars are unlikely to be able to spend protracted EVA periods at a drill site, both human-tended and robotic access to planetary subsurfaces will require some degree of standalone, autonomous drilling capability. Human-robotic coordination will be important, either between a robotic drill and humans on Earth, or a human-tended drill and its visiting crew. The Mars Analog Rio Tinto Experiment (MARTE) is a current project that studies and simulates the remote science operations between an automated drill in Spain and a distant, distributed human science team. The Drilling Automation for Mars Exploration (DAME) project, by contrast, is developing and testing standalone automation at a lunar/martian impact crater analog site in Arctic Canada. The drill hardware in both projects is a hardened, evolved version of the Advanced Deep Drill (ADD) developed by Honeybee Robotics for the Mars Subsurface Program. The current ADD is capable of 20 m, and the DAME project is developing diagnostic and executive software for hands-off surface operations of the evolved version of this drill. The current drill automation architecture being developed by NASA and tested in 2004-06 at analog sites in the Arctic and Spain will add downhole diagnosis of different strata, bit wear detection, and dynamic replanning capabilities when unexpected failures or drilling conditions are discovered, in conjunction with simulated mission operations and remote science planning. The most important determinant of future lunar and martian drilling automation and staffing requirements will be the actual performance of automated prototype drilling hardware systems in field trials in simulated mission operations.
It is difficult to accurately predict the level of automation and human interaction that will be needed for a lunar-deployed drill without first having extensive experience with the robotic control of prototype drill systems under realistic analog field conditions. Drill-specific failure modes and software design flaws will become most apparent at this stage. DAME will develop and test drill automation software and hardware under stressful operating conditions during several planned field campaigns. Initial results from summer 2004 tests show seven identified distinct failure modes of the drill, cuttings-removal issues with low-power drilling into permafrost, and successful steps at executive control and initial automation.

  6. Space Launch System Spacecraft and Payload Elements: Making Progress Toward First Launch

    NASA Technical Reports Server (NTRS)

    Schorr, Andrew A.; Creech, Stephen D.; Ogles, Michael; Hitt, David

    2016-01-01

    Significant and substantial progress continues to be accomplished in the design, development, and testing of the Space Launch System (SLS), the most powerful human-rated launch vehicle the United States has ever undertaken. Designed to support human missions into deep space, SLS is one of three programs being managed by the National Aeronautics and Space Administration's (NASA's) Exploration Systems Development directorate. The Orion spacecraft program is developing a new crew vehicle that will support human missions beyond low Earth orbit, and the Ground Systems Development and Operations (GSDO) program is transforming Kennedy Space Center (KSC) into a next-generation spaceport capable of supporting not only SLS but also multiple commercial users. Together, these systems will support human exploration missions into the proving ground of cislunar space and ultimately to Mars. SLS will deliver a near-term heavy-lift capability for the nation with its 70-metric-ton Block 1 configuration, and will then evolve to an ultimate capability of 130 metric tons. The SLS program marked a major milestone with the successful completion of the Critical Design Review, in which detailed designs were reviewed and subsequently approved for proceeding with full-scale production. This marks the first time an exploration-class vehicle has passed that major milestone since the Saturn V vehicle launched astronauts in the 1960s during the Apollo program. Each element of the vehicle now has flight hardware in production in support of the initial flight of the SLS - Exploration Mission-1 (EM-1), an uncrewed mission to orbit the Moon and return - and progress is on track to meet the initial targeted launch date in 2018. In Utah and Mississippi, booster and engine testing are verifying upgrades made to proven shuttle hardware. At Michoud Assembly Facility (MAF) in Louisiana, the world's largest spacecraft welding tool is producing tanks for the SLS core stage. This paper will particularly focus on work taking place at Marshall Space Flight Center (MSFC) and United Launch Alliance (ULA) in Alabama, where upper stage and adapter elements of the vehicle are being constructed and tested. Providing the Orion crew capsule/launch vehicle interface and in-space propulsion via a cryogenic upper stage, the Spacecraft/Payload Integration and Evolution (SPIE) Element serves a key role in achieving SLS goals and objectives. The SPIE element marked a major milestone in 2014 with the first flight of original SLS hardware, the Orion Stage Adapter (OSA), which was used on Exploration Flight Test-1 with a design that will be used again on EM-1. Construction is already underway on the EM-1 Interim Cryogenic Propulsion Stage (ICPS), an in-space stage derived from the Delta Cryogenic Second Stage. Manufacture of the Orion Stage Adapter and the Launch Vehicle Stage Adapter is set to begin at the Friction Stir Facility located at MSFC, while structural test articles are either completed (OSA) or nearing completion (Launch Vehicle Stage Adapter). An overview is provided of the launch vehicle capabilities, with a specific focus on SPIE Element qualification/testing progress, as well as efforts to provide access to deep space regions currently not available to the science community through a secondary payload capability utilizing CubeSat-class satellites.

  7. Evolving self-assembly in autonomous homogeneous robots: experiments with two physical robots.

    PubMed

    Ampatzis, Christos; Tuci, Elio; Trianni, Vito; Christensen, Anders Lyhne; Dorigo, Marco

    2009-01-01

    This research work illustrates an approach to the design of controllers for self-assembling robots in which the self-assembly is initiated and regulated by perceptual cues that are brought forth by the physical robots through their dynamical interactions. More specifically, we present a homogeneous control system that can achieve assembly between two modules (two fully autonomous robots) of a mobile self-reconfigurable system without a priori introduced behavioral or morphological heterogeneities. The controllers are dynamic neural networks evolved in simulation that directly control all the actuators of the two robots. The neurocontrollers cause the dynamic specialization of the robots by allocating roles between them based solely on their interaction. We show that the best evolved controller proves to be successful when tested on a real hardware platform, the swarm-bot. The performance achieved is similar to the one achieved by existing modular or behavior-based approaches, also due to the effect of an emergent recovery mechanism that was neither explicitly rewarded by the fitness function, nor observed during the evolutionary simulation. Our results suggest that direct access to the orientations or intentions of the other agents is not a necessary condition for robot coordination: Our robots coordinate without direct or explicit communication, contrary to what is assumed by most research works in collective robotics. This work also contributes to strengthening the evidence that evolutionary robotics is a design methodology that can tackle real-world tasks demanding fine sensory-motor coordination.
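
    The "dynamic neural networks" referred to are commonly continuous-time recurrent networks whose parameters are set by the evolutionary algorithm. A minimal sketch of such a controller update, assuming forward-Euler integration and randomly initialized (rather than evolved) parameters; sizes and constants are illustrative:

        import numpy as np

        rng = np.random.default_rng(0)
        N = 6                                    # neurons (illustrative)
        W = rng.normal(0.0, 1.0, (N, N))         # recurrent weights (evolved in practice)
        tau = rng.uniform(0.1, 1.0, N)           # neuron time constants
        bias = rng.normal(0.0, 1.0, N)

        def step(y, sensors, dt=0.05):
            """One Euler step of tau_i dy_i/dt = -y_i + sum_j W_ij sigma(y_j + b_j) + I_i."""
            sigma = 1.0 / (1.0 + np.exp(-(y + bias)))
            I = np.zeros(N)
            I[:len(sensors)] = sensors           # sensor readings drive the first neurons
            dy = (-y + W @ sigma + I) / tau
            return y + dt * dy

        y = np.zeros(N)
        for _ in range(100):
            y = step(y, sensors=[0.2, 0.8])
        motors = 1.0 / (1.0 + np.exp(-(y[-2:] + bias[-2:])))  # last neurons drive motors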

  8. Octree-based, GPU implementation of a continuous cellular automaton for the simulation of complex, evolving surfaces

    NASA Astrophysics Data System (ADS)

    Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Gadea, R.; Sato, K.

    2011-03-01

    Presently, dynamic surface-based models are required to contain increasingly larger numbers of points and to propagate them over longer time periods. For large numbers of surface points, the octree data structure can be used as a balance between low memory occupation and relatively rapid access to the stored data. For evolution rules that depend on neighborhood states, extended simulation periods can be obtained by using simplified atomistic propagation models, such as the Cellular Automata (CA). This method, however, has an intrinsic parallel updating nature and the corresponding simulations are highly inefficient when performed on classical Central Processing Units (CPUs), which are designed for the sequential execution of tasks. In this paper, a series of guidelines is presented for the efficient adaptation of octree-based, CA simulations of complex, evolving surfaces into massively parallel computing hardware. A Graphics Processing Unit (GPU) is used as a cost-efficient example of the parallel architectures. For the actual simulations, we consider the surface propagation during anisotropic wet chemical etching of silicon as a computationally challenging process with a wide-spread use in microengineering applications. A continuous CA model that is intrinsically parallel in nature is used for the time evolution. Our study strongly indicates that parallel computations of dynamically evolving surfaces simulated using CA methods are significantly benefited by the incorporation of octrees as support data structures, substantially decreasing the overall computational time and memory usage.
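
    The memory argument for octrees can be made concrete: cells are refined only where the evolving surface passes, so storage scales with surface area rather than with the simulated volume. A minimal Python sketch of that idea, with illustrative structure and names rather than the paper's implementation:

        class OctreeNode:
            def __init__(self, origin, size):
                self.origin = origin          # (x, y, z) corner of the cube
                self.size = size              # edge length of the cube
                self.children = None          # 8 children once subdivided
                self.state = None             # CA state, kept only at leaf cells

            def subdivide(self):
                x, y, z = self.origin
                h = self.size / 2
                self.children = [OctreeNode((x + dx * h, y + dy * h, z + dz * h), h)
                                 for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]

            def insert_surface_cell(self, point, min_size, state):
                """Refine down to min_size only along the surface, then store the state."""
                if self.size <= min_size:
                    self.state = state
                    return
                if self.children is None:
                    self.subdivide()
                h = self.size / 2
                idx = (4 * (point[0] >= self.origin[0] + h)
                       + 2 * (point[1] >= self.origin[1] + h)
                       + (point[2] >= self.origin[2] + h))
                self.children[idx].insert_surface_cell(point, min_size, state)

    On a GPU, the leaf cells gathered from such a tree can be packed into flat arrays so that each thread updates one surface cell per CA step.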

  9. Evolved α-factor prepro-leaders for directed laccase evolution in Saccharomyces cerevisiae.

    PubMed

    Mateljak, Ivan; Tron, Thierry; Alcalde, Miguel

    2017-11-01

    Although the functional expression of fungal laccases in Saccharomyces cerevisiae has proven to be complicated, the replacement of signal peptides appears to be a suitable approach to enhance secretion in directed evolution experiments. In this study, twelve constructs were prepared by fusing native and evolved α-factor prepro-leaders from S. cerevisiae to four different laccases with low-, medium- and high-redox potential (PM1L from basidiomycete PM1; PcL from Pycnoporus cinnabarinus; TspC30L from Trametes sp. strain C30; and MtL from Myceliophthora thermophila). Microcultures of the prepro-leader:laccase fusions were grown in selective expression medium that used galactose as both the sole carbon source and as the inducer of expression so that the secretion and activity were assessed with low- and high-redox potential mediators in a high-throughput screening context. With total activity improvements as high as sevenfold over those obtained with the native α-factor prepro-leader, the evolved prepro-leader from PcL (αPcL) most strongly enhanced secretion of the high- and medium-redox potential laccases PcL, PM1L and TspC30L in the microtiter format, with an expression pattern driven by prepro-leaders in the order αPcL > αPM1L ~ αnative. By contrast, the pattern of the low-redox potential MtL was αnative > αPcL > αPM1L. When produced in flask with rich medium, the evolved prepro-leaders outperformed the αnative signal peptide irrespective of the laccase attached, enhancing secretion over 50-fold. Together, these results highlight the importance of using evolved α-factor prepro-leaders for functional expression of fungal laccases in directed evolution campaigns. © 2017 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.

  10. Computed Tomography of the Head and Neck Region for Tumor Staging-Comparison of Dual-Source, Dual-Energy and Low-Kilovolt, Single-Energy Acquisitions.

    PubMed

    May, Matthias Stefan; Bruegel, Joscha; Brand, Michael; Wiesmueller, Marco; Krauss, Bernhard; Allmendinger, Thomas; Uder, Michael; Wuest, Wolfgang

    2017-09-01

    The aim of this study was to intra-individually compare the image quality obtained by dual-source, dual-energy (DSDE) computed tomography (CT) examinations and different virtual monoenergetic reconstructions to a low single-energy (SE) scan. Third-generation DSDE-CT was performed in 49 patients with histologically proven malignant disease of the head and neck region. Weighted average images (WAIs) and virtual monoenergetic images (VMIs) for low (40 and 60 keV) and high (120 and 190 keV) energies were reconstructed. A second scan aligned to the jaw, covering the oral cavity, was performed for every patient to reduce artifacts caused by dental hardware, using an SE-CT protocol with 70-kV tube voltage and matching radiation dose settings. Objective image quality was evaluated by calculating contrast-to-noise ratios. Subjective image quality was evaluated by experienced radiologists. The highest contrast-to-noise ratios for vessel and tumor attenuation were obtained in 40-keV VMI (all P < 0.05). Comparable objective results were found in 60-keV VMI, WAI, and the 70-kV SE examinations. Overall subjective image quality was also highest for 40-keV VMI, but differences to 60-keV VMI, WAI, and 70-kV SE were nonsignificant (all P > 0.05). High-kiloelectron-volt VMIs reduce metal artifacts, but with only limited diagnostic benefit because they are insufficient in cases of severe dental hardware. CTDIvol did not differ significantly between the two examination protocols (DSDE: 18.6 mGy; 70-kV SE: 19.4 mGy; P = 0.10). High overall image quality for tumor delineation in head and neck imaging was obtained with 40-keV VMI. However, 70-kV SE examinations are an alternative, and modified projections aligned to the jaw are recommended in case of severe artifacts caused by dental hardware.
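
    The objective metric used here, the contrast-to-noise ratio, is straightforward: the absolute difference of mean ROI attenuations divided by the background noise. A minimal sketch with invented HU values, not study data:

        import numpy as np

        def cnr(roi_tumor, roi_background):
            """Contrast-to-noise ratio between a tumor ROI and a background ROI."""
            return abs(roi_tumor.mean() - roi_background.mean()) / roi_background.std(ddof=1)

        tumor = np.array([95.0, 102.0, 99.0, 110.0])       # HU samples in tumor ROI
        background = np.array([60.0, 55.0, 64.0, 58.0])    # HU samples in muscle ROI
        print(round(cnr(tumor, background), 2))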

  11. A New Heavy-Lift Capability for Space Exploration: NASA's Ares V Cargo Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Sumrall, John P.; McArthur, J. Craig

    2007-01-01

    The National Aeronautics and Space Administration (NASA) is developing new launch systems and preparing to retire the Space Shuttle by 2010, as directed in the United States (U.S.) Vision for Space Exploration. The Ares I Crew Launch Vehicle (CLV) and the Ares V heavy-lift Cargo Launch Vehicle (CaLV) systems will build upon proven, reliable hardware derived from the Apollo-Saturn and Space Shuttle programs to deliver safe, reliable, affordable space transportation solutions. This approach leverages existing aerospace talent and a unique infrastructure, as well as legacy knowledge gained from nearly 50 years' experience developing space hardware. Early next decade, the Ares I will launch the new Orion Crew Exploration Vehicle (CEV) to the International Space Station (ISS) or to low-Earth orbit for trips to the Moon and, ultimately, Mars. Late next decade, the Ares V's Earth Departure Stage will carry larger payloads such as the lunar lander into orbit, and the Crew Exploration Vehicle will dock with it for missions to the Moon, where astronauts will explore new territories and conduct science and technology experiments. Both Ares I and Ares V are being designed to support longer future trips to Mars. The Exploration Launch Projects Office is designing, developing, testing, and evaluating both launch vehicle systems in partnership with other NASA Centers, Government agencies, and industry contractors. This paper provides top-level information regarding the genesis and evolution of the baseline configuration for the Ares V heavy-lift system. It also discusses risk-based management strategies, such as building on powerful hardware and promoting common features between the Ares I and Ares V systems to reduce technical, schedule, and cost risks, as well as development and operations costs. Finally, it summarizes several notable accomplishments since October 2005, when the Exploration Launch Projects effort officially kicked off, and looks ahead at work planned for 2007 and beyond.

  12. Dense real-time stereo matching using memory efficient semi-global-matching variant based on FPGAs

    NASA Astrophysics Data System (ADS)

    Buder, Maximilian

    2012-06-01

    This paper presents a stereo image matching system that takes advantage of a global image matching method. The system is designed to provide depth information for mobile robotic applications. Typical tasks of the proposed system are to assist in obstacle avoidance, SLAM, and path planning. Mobile robots pose strong requirements regarding size, energy consumption, reliability, and output quality of the image matching subsystem. Currently available systems either rely on active sensors or on local stereo image matching algorithms. The former are only suitable in controlled environments, while the latter suffer from low-quality depth maps. Top-ranking quality results are only achieved by iterative approaches using global image matching and color segmentation techniques, which are computationally demanding and therefore difficult to execute in real time. Attempts were made to reach real-time performance with global methods by simplifying the routines, but the resulting depth maps are then merely comparable to those of local methods. The Semi-Global-Matching algorithm, proposed earlier, offers both very good image matching results and relatively simple operations. A memory-efficient variant of the Semi-Global-Matching algorithm is reviewed and adapted for an implementation based on reconfigurable hardware. The implementation is suitable for real-time execution in the field of robotics. It will be shown that the modified version of the efficient Semi-Global-Matching method delivers results equivalent to the original algorithm on the Middlebury dataset. The system has proven capable of processing VGA-sized images with a disparity resolution of 64 pixels at 33 frames per second on low-cost to mid-range hardware. If the focus is shifted to a higher image resolution, 1024×1024-sized stereo frames may be processed with the same hardware at 10 fps, with the disparity resolution settings unchanged. A mobile system that covers preprocessing, matching, and interfacing operations is also presented.
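
    The core of Semi-Global-Matching is a per-path dynamic-programming recurrence, L_r(p,d) = C(p,d) + min(L_r(p-r,d), L_r(p-r,d-1)+P1, L_r(p-r,d+1)+P1, min_k L_r(p-r,k)+P2) - min_k L_r(p-r,k). A minimal NumPy sketch of the aggregation along a single left-to-right path; the full algorithm sums 8 or 16 such paths and takes the per-pixel disparity argmin. Penalties and names are illustrative:

        import numpy as np

        def aggregate_left_to_right(C, P1=10, P2=120):
            """Aggregate a cost volume C of shape (H, W, D) along one scanline path."""
            H, W, D = C.shape
            L = np.empty_like(C, dtype=np.float64)
            L[:, 0, :] = C[:, 0, :]                      # path costs start at raw costs
            for x in range(1, W):
                prev = L[:, x - 1, :]                    # (H, D) costs at the previous pixel
                prev_min = prev.min(axis=1, keepdims=True)
                same = prev                              # same disparity, no penalty
                up = np.roll(prev, 1, axis=1) + P1       # transition d-1 -> d
                up[:, 0] = np.inf                        # no disparity below the first
                down = np.roll(prev, -1, axis=1) + P1    # transition d+1 -> d
                down[:, -1] = np.inf
                jump = prev_min + P2                     # any larger disparity change
                best = np.minimum(np.minimum(same, up), np.minimum(down, jump))
                # subtracting prev_min keeps L bounded without changing the argmin
                L[:, x, :] = C[:, x, :] + best - prev_min
            return L

    The memory-efficient hardware variants differ mainly in how many of these path buffers are kept resident at once.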

  13. James Webb Space Telescope Integrated Science Instrument Module Thermal Vacuum Thermal Balance Test Campaign at NASA's Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Glazer, Stuart; Comber, Brian

    2016-01-01

    The James Webb Space Telescope is a large infrared telescope with a 6.5-meter primary mirror, designed as a successor to the Hubble Space Telescope when launched in 2018. Three of the four science instruments contained within the Integrated Science Instrument Module (ISIM) are passively cooled to their operational temperature range of 36K to 40K with radiators, and the fourth instrument is actively cooled to its operational temperature of approximately 6K. Thermal-vacuum testing of the flight science instruments at the ISIM element level has taken place in three separate, highly challenging, and extremely complex thermal tests within a gaseous helium-cooled shroud inside Goddard Space Flight Center's Space Environment Simulator. Special data acquisition software was developed for these tests to monitor over 1700 flight and test sensor measurements, track over 50 gradients, component rates, and temperature limits in real time against defined constraints and limitations, and guide the complex transition from ambient to final cryogenic temperatures and back. This extremely flexible system has proven highly successful in safeguarding the nearly $2B science payload during the 3.5-month-long thermal tests. Heat flow measurement instrumentation, or Q-meters, were also specially developed for these tests. These devices provide thermal boundaries to the flight hardware while measuring instrument heat loads up to 600 mW with an estimated uncertainty of 2 mW in test, enabling accurate thermal model correlation, hardware design validation, and workmanship verification. The high-accuracy heat load measurements provided the first evidence of a potentially serious hardware design issue that was subsequently corrected. This paper provides an overview of the ISIM-level thermal-vacuum tests and thermal objectives; explains the thermal test configuration and thermal balances; describes the special measurement instrumentation and the monitoring and control software; presents key thermal test results; and lists problems encountered during testing and lessons learned.

  14. Development of a frequency-modulated ultrasonic sensor inspired by bat echolocation

    NASA Astrophysics Data System (ADS)

    Kepa, Krzysztof; Abaid, Nicole

    2015-03-01

    Bats have evolved to sense using ultrasonic signals with a variety of different frequency signatures which interact with their environment. Among these signals, those with time-varying frequencies may enable the animals to gather more complex information for obstacle avoidance and target tracking. Taking inspiration from this system, we present the development of a sonar sensor capable of generating frequency-modulated ultrasonic signals. The device is based on a miniature mobile computer, with on-board data capture and processing capabilities, which is designed for eventual autonomous operation in a robotic swarm. The hardware and software components of the sensor are detailed, as well as their integration. Preliminary results for target detection using both frequency-modulated and constant-frequency signals are discussed.
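
    Frequency-modulated pulses of this kind are typically linear chirps, whose phase is the time integral of the instantaneous frequency. A minimal sketch of synthesizing one such pulse next to a constant-frequency pulse, with illustrative parameters rather than the sensor's actual settings:

        import numpy as np

        FS = 400_000                      # sample rate, Hz (must exceed 2x max frequency)
        T = 0.005                         # pulse length, s
        t = np.arange(int(FS * T)) / FS

        f0, f1 = 100_000.0, 40_000.0      # downward sweep, 100 kHz -> 40 kHz
        k = (f1 - f0) / T                 # sweep rate, Hz/s
        chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))  # phase = integral of f(t)

        cf_pulse = np.sin(2 * np.pi * 70_000.0 * t)              # constant-frequency pulse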

  15. Aviation Communications Emulation Testbed

    NASA Technical Reports Server (NTRS)

    Sheehe, Charles; Mulkerin, Tom

    2004-01-01

    Aviation-related applications that rely upon datalink for information exchange are increasingly being developed and deployed. The increase in the quantity of applications and associated data communications will expose problems and issues to resolve. NASA Glenn Research Center has prepared to study the communications issues that will arise as datalink applications are employed within the National Airspace System (NAS) by developing an aviation communications emulation testbed. The testbed is evolving and currently provides the hardware and software needed to study the communications impact of Air Traffic Control (ATC) and surveillance applications in a densely populated environment. The communications load associated with up to 160 aircraft transmitting and receiving ATC and surveillance data can be generated in real time in a sequence similar to what would occur in the NAS.

  16. U.S. Spacesuit Legacy: Maintaining it for the Future

    NASA Technical Reports Server (NTRS)

    Chullen, Cinda; McMann, Joe; Thomas, Ken; Kosmo, Joe; Lewis, Cathleen; Wright, Rebecca; Bitterly, Rose; Olivia, Vladenka Rose

    2013-01-01

    The history of U.S. spacesuit development and its use are rich with information on lessons learned, and constitutes a valuable legacy to those designing spacesuits for the future, as well as to educators, students, and the general public. The genesis of lessons learned is best understood by studying the evolution of past spacesuit programs - how the challenges and pressures of the times influenced the direction of the various spacesuit programs. This paper shows how the legacy of various spacesuit-related programs evolved in response to these forces. Important aspects of how this U.S. spacesuit legacy is being preserved today is described, including the archiving of spacesuit hardware, important documents, videos, oral history, and the rapidly expanding U.S. Spacesuit Knowledge Capture program.

  17. Living technology: exploiting life's principles in technology.

    PubMed

    Bedau, Mark A; McCaskill, John S; Packard, Norman H; Rasmussen, Steen

    2010-01-01

    The concept of living technology-that is, technology that is based on the powerful core features of life-is explained and illustrated with examples from artificial life software, reconfigurable and evolvable hardware, autonomously self-reproducing robots, chemical protocells, and hybrid electronic-chemical systems. We define primary (secondary) living technology according as key material components and core systems are not (are) derived from living organisms. Primary living technology is currently emerging, distinctive, and potentially powerful, motivating this review. We trace living technology's connections with artificial life (soft, hard, and wet), synthetic biology (top-down and bottom-up), and the convergence of nano-, bio-, information, and cognitive (NBIC) technologies. We end with a brief look at the social and ethical questions generated by the prospect of living technology.

  18. Lunar Ultraviolet Telescope Experiment (LUTE), phase A

    NASA Technical Reports Server (NTRS)

    Mcbrayer, Robert O.

    1994-01-01

    The Lunar Ultraviolet Telescope Experiment (LUTE) is a 1-meter telescope for imaging from the lunar surface the ultraviolet spectrum between 1,000 and 3,500 angstroms. There have been several endorsements of the scientific value of a LUTE. In addition to the scientific value of LUTE, its educational value and the information it can provide on the design of operating hardware for long-term exposure in the lunar environment are important considerations. This report provides the results of the LUTE phase A activity begun at the George C. Marshall Space Flight Center in early 1992. It describes the objective of LUTE (science, engineering, and education), a feasible reference design concept that has evolved, and the subsystem trades that were accomplished during the phase A.

  19. Space shuttle electrical power generation and reactant supply system

    NASA Technical Reports Server (NTRS)

    Simon, W. E.

    1985-01-01

    The design philosophy and development experience of fuel cell power generation and cryogenic reactant supply systems are reviewed, beginning with the state of technology at the conclusion of the Apollo Program. Technology advancements span a period of 10 years from initial definition phase to the most recent space transportation system (STS) flights. The development program encompassed prototype, verification, and qualification hardware, as well as post-STS-1 design improvements. Focus is on the problems encountered, the scientific and engineering approaches employed to meet the technological challenges, and the results obtained. Major technology barriers are discussed, and the evolving technology development paths are traced from their conceptual beginnings to the fully man-rated systems which are now an integral part of the shuttle vehicle.

  20. Mobile Phones Democratize and Cultivate Next-Generation Imaging, Diagnostics and Measurement Tools

    PubMed Central

    Ozcan, Aydogan

    2014-01-01

    In this article, I discuss some of the emerging applications and the future opportunities and challenges created by the use of mobile phones and their embedded components for the development of next-generation imaging, sensing, diagnostics and measurement tools. The massive volume of mobile phone users, which has now reached ~7 billion, drives the rapid improvements of the hardware, software and high-end imaging and sensing technologies embedded in our phones, transforming the mobile phone into a cost-effective and yet extremely powerful platform to run e.g., biomedical tests and perform scientific measurements that would normally require advanced laboratory instruments. This rapidly evolving and continuing trend will help us transform how medicine, engineering and sciences are practiced and taught globally. PMID:24647550

  1. Automated microbial metabolism laboratory. [Viking 75 entry vehicle and Mars

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The labeled release concept was advanced to accommodate a post-Viking mission designed to extend the search, to confirm the presence of, and to characterize any Martian life found, and to obtain preliminary information on control of the life detected. The advanced labeled release concept utilizes four test chambers, each of which contains either an active or heat-sterilized sample of the Martian soil. A variety of C-14 labeled organic substrates can be added sequentially to each soil sample and the resulting evolved radioactive gas monitored. The concept can also test effects of various inhibitors and environmental parameters on the experimental response. The current Viking '75 labeled release hardware is readily adaptable to the advanced labeled release concept.

  2. A real-time simulator of a turbofan engine

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Delaat, John C.; Merrill, Walter C.

    1989-01-01

    A real-time digital simulator of a Pratt and Whitney F100 engine has been developed for real-time code verification and for actuator diagnosis during full-scale engine testing. This self-contained unit can operate in an open-loop stand-alone mode or as part of a closed-loop control system. It can also be used for control system design and development. Tests conducted in conjunction with the NASA Advanced Detection, Isolation, and Accommodation program show that the simulator is a valuable tool for real-time code verification and as a real-time actuator simulator for actuator fault diagnosis. Although currently a small perturbation model, advances in microprocessor hardware should allow the simulator to evolve into a real-time, full-envelope, full engine simulation.
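
    A small-perturbation engine model of this kind is essentially a linear state-space system stepped at the control-loop rate. A minimal sketch, with illustrative two-state matrices rather than F100 data:

        import numpy as np

        A = np.array([[0.98, 0.01],     # state transition per 20 ms frame (illustrative)
                      [0.00, 0.95]])
        B = np.array([[0.02],           # response to a fuel-flow perturbation
                      [0.05]])
        C = np.array([[1.0, 0.0]])      # measured output: fan-speed perturbation

        x = np.zeros((2, 1))
        for frame in range(50):
            u = np.array([[1.0]])       # step in fuel flow about the trim point
            x = A @ x + B @ u
            y = C @ x                   # served to the control computer each frame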

  3. U.S. Spacesuit Legacy: Maintaining it for the Future

    NASA Technical Reports Server (NTRS)

    Chullen, Cinda; McMann, Joe; Thomas, Ken; Kosmo, Joe; Lewis, Cathleen; Wright, Rebecca; Bitterly, Rose; Oliva, Vladenka

    2012-01-01

    The history of US spacesuit development and use is rich with information on lessons learned, and constitutes a valuable legacy to those designing spacesuits for the future, as well as educators, students and the general public. The genesis of lessons learned is best understood by studying the evolution of past spacesuit programs - how the challenges and pressures of the times influenced the direction of the various spacesuit programs. This paper will show how the legacy of various programs evolved in response to these forces. Important aspects of how this rich U.S. spacesuit legacy is being preserved today will be described, including the archiving of spacesuit hardware, important documents, videos, oral history, and the rapidly expanding US Spacesuit Knowledge Capture program.

  4. Interplanetary laser ranging - an emerging technology for planetary science missions

    NASA Astrophysics Data System (ADS)

    Dirkx, D.; Vermeersen, L. L. A.

    2012-09-01

    Interplanetary laser ranging (ILR) is an emerging technology for very high accuracy distance determination between Earth-based stations and spacecraft or landers at interplanetary distances. It has evolved from laser ranging to Earth-orbiting satellites, modified with active laser transceiver systems at both ends of the link instead of the passive space-based retroreflectors. It has been estimated that this technology can be used for mm- to cm-level accuracy range determination at interplanetary distances [2, 7]. Work is being performed in the ESPaCE project [6] to evaluate in detail the potential and limitations of this technology by means of bottom-up laser link simulation, allowing for a reliable performance estimate from mission architecture and hardware characteristics.

  5. Proof of Concept of Impact Detection in Composites Using Fiber Bragg Grating Arrays

    PubMed Central

    Gomez, Javier; Jorge, Iagoba; Durana, Gaizka; Arrue, Jon; Zubia, Joseba; Aranguren, Gerardo; Montero, Ander; López, Ion

    2013-01-01

    Impact detection in aeronautical structures allows predicting their future reliability and performance. An impact can produce microscopic fissures that could evolve into fractures or even the total collapse of the structure, so it is important to know the location and severity of each impact. For this purpose, optical fibers with Bragg gratings are used to analyze each impact and the vibrations generated by them. In this paper it is proven that optical fibers with Bragg gratings can be used to detect impacts, and also that a high-frequency interrogator is necessary to collect valuable information about the impacts. The use of two interrogators constitutes the main novelty of this paper. PMID:24021969
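
    The sensing principle behind the gratings is compact: the reflected Bragg wavelength shifts nearly linearly with axial strain, so impact-induced vibrations appear as fast wavelength oscillations that the high-frequency interrogator must resolve. A minimal sketch using typical textbook constants, not the paper's calibration:

        LAMBDA_B = 1550.0e-9   # nominal Bragg wavelength, m (typical telecom band)
        P_E = 0.22             # effective photo-elastic coefficient of silica

        def wavelength_shift(strain):
            """Delta-lambda of the reflected peak for a given axial strain."""
            return LAMBDA_B * (1.0 - P_E) * strain

        # a 100 microstrain transient shifts the peak by roughly 0.12 nm
        print(wavelength_shift(100e-6) * 1e9, "nm")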

  6. Challenges for Transitioning Science Knowledge to an Operational Environment for Space Weather

    NASA Technical Reports Server (NTRS)

    Spann, James

    2012-01-01

    Effectively transitioning science knowledge to an operational environment relevant to space weather is critical to meeting civilian and defense needs, especially considering how technologies are advancing and present evolving susceptibilities to space weather impacts. The effort to transition scientific knowledge to a useful application is neither a research task nor an operational activity, but an effort that bridges the two. Successful transitioning must be an intentional effort that has a clear goal for all parties and a measurable outcome and deliverable. This talk will present proven methodologies that have been demonstrated to be effective for terrestrial weather and disaster relief efforts, and how those methodologies can be applied to space weather transition efforts.

  7. Best of Breed

    NASA Technical Reports Server (NTRS)

    Lohn, Jason

    2004-01-01

    No team of engineers, no matter how much time they took or how many bottles of cabernet they consumed, would dream up an antenna that looked like a deer antler on steroids. Yet that's what a group at NASA Ames Research Center came up with - thanks to a little help from Darwin. NASA's Space Technology 5 nanosatellites, which are scheduled to start measuring Earth's magnetosphere in late 2004, require an antenna that can receive a wide range of frequencies regardless of the spacecraft's orientation. Rather than leave such exacting requirements in the hands of a human, the engineers decided to breed a design using genetic algorithms and 32 Linux PCs. The computers generated small antenna-constructing programs (the genotypes) and executed them to produce designs (the phenotypes). Then the designs were evaluated using an antenna simulator. The team settled on the form pictured here. You won't find this kind of antenna in any textbook, design guide, or research paper. But its innovative structure meets a challenging set of specifications. If successfully deployed, it will be the first evolved antenna to make it out of the lab and the first piece of evolved hardware ever to fly in space.
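
    The breeding loop described is a conventional genetic algorithm: execute genotypes to build antenna phenotypes, score them in a simulator, keep the best, mutate. A minimal sketch with a placeholder fitness function standing in for the electromagnetic simulation; gene encoding and all parameters are invented for illustration:

        import random

        def random_genotype(length=20):
            # each gene bends the current wire segment and extends it a little
            return [(random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(0.5, 2.0))
                    for _ in range(length)]

        def mutate(genotype, rate=0.1):
            return [(g if random.random() > rate else
                     (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(0.5, 2.0)))
                    for g in genotype]

        def evaluate_with_simulator(genotype):
            # placeholder fitness; a real run would build the wire phenotype and
            # query an electromagnetic simulator for gain/VSWR across frequencies
            return -sum(abs(a) + abs(b) for a, b, _ in genotype)

        population = [random_genotype() for _ in range(32)]
        for generation in range(50):
            population.sort(key=evaluate_with_simulator, reverse=True)
            parents = population[:8]                       # truncation selection
            population = parents + [mutate(random.choice(parents)) for _ in range(24)]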

  8. Man/computer communication in a space environment

    NASA Technical Reports Server (NTRS)

    Hodges, B. C.; Montoya, G.

    1973-01-01

    The present work reports on a study of the technology required to advance the state of the art in man/machine communications. The study involved the development and demonstration of both hardware and software to effectively implement man/computer interactive channels of communication. While tactile and visual man/computer communications equipment are standard methods of interaction with machines, man's speech is a natural medium for inquiry and control. As part of this study, a word recognition unit was developed capable of recognizing a minimum of one hundred different words or sentences in any one of the currently used conversational languages. The study has proven that efficiency in communication between man and computer can be achieved when the vocabulary to be used is structured in a manner compatible with the rigid communication requirements of the machine while at the same time responsive to the informational needs of the man.

  9. STARPAHC Interim Evaluation Report, May 1975 - April 1976

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The primary goals of the STARPAHC Program are to provide data for developing health care for future manned spacecraft, and to establish the feasibility of the STARPAHC concept for improving the delivery of health care to remote areas on earth. Accordingly, the hardware and medical evaluations initiated during the first 6 months of system operation were continued and expanded during the second 6-month period. The evaluations are based on what has proven to be a relatively stabilized 6-month period wherein system failures which occurred during the initial shakedown period in the first 6 months have been minimized. Early trends and performance data reported in the first semi-annual report were reexamined to either verify, modify or change earlier conclusions. The highlights are given of the total year of operation with emphasis on comparisons between the first and second semi-annual reporting period. In addition, an early analysis of costs is summarized.

  10. An arduino based control system for a brackish water desalination plant

    NASA Astrophysics Data System (ADS)

    Caraballo, Ginna

    Water scarcity for agriculture is one of the most important challenges to improving food security worldwide. In this thesis we study the potential to develop a low-cost controller for a small-scale brackish desalination plant that consists of proven water treatment technologies - reverse osmosis, cation exchange, and nanofiltration - to treat groundwater into two final products: drinking water and irrigation water. The plant is powered by a combination of wind and solar power systems. The low-cost controller uses an Arduino Mega and an Arduino Due, which are built around the ATmega2560 and Atmel SAM3X8E ARM Cortex-M3 microcontrollers, respectively. These are widely used systems characterized by good performance and low cost. However, Arduino also requires drivers and interfaces to allow the control and monitoring of sensors and actuators. The thesis explains the process, as well as the hardware and software implemented.

  11. Delta capability for launch of communications satellites

    NASA Technical Reports Server (NTRS)

    Grimes, D. W.; Russell, W. A., Jr.; Kraft, J. D.

    1982-01-01

    The evolution of capabilities and the current performance levels of the Delta launch vehicle are outlined. The first payload was the Echo I passive communications satellite, weighing 179 lb, placed in orbit in 1960. Emphasis since then has been on using off-the-shelf hardware where feasible. The latest version is the 3924 first stage, 3920 second stage, and PAM-D apogee kick motor third stage. The Delta is presently equipped to place 2800 lb in GEO, as was proven with the 2717-lb Anik-D1 satellite. The GEO payload placement performance matches the Shuttle's, and work is therefore under way to enhance the Delta performance to handle more massive payloads. Installation of the Castor-IV solid motor separation system, thereby saving mass by utilizing compressed nitrogen rather than mechanical thrusters to remove the strap-on boosters, is indicated, together with use of a higher-performance propellant and a wider nose fairing.

  12. EVA Design, Verification, and On-Orbit Operations Support Using Worksite Analysis

    NASA Technical Reports Server (NTRS)

    Hagale, Thomas J.; Price, Larry R.

    2000-01-01

    The International Space Station (ISS) design is a very large and complex orbiting structure with thousands of Extravehicular Activity (EVA) worksites. These worksites are used to assemble and maintain the ISS. The challenge facing EVA designers was how to design, verify, and operationally support such a large number of worksites within cost and schedule. This has been solved through the practical use of computer aided design (CAD) graphical techniques that have been developed and used with a high degree of success over the past decade. The EVA design process allows analysts to work concurrently with hardware designers so that EVA equipment can be incorporated and structures configured to allow for EVA access and manipulation. Compliance with EVA requirements is strictly enforced during the design process. These techniques and procedures, coupled with neutral buoyancy underwater testing, have proven most valuable in the development, verification, and on-orbit support of planned or contingency EVA worksites.

  13. Accessible high-throughput virtual screening molecular docking software for students and educators.

    PubMed

    Jacob, Reed B; Andersen, Tim; McDougal, Owen M

    2012-05-01

    We survey low-cost high-throughput virtual screening (HTVS) computer programs for instructors who wish to demonstrate molecular docking in their courses. Since HTVS programs are a useful adjunct to the time-consuming and expensive wet bench experiments necessary to discover new drug therapies, the topic of molecular docking is core to the instruction of biochemistry and molecular biology. The availability of HTVS programs, coupled with decreasing costs and advances in computer hardware, has made computational approaches to drug discovery possible at institutional and non-profit budgets. This paper focuses on HTVS programs with graphical user interfaces (GUIs) that use either DOCK or AutoDock as the docking engine - DockoMatic, PyRx, DockingServer, and MOLA - since their utility has been proven by the research community, they are free or affordable, and the programs operate on a range of computer platforms.

  14. Color sensor and neural processor on one chip

    NASA Astrophysics Data System (ADS)

    Fiesler, Emile; Campbell, Shannon R.; Kempem, Lother; Duong, Tuan A.

    1998-10-01

    A low-cost, compact, and robust color sensor that can operate in real time under various environmental conditions can benefit many applications, including quality control, chemical sensing, food production, medical diagnostics, energy conservation, monitoring of hazardous waste, and recycling. Unfortunately, existing color sensors are either bulky and expensive or do not provide the required speed and accuracy. In this publication we describe the design of an accurate real-time color classification sensor, together with preprocessing and a subsequent neural network processor integrated on a single complementary metal oxide semiconductor (CMOS) integrated circuit. This one-chip sensor and information processor will be low in cost, robust, and mass-producible using standard commercial CMOS processes. The performance of the chip and the feasibility of its manufacture are proven through computer simulations based on CMOS hardware parameters. Comparisons with competing methodologies show a significantly higher performance for our device.

  15. Experiences in Delta mission planning

    NASA Technical Reports Server (NTRS)

    Kork, J.

    1981-01-01

    The Delta launch vehicle has experienced 153 successful launches since 1960, and 40 more are scheduled. Relying on up-to-date technology and proven flight hardware, the Delta vehicle has been used for low to high circular and geosynchronous transfer orbits, high elliptic probes, and lunar and planetary missions. A history of Delta launches and configuration modifications is presented, noting a 92-95% success rate and its cost-effective role in reimbursable missions. Elements of mission planning such as feasibility studies (1-3 yrs), spacecraft restraints manuals, reference trajectories, preliminary mission analysis, detailed test objectives, range/safety studies, guided nominal trajectory, and mission-specific studies are discussed. Trajectory shaping determines vehicle and spacecraft restraints, optimizes the trajectory, and maximizes the payload capabilities. Improvements in the Delta vehicle have boosted payloads from 100 to 2890 lbs, improving the price-per-pound ratio, as costs have risen only by a factor of three. Current launch schedules extend well into 1985.

  16. Wetting properties of Au/Sn solders for microelectronics

    NASA Astrophysics Data System (ADS)

    Peterson, K. A.; Williams, C. B.

    Hermetic sealing of microelectronic packages with Au/Sn solder is critically dependent upon good wetting. In studying specific problems in hermetic sealing, a solderability test based on ASTM standard F-357-78 has proven useful. The test has helped isolate and quantify the effects of contamination due to epoxy die attach and related handling, thermal preconditioning of packages, gold plating thickness, time and temperature during sealing, and solder alloy composition as they affect wetting. Some differences in hardware have been documented between manufacturing lots, but the overriding factors have been contamination which occurs during packaging process flows and thermal preconditioning during processing. The paper includes a review of metallurgical aspects of soldering to a non-inert surface and an examination of microstructural differences in seal joints. The results also quantify the conventional wisdom that alloys which are on the tin-rich side of the eutectic composition offer superior wetting properties.

  17. Temperature-Time Issues in Bioburden Control for Planetary Protection

    NASA Astrophysics Data System (ADS)

    Clark, B.

    Heat energy, administered in the form of an elevated-temperature heat soak over a specific interval of time, is a well-known method of inactivating organisms. Sterilization protocols, from commercial pasteurization to laboratory autoclaving, specify both the temperature and the time, as well as the water activity, for treatments to achieve either acceptable reduction of bioburden or complete sterilization. In practical applications of planetary protection, whether to reduce spore load in forward or round-trip contamination, or to exterminate putative organisms in returned samples from planetary bodies suspected of harboring life, expensive or potentially damaging treatments of hardware (or samples) could be avoided if reciprocal relationships between time duration and soak temperature could be established. Conservative rules can be developed from consideration of empirical test data, derived relationships, current standards, and various theoretical or proven mechanisms of thermal damage to biological systems.
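
    The reciprocal time-temperature relationship sought here is commonly modeled with the classical log-linear (D-value/z-value) kinetics used in sterilization microbiology. A minimal sketch under that assumption follows; the D and z numbers are purely illustrative and are not planetary protection standards.

    ```python
    def d_value(d_ref_min: float, z_c: float, t_ref_c: float, t_soak_c: float) -> float:
        """Decimal-reduction time at a new soak temperature, assuming the
        classical log-linear model: D(T) = D_ref * 10**((T_ref - T) / z)."""
        return d_ref_min * 10 ** ((t_ref_c - t_soak_c) / z_c)

    def log_reductions(d_ref_min, z_c, t_ref_c, t_soak_c, soak_min):
        """Log10 reductions in viable bioburden delivered by a heat soak."""
        return soak_min / d_value(d_ref_min, z_c, t_ref_c, t_soak_c)

    # Illustrative spore population with D(110 C) = 60 min and z = 21 C.
    for temp in (110, 115, 120, 125):
        n = log_reductions(60.0, 21.0, 110.0, temp, soak_min=300)
        print(f"{temp} C soak for 300 min -> {n:.1f} log reductions")
    ```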

  18. Multimission helicopter information display technology

    NASA Astrophysics Data System (ADS)

    Terry, William S.

    1995-06-01

    A new operator display subsystem is being incorporated as part of the next-generation United States Navy (USN) helicopter avionics system to be integrated into the Multi-Mission Helicopter (MMH), which will replace both the SH-60B and the SH-60F in 2001. This subsystem exploits state-of-the-art technology for the display hardware, the display driver hardware, information presentation methodologies, and software architecture. The baseline technologies have evolved during the development period, and the solution has been modified to include current elements, including high-resolution AMLCD color displays that are sunlight readable, highly reliable, and significantly lighter than CRT technology, as well as Reduced Instruction Set Computer (RISC) based high-performance display generators that have only recently become feasible to implement in a military aircraft. This paper describes the overall subsystem architecture, gives some detail on the individual elements along with supporting rationale, and explains the manner in which the display subsystem provides the necessary tools to significantly enhance the performance of the weapon system through the vital operator-system interface. Also addressed are a summary of the evolution of the design leading to the current approach to MMH operator displays and display processing, as well as the growth path that the MMH display subsystem will most likely follow as additional technology evolution occurs.

  19. Circuit Design Optimization Using Genetic Algorithm with Parameterized Uniform Crossover

    NASA Astrophysics Data System (ADS)

    Bao, Zhiguo; Watanabe, Takahiro

    Evolvable hardware (EHW) is a new research field concerning the use of Evolutionary Algorithms (EAs) to construct electronic systems. EHW refers in a narrow sense to the use of evolutionary mechanisms as the algorithmic drivers for system design, and in a general sense to the capability of a hardware system to develop and improve itself. The Genetic Algorithm (GA) is a typical EA. We propose optimal circuit design using a GA with parameterized uniform crossover (GApuc) and a fitness function composed of circuit complexity, power, and signal delay. Parameterized uniform crossover is much more likely to distribute its disruptive trials in an unbiased manner over larger portions of the search space; it therefore has more exploratory power than one- and two-point crossover, giving more chances of finding better solutions. Its effectiveness is shown by experiments. From the results, we can see that the best elite fitness, the average fitness of the correct circuits, and the number of correct circuits of GApuc are better than those of a GA with one-point or two-point crossover. In the best case, optimal circuits generated by GApuc are 10.18% and 6.08% better in evaluation value than those generated by a GA with one-point and two-point crossover, respectively.
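
    The operator itself is easy to state. The sketch below shows parameterized uniform crossover on generic list genomes; the bit-string encoding is a stand-in rather than the paper's circuit representation, and the fitness weighting over complexity, power, and delay is omitted.

    ```python
    import random

    def parameterized_uniform_crossover(parent_a, parent_b, p_swap=0.5):
        """Swap each gene independently with probability p_swap. The parameter
        controls disruptiveness: 0.5 is classical uniform crossover, while
        smaller values behave more like multi-point crossover."""
        child_a, child_b = list(parent_a), list(parent_b)
        for i in range(len(child_a)):
            if random.random() < p_swap:
                child_a[i], child_b[i] = child_b[i], child_a[i]
        return child_a, child_b

    # Example on two bit-string genomes (the encoding here is hypothetical).
    a = [1, 0, 1, 1, 0, 0, 1, 0]
    b = [0, 1, 0, 0, 1, 1, 0, 1]
    print(parameterized_uniform_crossover(a, b, p_swap=0.3))
    ```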

  20. Cloud computing approaches to accelerate drug discovery value chain.

    PubMed

    Garg, Vibhav; Arora, Suchir; Gupta, Chitra

    2011-12-01

    Continued advancements in technology have helped high-throughput screening (HTS) evolve from a linear to a parallel approach by performing system-level screening. Advanced experimental methods used for HTS at various steps of drug discovery (i.e., target identification, target validation, lead identification and lead validation) can generate data of the order of terabytes. As a consequence, there is a pressing need to store, manage, mine, and analyze this data to identify informational tags. This need in turn challenges computer scientists to offer matching hardware and software infrastructure while managing the varying degrees of desired computational power. Therefore, the potential of "On-Demand Hardware" and "Software as a Service (SaaS)" delivery mechanisms cannot be denied. This on-demand computing, largely referred to as cloud computing, is now transforming drug discovery research. Integration of cloud computing with parallel computing is also expanding its footprint in the life sciences community. Speed, efficiency, and cost effectiveness have made cloud computing a 'good to have' tool for researchers, providing them with significant flexibility and allowing them to focus on the 'what' of science and not the 'how'. Once it reaches maturity, a Discovery-Cloud would be best placed to manage drug discovery and clinical development data generated using advanced HTS techniques, hence supporting the vision of personalized medicine.

  1. N-body simulations of star clusters

    NASA Astrophysics Data System (ADS)

    Engle, Kimberly Anne

    1999-10-01

    We investigate the structure and evolution of underfilling (i.e. non-Roche-lobe-filling) King model globular star clusters using N-body simulations. We model clusters with various underfilling factors and mass distributions to determine their evolutionary tracks and lifetimes. These models include a self-consistent galactic tidal field, mass loss due to stellar evolution, ejection, and evaporation, and binary evolution. We find that a star cluster that initially does not fill its Roche lobe can live many times longer than one that does initially fill its Roche lobe. After a few relaxation times, the cluster expands to fill its Roche lobe. We also find that the choice of initial mass function significantly affects the lifetime of the cluster. These simulations were performed on the GRAPE-4 (GRAvity PipE) special-purpose hardware with the stellar dynamics package ``Starlab.'' The GRAPE-4 system is a massively-parallel computer designed to calculate the force (and its first time derivative) due to N particles. Starlab's integrator ``kira'' employs a 4th-order Hermite scheme with hierarchical (block) time steps to evolve the stellar system. We discuss, in some detail, the design of the GRAPE-4 system and the manner in which the Hermite integration scheme with block time steps is implemented in the hardware.
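
    The 4th-order Hermite predictor-corrector used by ``kira'' is well documented (Makino & Aarseth 1992). Below is a minimal shared-time-step numpy sketch: the pairwise acceleration/jerk sum is the part GRAPE-4 pipelines in hardware, and block time steps are omitted for brevity.

    ```python
    import numpy as np

    def acc_jerk(pos, vel, mass, eps2=1e-4):
        """Direct-summation acceleration and jerk; this O(N^2) pairwise sum
        is the computation GRAPE-4 evaluates in hardware."""
        acc, jerk = np.zeros_like(pos), np.zeros_like(vel)
        for i in range(len(mass)):
            dr = pos - pos[i]                  # separations r_j - r_i, shape (n, 3)
            dv = vel - vel[i]
            r2 = (dr * dr).sum(1) + eps2       # softened squared distances
            inv_r3 = r2 ** -1.5
            inv_r3[i] = 0.0                    # exclude the self-interaction
            rv = (dr * dv).sum(1) / r2
            acc[i] = (mass[:, None] * inv_r3[:, None] * dr).sum(0)
            jerk[i] = (mass[:, None] * inv_r3[:, None]
                       * (dv - 3.0 * rv[:, None] * dr)).sum(0)
        return acc, jerk

    def hermite_step(pos, vel, mass, dt):
        """One predictor-corrector step of the 4th-order Hermite scheme."""
        a0, j0 = acc_jerk(pos, vel, mass)
        # Predict with a Taylor expansion through the jerk term.
        pos_p = pos + vel * dt + a0 * dt**2 / 2 + j0 * dt**3 / 6
        vel_p = vel + a0 * dt + j0 * dt**2 / 2
        a1, j1 = acc_jerk(pos_p, vel_p, mass)
        # Correct using the standard Hermite corrector.
        vel_c = vel + (a0 + a1) * dt / 2 + (j0 - j1) * dt**2 / 12
        pos_c = pos + (vel + vel_c) * dt / 2 + (a0 - a1) * dt**2 / 12
        return pos_c, vel_c
    ```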

  2. Crossing the chasm: how to develop weather and climate models for next generation computers?

    NASA Astrophysics Data System (ADS)

    Lawrence, Bryan N.; Rezny, Michael; Budich, Reinhard; Bauer, Peter; Behrens, Jörg; Carter, Mick; Deconinck, Willem; Ford, Rupert; Maynard, Christopher; Mullerworth, Steven; Osuna, Carlos; Porter, Andrew; Serradell, Kim; Valcke, Sophie; Wedi, Nils; Wilson, Simon

    2018-05-01

    Weather and climate models are complex pieces of software which include many individual components, each of which is evolving under pressure to exploit advances in computing to enhance some combination of a range of possible improvements (higher spatio-temporal resolution, increased fidelity in terms of resolved processes, more quantification of uncertainty, etc.). However, after many years of a relatively stable computing environment with little choice in processing architecture or programming paradigm (basically X86 processors using MPI for parallelism), the existing menu of processor choices includes significant diversity, and more is on the horizon. This computational diversity, coupled with ever increasing software complexity, leads to the very real possibility that weather and climate modelling will arrive at a chasm which will separate scientific aspiration from our ability to develop and/or rapidly adapt codes to the available hardware. In this paper we review the hardware and software trends which are leading us towards this chasm, before describing current progress in addressing some of the tools which we may be able to use to bridge the chasm. This brief introduction to current tools and plans is followed by a discussion outlining the scientific requirements for quality model codes which have satisfactory performance and portability, while simultaneously supporting productive scientific evolution. We assert that the existing method of incremental model improvements employing small steps which adjust to the changing hardware environment is likely to be inadequate for crossing the chasm between aspiration and hardware at a satisfactory pace, in part because institutions cannot have all the relevant expertise in house. Instead, we outline a methodology based on large community efforts in engineering and standardisation, which will depend on identifying a taxonomy of key activities - perhaps based on existing efforts to develop domain-specific languages, identify common patterns in weather and climate codes, and develop community approaches to commonly needed tools and libraries - and then collaboratively building up those key components. Such a collaborative approach will depend on institutions, projects, and individuals adopting new interdependencies and ways of working.

  3. The J-2X Upper Stage Engine: From Heritage to Hardware

    NASA Technical Reports Server (NTRS)

    Byrd, Thomas

    2008-01-01

    NASA's Global Exploration Strategy requires safe, reliable, robust, efficient transportation to support sustainable operations from Earth to orbit and into the far reaches of the solar system. NASA selected the Ares I crew launch vehicle and the Ares V cargo launch vehicle to provide that transportation. Guiding principles in creating the architecture represented by the Ares vehicles were the maximum use of heritage hardware and legacy knowledge, particularly Space Shuttle assets, and commonality between the Ares vehicles where possible, to streamline the hardware development approach and reduce programmatic, technical, and budget risks. The J-2X exemplifies those goals. It was selected by the Exploration Systems Architecture Study (ESAS) as the upper stage propulsion for the Ares I Upper Stage and the Ares V Earth Departure Stage (EDS). The J-2X is an evolved version of the historic J-2 engine that successfully powered the second stage of the Saturn IB launch vehicle and the second and third stages of the Saturn V launch vehicle. The Constellation architecture, however, requires performance greater than its predecessor's. The new architecture calls for larger payloads delivered to the Moon and demands greater loss-of-mission reliability, along with numerous other requirements associated with human rating that were not applied to the original J-2. As a result, the J-2X must operate at much higher temperatures, pressures, and flow rates than the heritage J-2, making it one of the highest-performing gas generator cycle engines ever built, approaching the efficiency of more complex staged combustion engines. Development is focused on early risk mitigation, component and subassembly test, and engine system test. The development plans include testing engine components, including the subscale injector, main igniter, powerpack assembly (turbopumps, gas generator, and associated ducting and structural mounts), full-scale gas generator, valves, and control software with hardware-in-the-loop. Testing expanded in 2007, accompanied by the refinement of the design through several key milestones. This paper discusses those 2007 tests and milestones, as well as updates on key developments in 2008.

  4. A Framework for the Development of Scalable Heterogeneous Robot Teams with Dynamically Distributed Processing

    NASA Astrophysics Data System (ADS)

    Martin, Adrian

    As the applications of mobile robotics evolve, it has become increasingly less practical for researchers to design custom hardware and control systems for each problem. This research presents a new approach to control system design that looks beyond end-of-lifecycle performance and considers control system structure, flexibility, and extensibility. Toward these ends the Control ad libitum philosophy is proposed, stating that to make significant progress in the real-world application of mobile robot teams the control system must be structured such that teams can be formed in real time from diverse components. The Control ad libitum philosophy was applied to the design of the HAA (Host, Avatar, Agent) architecture: a modular hierarchical framework built with provably correct distributed algorithms. A control system for exploration and mapping, search and deploy, and foraging was developed to evaluate the architecture in three sets of hardware-in-the-loop experiments. First, the basic functionality of the HAA architecture was studied, specifically the ability to: a) dynamically form the control system, b) dynamically form the robot team, c) dynamically form the processing network, and d) handle heterogeneous teams. Secondly, the real-time performance of the distributed algorithms was tested, and proved effective for the moderate-sized systems tested. Furthermore, the distributed Just-in-time Cooperative Simultaneous Localization and Mapping (JC-SLAM) algorithm demonstrated accuracy equal to or better than traditional approaches in resource-starved scenarios, while reducing exploration time significantly. The JC-SLAM strategies are also suitable for integration into many existing particle filter SLAM approaches, complementing their unique optimizations. Thirdly, the control system was subjected to concurrent software and hardware failures in a series of increasingly complex experiments. Even with unrealistically high rates of failure, the control system was able to successfully complete its tasks. The HAA implementation designed following the Control ad libitum philosophy proved to be capable of dynamic team formation and extremely robust against both hardware and software failure; and, due to the modularity of the system, there is significant potential for reuse of assets and future extensibility. One future goal is to make the source code publicly available and establish a forum for the development and exchange of new agents.
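
    JC-SLAM itself is not reproduced here, but the particle-filter cycle it complements is standard. The sketch below runs a generic predict-weight-resample loop on a one-dimensional state; the motion and measurement models are illustrative assumptions, not the thesis algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def particle_filter_step(particles, weights, control, measurement,
                             motion_noise=0.1, meas_noise=0.3):
        """One predict-weight-resample cycle, the loop at the heart of
        particle-filter localization and SLAM."""
        # Predict: propagate every particle through the motion model.
        particles = particles + control + rng.normal(0, motion_noise, len(particles))
        # Weight: likelihood of the measurement given each particle.
        weights = weights * np.exp(-0.5 * ((measurement - particles) / meas_noise) ** 2)
        weights /= weights.sum()
        # Resample when the effective sample size collapses.
        if 1.0 / (weights ** 2).sum() < len(particles) / 2:
            idx = rng.choice(len(particles), len(particles), p=weights)
            particles = particles[idx]
            weights = np.full(len(particles), 1.0 / len(particles))
        return particles, weights

    particles = rng.normal(0.0, 1.0, 500)
    weights = np.full(500, 1.0 / 500)
    for step in range(10):                  # true state advances by 1.0 per step
        particles, weights = particle_filter_step(particles, weights,
                                                  control=1.0,
                                                  measurement=float(step + 1))
    print("state estimate:", np.average(particles, weights=weights))
    ```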

  5. Performance Evaluation of Analog Beamforming with Hardware Impairments for mmW Massive MIMO Communication in an Urban Scenario.

    PubMed

    Gimenez, Sonia; Roger, Sandra; Baracca, Paolo; Martín-Sacristán, David; Monserrat, Jose F; Braun, Volker; Halbauer, Hardy

    2016-09-22

    The use of massive multiple-input multiple-output (MIMO) techniques for communication at millimeter-wave (mmW) frequency bands has become a key enabler to meet the data rate demands of the upcoming fifth generation (5G) cellular systems. In particular, analog and hybrid beamforming solutions are receiving increasing attention as less expensive and more power-efficient alternatives to fully digital precoding schemes. Despite their proven good performance in simple setups, their suitability for realistic cellular systems with many interfering base stations and users is still unclear. Furthermore, the performance of massive MIMO beamforming and precoding methods is in practice also affected by practical limitations and hardware constraints. In this sense, this paper assesses the performance of digital precoding and analog beamforming in an urban cellular system with an accurate mmW channel model under both ideal and realistic assumptions. The results show that analog beamforming can reach the performance of fully digital maximum ratio transmission under line-of-sight conditions and with a sufficient number of parallel radio-frequency (RF) chains, especially when the practical limitations of outdated channel information and per-antenna power constraints are considered. This work also shows the impact of the phase shifter errors and combiner losses introduced by real phase shifter and combiner implementations on analog beamforming: the former have a minor impact on performance, while the latter determine the optimum number of RF chains to be used in practice.
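
    One hardware constraint the paper quantifies, finite phase-shifter resolution, is easy to illustrate in isolation. The numpy sketch below computes the array gain of a uniform linear array when the ideal analog phases are rounded to b bits; the pure line-of-sight steering-vector channel and the chosen angle are illustrative assumptions.

    ```python
    import numpy as np

    def steering(n_ant, theta, d_over_lambda=0.5):
        """Unit-norm steering vector of a uniform linear array toward theta (rad)."""
        phase = 2 * np.pi * d_over_lambda * np.arange(n_ant) * np.sin(theta)
        return np.exp(1j * phase) / np.sqrt(n_ant)

    def quantized_bf_gain_db(n_ant, theta, bits):
        """Array gain when each analog phase shifter is rounded to 2**bits levels."""
        h = steering(n_ant, theta)                     # assumed LOS channel
        step = 2 * np.pi / 2 ** bits
        phases = np.round(np.angle(h) / step) * step   # quantized shifter settings
        w = np.exp(1j * phases) / np.sqrt(n_ant)       # constant-modulus analog weights
        return 10 * np.log10(n_ant * abs(np.vdot(w, h)) ** 2)

    theta = np.deg2rad(17.0)
    for bits in (1, 2, 3, 6):
        print(f"{bits}-bit shifters: {quantized_bf_gain_db(64, theta, bits):5.2f} dB"
              f" (ideal {10 * np.log10(64):.2f} dB)")
    ```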

  6. U-Pb ages and Hf isotope compositions of zircons in plutonic rocks from the central Famatinian arc, Argentina

    NASA Astrophysics Data System (ADS)

    Otamendi, Juan E.; Ducea, Mihai N.; Cristofolini, Eber A.; Tibaldi, Alina M.; Camilletti, Giuliano C.; Bergantz, George W.

    2017-07-01

    The Famatinian arc formed around the South Iapetus rim during the Ordovician, when oceanic lithosphere subducted beneath the West Gondwana margin. We present combined in situ U-Th-Pb and Lu-Hf isotope analyses of zircon to gain insights into the origin and evolution of Famatinian magmatism. Zircon crystals sampled from four intermediate and silicic plutonic rocks confirm previous observations showing that voluminous magmatism took place during a relatively short pulse between the Early and Middle Ordovician (472-465 Ma). The entire zircon population for the four plutonic rocks yields coherent negative εHf values spanning several initial εHf(t) units (-0.3 to -8.0). The range of εHf units in detrital zircons of Famatinian metasedimentary rocks reflects a prolonged history of the cratonic sources from the Proterozoic to the earliest Phanerozoic. Typical tonalites and granodiorites that contain zircons with evolved Hf isotopic compositions formed upon incorporating (meta)sedimentary materials into calc-alkaline metaluminous magmas. The evolved Hf isotope ratios of zircons in the subduction-related plutonic rocks strongly reflect the Hf isotopic character of the metasedimentary contaminant, even though the linked differentiation and growth of the Famatinian arc crust was driven by ascending and evolving mantle magmas. Geochronology and Hf isotope systematics in plutonic zircons allow us to understand the petrogenesis of igneous series and the provenance of magma sources. However, these data could be inadequate for computing model ages and supporting models of crustal evolution.

  7. Detrital zircon age and isotopic constraints on the provenance of turbidites from the southernmost part of the Beishan orogen, NW China

    NASA Astrophysics Data System (ADS)

    Guo, Q. Q.; Chung, S. L.; Lee, H. Y.; Xiao, W.; Hou, Q.; Li, S.

    2017-12-01

    The Altaids in Central and East Asia is one of the largest accretionary orogenic collages in the world. The Beishan orogen, which links the Tianshan and Xingmeng orogens, occupies a key position for tracing the terminal processes of the Altaids. It comprises an assemblage of magmatic arcs and ophiolitic mélanges. The Permian clastic turbidites, situated between the Huaniushan arc and the Shibanshan arc, are the youngest reported deep-marine deposits in the Beishan orogen. They are separated into the Liuyuan turbidites (NT) to the north and the Heishankou turbidites (ST) to the south by the Liuyuan complex. Detrital zircon grains from the NT yielded a wide age range, from 254 to 3111 Ma, with two age clusters at 273 Ma and 424 Ma, indicating provenance from the Huaniushan arc to the north. Those from the ST yielded ages from 260 to 2209 Ma, with age clusters at 270 Ma, 295 Ma, 420 Ma and 878 Ma, indicating provenance from the Shibanshan arc to the south. The youngest three grains from the NT yield a weighted mean age of 260 Ma, and those from the ST an age of 255 Ma, indicating an end-Permian maximum depositional age. The Precambrian zircons of the NT have diverse ɛHf(t) values (-12.6 to +10.4), while those of the ST range from -6 to -2.6, indicating distinct provenance histories. The NT have more positive ɛNd(t) values than the ST, suggesting more juvenile or less evolved crustal components in the source. The two contrasting provenances, together with data in the literature, define the latest suture in the Beishan region at 240-250 Ma. The younger peak of detrital zircon U-Pb ages from the northern part of the final suture zone in the southern Altaids youngs eastward from 288 Ma to 247 Ma, which may characterize the closure of the Paleo-Asian Ocean from west to east over about 40 Ma. This identification of the latest suture in the southern Altaids provides new constraints not only on the Paleo-Asian Ocean - specifically the nature and timing of the end of subduction - but also on the amalgamation of the Eurasian supercontinent, which consists of micro-blocks with a variety of histories.

  8. Evolving spatio-temporal data machines based on the NeuCube neuromorphic framework: Design methodology and selected applications.

    PubMed

    Kasabov, Nikola; Scott, Nathan Matthew; Tu, Enmei; Marks, Stefan; Sengupta, Neelava; Capecci, Elisa; Othman, Muhaini; Doborjeh, Maryam Gholami; Murli, Norhanifah; Hartono, Reggio; Espinosa-Ramos, Josafath Israel; Zhou, Lei; Alvi, Fahad Bashir; Wang, Grace; Taylor, Denise; Feigin, Valery; Gulyaev, Sergei; Mahmoud, Mahmoud; Hou, Zeng-Guang; Yang, Jie

    2016-06-01

    The paper describes a new type of evolving connectionist systems (ECOS) called evolving spatio-temporal data machines, based on neuromorphic, brain-like information processing principles (eSTDM). These are multi-modular computer systems designed to deal with large and fast spatio/spectro-temporal data using spiking neural networks (SNN) as the major processing modules. ECOS, and eSTDM in particular, can learn incrementally from data streams, can include 'on the fly' new input variables, new output class labels or regression outputs, can continuously adapt their structure and functionality, and can be visualised and interpreted for new knowledge discovery and for a better understanding of the data and the processes that generated them. eSTDM can be used for early event prediction due to the ability of the SNN to spike early, before the whole input vectors (on which they were trained) are presented. A framework for building eSTDM, called NeuCube, is presented along with a design methodology for building eSTDM using it. The implementation of this framework in MATLAB, Java, and PyNN (Python) is presented; the latter facilitates the use of neuromorphic hardware platforms to run the eSTDM. Selected examples are given of eSTDM for pattern recognition and early event prediction on EEG data, fMRI data, multisensory seismic data, ecological data, climate data, and audio-visual data. Future directions are discussed, including extension of the NeuCube framework for building neurogenetic eSTDM and new applications of eSTDM. Copyright © 2015 Elsevier Ltd. All rights reserved.
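
    NeuCube's SNN modules are far richer than a single neuron, but as a hedged illustration of the spiking dynamics they build on, here is a minimal leaky integrate-and-fire neuron; the constants and function name are illustrative and are not part of NeuCube's API.

    ```python
    import numpy as np

    def lif_simulate(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                     v_thresh=1.0, v_reset=0.0):
        """Leaky integrate-and-fire neuron: integrate input, spike at threshold.
        Returns the membrane trace and the spike times (in steps)."""
        v, trace, spikes = v_rest, [], []
        for t, i_in in enumerate(input_current):
            v += dt / tau * (-(v - v_rest) + i_in)  # Euler step of the leaky integrator
            if v >= v_thresh:                       # threshold crossing: spike and reset
                spikes.append(t)
                v = v_reset
            trace.append(v)
        return np.array(trace), spikes

    # A step input: the neuron fires soon after the input becomes strong,
    # before the full input sequence has been seen, which is the property
    # an eSTDM exploits for early event prediction.
    current = np.concatenate([np.zeros(20), 1.5 * np.ones(80)])
    _, spike_times = lif_simulate(current)
    print("first spike at step:", spike_times[0] if spike_times else None)
    ```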

  9. Verification Challenges of Dynamic Testing of Space Flight Hardware

    NASA Technical Reports Server (NTRS)

    Winnitoy, Susan

    2010-01-01

    The Six Degree-of-Freedom Dynamic Test System (SDTS) is a test facility at the National Aeronautics and Space Administration (NASA) Johnson Space Center in Houston, Texas for performing dynamic verification of space structures and hardware. Some examples of past and current tests include the verification of on-orbit robotic inspection systems, space vehicle assembly procedures and docking/berthing systems. The facility is able to integrate a dynamic simulation of on-orbit spacecraft mating or demating using flight-like mechanical interface hardware. A force moment sensor is utilized for input to the simulation during the contact phase, thus simulating the contact dynamics. While the verification of flight hardware presents many unique challenges, one particular area of interest is with respect to the use of external measurement systems to ensure accurate feedback of dynamic contact. There are many commercial off-the-shelf (COTS) measurement systems available on the market, and the test facility measurement systems have evolved over time to include two separate COTS systems. The first system incorporates infra-red sensing cameras, while the second system employs a laser interferometer to determine position and orientation data. The specific technical challenges with the measurement systems in a large dynamic environment include changing thermal and humidity levels, operational area and measurement volume, dynamic tracking, and data synchronization. The facility is located in an expansive high-bay area that is occasionally exposed to outside temperature when large retractable doors at each end of the building are opened. The laser interferometer system, in particular, is vulnerable to the environmental changes in the building. The operational area of the test facility itself is sizeable, ranging from seven meters wide and five meters deep to as much as seven meters high. Both facility measurement systems have desirable measurement volumes and the accuracies vary within the respective volumes. In addition, because this is a dynamic facility with a moving test bed, direct line-of-sight may not be available at all times between the measurement sensors and the tracking targets. Finally, the feedback data from the active test bed along with the two external measurement systems must be synchronized to allow for data correlation. To ensure the desired accuracy and resolution of these systems, calibration of the systems must be performed regularly. New innovations in sensor technology itself are periodically incorporated into the facility's overall measurement scheme. In addressing the challenges of the measurement systems, the facility is able to provide essential position and orientation data to verify the dynamic performance of space flight hardware.

  10. Scalable geocomputation: evolving an environmental model building platform from single-core to supercomputers

    NASA Astrophysics Data System (ADS)

    Schmitz, Oliver; de Jong, Kor; Karssenberg, Derek

    2017-04-01

    There is an increasing demand to run environmental models at big scales: simulations over large areas at high resolution. The heterogeneity of available computing hardware, such as multi-core CPUs, GPUs, or supercomputers, potentially provides significant computing power to fulfil this demand. However, exploiting it requires detailed knowledge of the underlying hardware, parallel algorithm design, and implementation in an efficient system programming language. Domain scientists such as hydrologists or ecologists often lack this specific software engineering knowledge; their emphasis is (and should be) on exploratory building and analysis of simulation models. As a result, models constructed by domain specialists mostly do not take full advantage of the available hardware. A promising solution is to separate the model building activity from software engineering by offering domain specialists a model building framework with pre-programmed building blocks that they combine to construct a model. The model building framework, consequently, needs built-in capabilities to make full use of the available hardware. Developing such a framework that provides understandable code for domain scientists while being runtime efficient poses several challenges for its developers. For example, optimisations can be performed on individual operations or on the whole model, and tasks need to be generated for a well-balanced execution without explicitly knowing the complexity of the domain problem provided by the modeller. Ideally, a modelling framework supports the optimal use of available hardware whichever combination of model building blocks scientists use. We present our ongoing work on developing parallel algorithms for spatio-temporal modelling, demonstrating 1) PCRaster, an environmental software framework (http://www.pcraster.eu) providing spatio-temporal model building blocks, and 2) the parallelisation of about 50 of these building blocks using the new Fern library (https://github.com/geoneric/fern/), an independent generic raster processing library. Fern is a highly generic software library whose algorithms can be configured according to the configuration of a modelling framework. With manageable programming effort (e.g. matching data types between programming and domain language) we created a binding between Fern and PCRaster. The resulting PCRaster Python multicore module can be used to execute existing PCRaster models without making any changes to the model code. We show initial results on synthetic and geoscientific models indicating significant runtime improvements provided by parallel local and focal operations. We further outline challenges in improving the remaining algorithms, such as flow operations over digital elevation maps, and further potential improvements like enhancing disk I/O.
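
    Independent of Fern's or PCRaster's actual APIs, the kind of building-block parallelisation described can be sketched in a few lines: a 3x3 focal mean split across processes, with halo rows so strip edges stay correct. The chunking scheme is deliberately simple and purely illustrative.

    ```python
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def focal_mean_strip(args):
        """3x3 focal mean for one horizontal strip of the padded raster; the
        row bounds index the padded array, so each strip sees its halo rows."""
        padded, lo, hi = args
        out = np.empty((hi - lo, padded.shape[1] - 2))
        for r in range(lo, hi):
            for c in range(1, padded.shape[1] - 1):
                out[r - lo, c - 1] = padded[r - 1:r + 2, c - 1:c + 2].mean()
        return out

    def parallel_focal_mean(raster, workers=4):
        padded = np.pad(raster, 1, mode="edge")
        bounds = np.linspace(1, raster.shape[0] + 1, workers + 1, dtype=int)
        tasks = [(padded, lo, hi) for lo, hi in zip(bounds[:-1], bounds[1:])]
        with ProcessPoolExecutor(workers) as pool:
            return np.vstack(list(pool.map(focal_mean_strip, tasks)))

    if __name__ == "__main__":
        dem = np.random.rand(200, 200)
        print(parallel_focal_mean(dem).shape)   # (200, 200)
    ```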

  11. New insights in diagnosis and treatment for Retinopathy of Prematurity.

    PubMed

    Cernichiaro-Espinosa, Linda A; Olguin-Manriquez, Francisco J; Henaine-Berra, Andree; Garcia-Aguirre, Gerardo; Quiroz-Mercado, Hugo; Martinez-Castellanos, Maria A

    2016-10-01

    The purpose of this study was to review current perspectives on the diagnosis and treatment of Retinopathy of Prematurity (ROP). We performed a systematic review of the research published online and in print regarding ROP in different settings around the world. The Early Treatment for ROP (ETROP) classification is the currently accepted classification of ROP. Fluorescein angiography and spectral domain optical coherence tomography (SD-OCT) may eventually lead to changes in the definition of ROP and, as a consequence, will serve as a guide for treatment. Intravitreal anti-VEGF therapy has proven to be more effective in terms of lowering recurrence, allowing growth of the peripheral retina, and diminishing the incidence of retinal detachment when proliferative ROP is diagnosed. Whether anti-VEGF plus laser is better than either of these therapies separately remains a subject of discussion. Telemedicine is evolving every day to allow access to remote areas that lack a retina specialist for treatment. A management algorithm is proposed according to our reference center's experience. ROP is an evolving subject, with a vulnerable population of study that, once treated with good results, experiences a reduction in visual disability and, in consequence, a lifelong improvement.

  12. A case study in evolutionary contingency.

    PubMed

    Blount, Zachary D

    2016-08-01

    Biological evolution is a fundamentally historical phenomenon in which intertwined stochastic and deterministic processes shape lineages with long, continuous histories that exist in a changing world that has a history of its own. The degree to which these characteristics render evolution historically contingent, and evolutionary outcomes thereby unpredictably sensitive to history has been the subject of considerable debate in recent decades. Microbial evolution experiments have proven among the most fruitful means of empirically investigating the issue of historical contingency in evolution. One such experiment is the Escherichia coli Long-Term Evolution Experiment (LTEE), in which twelve populations founded from the same clone of E. coli have evolved in parallel under identical conditions. Aerobic growth on citrate (Cit(+)), a novel trait for E. coli, evolved in one of these populations after more than 30,000 generations. Experimental replays of this population's evolution from various points in its history showed that the Cit(+) trait was historically contingent upon earlier mutations that potentiated the trait by rendering it mutationally accessible. Here I review this case of evolutionary contingency and discuss what it implies about the importance of historical contingency arising from the core processes of evolution. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. A primer on precision medicine informatics.

    PubMed

    Sboner, Andrea; Elemento, Olivier

    2016-01-01

    In this review, we describe key components of a computational infrastructure for a precision medicine program that is based on clinical-grade genomic sequencing. Specific aspects covered in this review include software components and hardware infrastructure, reporting, integration into Electronic Health Records for routine clinical use and regulatory aspects. We emphasize informatics components related to reproducibility and reliability in genomic testing, regulatory compliance, traceability and documentation of processes, integration into clinical workflows, privacy requirements, prioritization and interpretation of results to report based on clinical needs, rapidly evolving knowledge base of genomic alterations and clinical treatments and return of results in a timely and predictable fashion. We also seek to differentiate between the use of precision medicine in germline and cancer. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  14. Evolving Reliability and Maintainability Allocations for NASA Ground Systems

    NASA Technical Reports Server (NTRS)

    Munoz, Gisela; Toon, T.; Toon, J.; Conner, A.; Adams, T.; Miranda, D.

    2016-01-01

    This paper describes the methodology and value of modifying allocations to reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program's subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. This iterative process provided an opportunity for the reliability engineering team to reevaluate allocations as systems moved beyond their conceptual and preliminary design phases. These new allocations are based on updated designs and maintainability characteristics of the components. It was found that trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper discusses the results of reliability and maintainability reallocations made for the GSDO subsystems as the program nears the end of its design phase.

  15. ISS ECLSS Technology Evolution for Exploration

    NASA Technical Reports Server (NTRS)

    Carrasquillo, Robyn L.

    2005-01-01

    The baseline environmental control and life support systems (ECLSS) currently deployed on the International Space Station (ISS) and the regenerative oxygen generation and water reclamation systems to be added in 2008 are based on technologies selected during the early 1990's. While they are generally meeting or exceeding requirements for supporting the ISS crew, lessons learned from hardware development and on-orbit experience, together with advances in the technology state of the art and the unique requirements of future manned exploration missions, prompt consideration of the next steps to be taken to evolve these technologies to improve robustness and reliability, enhance performance, and reduce resource requirements such as power and logistics upmass. This paper discusses the current state of ISS ECLSS technology and identifies possible areas for evolutionary enhancement or improvement.

  16. Evolution of the IBDM Structural Latch Development into a Generic Simplified Design

    NASA Technical Reports Server (NTRS)

    DeVriendt, K.; Dittmer, H.; Vrancken, D.; Urmston, P.; Gracia, O.

    2010-01-01

    This paper presents the evolution in the development of the structural latch for the International Berthing Docking Mechanism (IBDM, see Figure 1). It reports on the lessons learned since completion of the test program on the engineering development unit of the first-generation latching system in 2007. The initial latch design passed through a second-generation concept in 2008 and has now evolved into a third generation of the mechanism. Functional and structural testing on the latest latch hardware has recently been completed with good results. The IBDM latching system will provide the structural connection between two mated space vehicles after berthing or docking. The mechanism guarantees that the interface seals are compressed to form a leak-tight pressure system that creates a passageway for the astronauts.

  17. NASA Space Launch System: An Enabling Capability for Discovery

    NASA Technical Reports Server (NTRS)

    Creech, Stephen D.

    2014-01-01

    SLS provides the capability for human exploration missions. The 70 t configuration enables the EM-1 and EM-2 flight tests, while evolved configurations enable missions including humans to Mars. SLS offers unrivaled benefits for a variety of missions: 70 t provides greater mass lift than any contemporary launch vehicle, and 130 t offers greater lift than any launch vehicle ever. With 8.4 m and 10 m fairings, SLS will offer greater volume lift capability than any other vehicle. The initial ICPS configuration and its future evolution will offer high C3 for beyond-Earth missions. SLS is currently on schedule for first launch in December 2017. Preliminary design was completed in July 2013, and SLS is now in implementation. Manufacture and testing are currently underway, and hardware now exists representing all SLS elements.

  18. Cryogenic Propulsion Stage (CPS) Configuration in Support of NASA's Multiple Design Reference Missions (DRMs)

    NASA Technical Reports Server (NTRS)

    Hanna, Stephen G.; Jones, David L.; Creech, Stephen D.; Lawrence, Thomas D.

    2012-01-01

    In support of the National Aeronautics and Space Administration's (NASA) Human Exploration and Operations Mission Directorate (HEOMD), the Space Launch System (SLS) is being designed for safe, affordable, and sustainable human and scientific exploration missions beyond Earth orbit (BEO). The SLS team is tasked with developing a system capable of safely and repeatedly lofting a new fleet of spaceflight vehicles beyond Earth orbit. The Cryogenic Propulsion Stage (CPS) is a key enabler for evolving the SLS capability for BEO missions. This paper reports on the methodology and initial recommendations relative to the CPS, giving a brief retrospective of early studies on this promising propulsion hardware. It provides an overview of the requirements development and CPS configuration in support of NASA's multiple Design Reference Missions (DRMs).

  19. High-performance dynamic quantum clustering on graphics processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wittek, Peter, E-mail: peterwittek@acm.org

    2013-01-15

    Clustering methods in machine learning may benefit from borrowing metaphors from physics. Dynamic quantum clustering associates a Gaussian wave packet with the multidimensional data points and regards them as eigenfunctions of the Schroedinger equation. The clustering structure emerges by letting the system evolve, and the visual nature of the algorithm has been shown to be useful in a range of applications. Furthermore, the method only uses matrix operations, which readily lend themselves to parallelization. In this paper, we develop an implementation on graphics hardware and investigate how this approach can accelerate the computations. We achieve a speedup of up to two orders of magnitude over a multicore CPU implementation, which proves that quantum-like methods and acceleration by graphics processing units have a great relevance to machine learning.
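
    Dynamic quantum clustering extends the quantum clustering potential of Horn and Gottlieb, whose evaluation is precisely the kind of dense matrix arithmetic that maps well onto GPUs. A hedged numpy sketch of that potential follows; sigma and the synthetic data are illustrative.

    ```python
    import numpy as np

    def quantum_potential(points, data, sigma=0.5):
        """Quantum clustering potential V(x), up to an additive constant:
        V = sum_i d_i^2 exp(-d_i^2 / 2 sigma^2) / (2 sigma^2 psi), where psi
        is the Parzen-window wavefunction. Cluster centers sit at the minima
        of V, and everything here is dense matrix arithmetic."""
        d2 = ((points[:, None, :] - data[None, :, :]) ** 2).sum(-1)  # (m, n)
        g = np.exp(-d2 / (2 * sigma ** 2))
        psi = g.sum(1)
        return (d2 * g).sum(1) / (2 * sigma ** 2 * psi)

    # Two Gaussian blobs; the potential is lowest near the two blob centers.
    rng = np.random.default_rng(0)
    data = np.vstack([rng.normal(0, 0.2, (50, 2)), rng.normal(3, 0.2, (50, 2))])
    v = quantum_potential(data, data)
    print("points with lowest potential:\n", data[np.argsort(v)[:3]])
    ```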

  20. Industrial Inspection with Open Eyes: Advance with Machine Vision Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zheng; Ukida, H.; Niel, Kurt

    Machine vision systems have evolved significantly with technology advances to tackle the challenges of modern manufacturing industry. A wide range of industrial inspection applications for quality control benefit from visual information captured by different types of cameras variously configured in a machine vision system. This chapter screens the state of the art in machine vision technologies in the light of hardware, software tools, and major algorithm advances for industrial inspection. Inspection beyond the visual spectrum offers a significant complement to visual inspection, and the combination of multiple technologies makes it possible for inspection to achieve better performance and efficiency in varied applications. The diversity of the applications demonstrates the great potential of machine vision systems for industry.

  1. Evolving Reliability and Maintainability Allocations for NASA Ground Systems

    NASA Technical Reports Server (NTRS)

    Munoz, Gisela; Toon, Troy; Toon, Jamie; Conner, Angelo C.; Adams, Timothy C.; Miranda, David J.

    2016-01-01

    This paper describes the methodology and value of modifying allocations to reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program’s subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. This iterative process provided an opportunity for the reliability engineering team to reevaluate allocations as systems moved beyond their conceptual and preliminary design phases. These new allocations are based on updated designs and maintainability characteristics of the components. It was found that trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper discusses the results of reliability and maintainability reallocations made for the GSDO subsystems as the program nears the end of its design phase.

  2. Evolving Reliability and Maintainability Allocations for NASA Ground Systems

    NASA Technical Reports Server (NTRS)

    Munoz, Gisela; Toon, Jamie; Toon, Troy; Adams, Timothy C.; Miranda, David J.

    2016-01-01

    This paper describes the methodology that was developed to allocate reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program's subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. Allocating is an iterative process; as systems moved beyond their conceptual and preliminary design phases this provided an opportunity for the reliability engineering team to reevaluate allocations based on updated designs and maintainability characteristics of the components. Trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper will discuss the value of modifying reliability and maintainability allocations made for the GSDO subsystems as the program nears the end of its design phase.

  3. Stress analysis and design considerations for Shuttle pointed autonomous research tool for astronomy /SPARTAN/

    NASA Technical Reports Server (NTRS)

    Ferragut, N. J.

    1982-01-01

    The Shuttle Pointed Autonomous Research Tool for Astronomy (SPARTAN) family of spacecraft is intended to operate with minimum interfaces to the U.S. Space Shuttle in order to increase flight opportunities. The SPARTAN I spacecraft was designed to enhance structural capabilities and increase reliability. The approach followed results from work experience that evolved from sounding rocket projects. Structural models were developed to perform the analyses necessary to satisfy safety requirements for Shuttle hardware, and a loads analysis must also be performed. Stress analysis calculations will be performed on the main structural elements and subcomponents. Attention is given to design considerations and program definition, the schematic representation of a finite element model used for the SPARTAN I spacecraft, details of the loads analysis, the stress analysis, and the implications of the fracture mechanics plan.

  4. A review of antimicrobial peptides and their therapeutic potential as anti-infective drugs.

    PubMed

    Gordon, Y Jerold; Romanowski, Eric G; McDermott, Alison M

    2005-07-01

    Antimicrobial peptides (AMPs) are an essential part of innate immunity that evolved in most living organisms over 2.6 billion years to combat microbial challenge. These small cationic peptides are multifunctional as effectors of innate immunity on skin and mucosal surfaces and have demonstrated direct antimicrobial activity against various bacteria, viruses, fungi, and parasites. This review summarizes their progress to date as commercial antimicrobial drugs for topical and systemic indications. Literature review. Despite numerous clinical trials, no modified AMP has obtained Food & Drug Administration approval yet for any topical or systemic medical indications. While AMPs are recognized as essential components of natural host innate immunity against microbial challenge, their usefulness as a new class of antimicrobial drugs still remains to be proven.

  5. Neutral Theory and Rapidly Evolving Viral Pathogens.

    PubMed

    Frost, Simon D W; Magalis, Brittany Rife; Kosakovsky Pond, Sergei L

    2018-06-01

    The evolution of viral pathogens is shaped by strong selective forces that are exerted during jumps to new hosts, confrontations with host immune responses and antiviral drugs, and numerous other processes. However, while undeniably strong and frequent, adaptive evolution is largely confined to small parts of information-packed viral genomes, and the majority of observed variation is effectively neutral. The predictions and implications of the neutral theory have proven immensely useful in this context, with applications spanning understanding within-host population structure, tracing the origins and spread of viral pathogens, predicting evolutionary dynamics, and modeling the emergence of drug resistance. We highlight the multiple ways in which the neutral theory has had an impact, which has been accelerated in the age of high-throughput, high-resolution genomics.

  6. Consistent visualizations of changing knowledge

    PubMed Central

    Tipney, Hannah J.; Schuyler, Ronald P.; Hunter, Lawrence

    2009-01-01

    Networks are increasingly used in biology to represent complex data in uncomplicated symbolic form. However, as biological knowledge is continually evolving, so must the networks representing this knowledge. Capturing and presenting this type of knowledge change over time is particularly challenging due to the intimate manner in which researchers customize the networks they come into contact with. The effective visualization of this knowledge is important, as it creates insight into complex systems and stimulates hypothesis generation and biological discovery. Here we highlight how the retention of user customizations and the collection and visualization of knowledge-associated provenance support effective and productive network exploration. We also present an extension of the Hanalyzer system, ReOrient, which supports network exploration and analysis in the presence of knowledge change. PMID:21347184

  7. On Polymorphic Circuits and Their Design Using Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Zebulum, Ricardo; Keymeulen, Didier; Lohn, Jason; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper introduces the concept of polymorphic electronics (polytronics), referring to electronics with superimposed built-in functionality. A function change does not require switches/reconfiguration as in traditional approaches. Instead, the change comes from modifications in the characteristics of devices involved in the circuit, in response to controls such as temperature, power supply voltage (VDD), control signals, light, etc. The paper illustrates polytronic circuits in which the control is done by temperature, morphing signals, and VDD respectively. Polytronic circuits are obtained by evolutionary design/evolvable hardware techniques. These techniques are ideal for polytronics design, a new area that lacks design guidelines and know-how, yet whose requirements/objectives are easy to specify and test. The circuits are evolved/synthesized in two different modes. The first mode explores an unstructured space, in which transistors can be interconnected freely in any arrangement (in simulations only). The second mode uses a Field Programmable Transistor Array (FPTA) model, and the circuit topology is sought as a mapping onto a programmable architecture (these experiments are performed both in simulations and on FPTA chips). The experiments demonstrated the synthesis of polytronic circuits by evolution. The capacity of storing/hiding "extra" functions provides for watermark/invisible functionality, thus polytronics may find uses in intelligence/security applications.
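
    The evolutionary search needs a fitness that rewards correct behavior in every control mode at once. The sketch below shows that multi-mode scoring in the abstract; the simulate callable is a hypothetical stand-in for an FPTA model or SPICE run, not an interface from the paper.

    ```python
    def polymorphic_fitness(genome, simulate, modes, targets):
        """Score a candidate circuit that must realize a different target
        function in each control mode (e.g. AND at 27 C, OR at 125 C).
        simulate(genome, mode, inputs) is a hypothetical circuit evaluator."""
        error = 0.0
        test_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
        for mode, target in zip(modes, targets):
            for x in test_inputs:
                error += abs(simulate(genome, mode, x) - target(*x))
        return -error   # higher fitness = closer to both behaviors at once

    # Target pair for a temperature-controlled polymorphic gate:
    targets = [lambda a, b: a & b,   # behave as AND in mode 0
               lambda a, b: a | b]   # behave as OR in mode 1
    perfect = lambda genome, mode, x: targets[mode](*x)  # dummy "circuit"
    print(polymorphic_fitness(None, perfect, modes=[0, 1], targets=targets))  # -0.0, no error
    ```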

  8. Digital Transplantation Pathology: Combining Whole Slide Imaging, Multiplex Staining, and Automated Image Analysis

    PubMed Central

    Isse, Kumiko; Lesniak, Andrew; Grama, Kedar; Roysam, Badrinath; Minervini, Martha I.; Demetris, Anthony J

    2013-01-01

    Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. “-Omics” analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: a) spatial-temporal relationships; b) rare events/cells; c) complex structural context; and d) integration into a “systems” model. Nevertheless, except for immunostaining, no transformative advancements have “modernized” routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the traditional histology - global “–omic” analyses gap. Included are side-by-side comparisons, objective biopsy finding quantification, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes. PMID:22053785

  9. Novel data visualizations of X-ray data for aviation security applications using the Open Threat Assessment Platform (OTAP)

    NASA Astrophysics Data System (ADS)

    Gittinger, Jaxon M.; Jimenez, Edward S.; Holswade, Erica A.; Nunna, Rahul S.

    2017-02-01

    This work demonstrates the implementation of traditional and non-traditional visualizations of x-ray images for aviation security applications that become feasible with open system architecture initiatives such as the Open Threat Assessment Platform (OTAP). Anomalies of interest to aviation security are fluid; their characteristic signals can evolve rapidly. OTAP is a limited-scope, open-architecture baggage screening prototype that intends to allow 3rd-party vendors to develop and easily implement, integrate, and deploy detection algorithms and specialized hardware on a field-deployable screening technology [13]. In this study, stereoscopic images were created using an unmodified, field-deployed system and rendered on the Oculus Rift, a commercial virtual reality video gaming headset. The example described in this work is not dependent on the Oculus Rift and is possible using any comparable hardware configuration capable of rendering stereoscopic images. The depth information provided by viewing the images will aid in the detection of characteristic signals from anomalies of interest. If successful, OTAP has the potential to allow aviation security to become more fluid in its adaptation to the evolution of anomalies of interest. This work demonstrates one example that is easily implemented using the OTAP platform and that could lead to a future generation of ATR algorithms and data visualization approaches.

  10. Current And Future Directions Of Lens Design Software

    NASA Astrophysics Data System (ADS)

    Gustafson, Darryl E.

    1983-10-01

    The most effective environment for doing lens design continues to evolve as new computer hardware and software tools become available. Important recent hardware developments include low-cost but powerful interactive multi-user 32-bit computers with virtual memory that are totally software-compatible with prior larger and more expensive members of the family, and a rapidly growing variety of graphics devices for both hard-copy and screen graphics, including many with color capability. In addition, with optical design software readily accessible in many forms, optical design has become a part-time activity for a large number of engineers instead of being restricted to a small number of full-time specialists. A designer interface that is friendly for the part-time user while remaining efficient for the full-time designer is thus becoming more important as well as more practical. Along with these developments, software tools in other scientific and engineering disciplines are proliferating. Thus, the optical designer is less and less unique in his use of computer-aided techniques and faces the challenge and opportunity of efficiently communicating his designs to other computer-aided-design (CAD), computer-aided-manufacturing (CAM), structural, thermal, and mechanical software tools. This paper addresses the impact of these developments on the current and future directions of the CODE V(TM) optical design software package, its implementation, and the resulting lens design environment.

  11. Online and Offline Pattern Recognition in PANDA

    NASA Astrophysics Data System (ADS)

    Boca, Gianluigi

    2016-11-01

    PANDA is one of the four experiments that will run at the new FAIR facility being built in Darmstadt, Germany. It is a fixed-target experiment: a beam of antiprotons collides with a jet proton target (the maximum center-of-mass energy is 5.46 GeV). The interaction rate at startup will be 2 MHz, with the goal of reaching 20 MHz at full luminosity. The beam of antiprotons will be essentially continuous. PANDA will have no hardware trigger, only a software trigger, to allow maximum flexibility in the physics program. All of these characteristics are severe challenges for the reconstruction code, which 1) must be fast, since it has to be validated at up to a 20 MHz interaction rate, and 2) must be able to reject fake tracks caused by remnant hits belonging to previous or later events in some slow detectors, for example the straw tubes in the central region. The pattern recognition (PR) of PANDA will have to run both online, to achieve a first fast selection, and offline, at lower rate, for a more refined selection. The PR code in PANDA is continuously evolving; this contribution shows its present status. I give an overview of three examples of PR following different strategies and/or implemented on different hardware (FPGA, GPUs, CPUs) and, where available, report their performance.

  12. Fast 2D flood modelling using GPU technology - recent applications and new developments

    NASA Astrophysics Data System (ADS)

    Crossley, Amanda; Lamb, Rob; Waller, Simon; Dunning, Paul

    2010-05-01

    In recent years there has been considerable interest amongst scientists and engineers in exploiting the potential of commodity graphics hardware for desktop parallel computing. The Graphics Processing Units (GPUs) that are used in PC graphics cards have now evolved into powerful parallel co-processors that can be used to accelerate the numerical codes used for floodplain inundation modelling. We report in this paper on experience over the past two years in developing and applying two-dimensional (2D) flood inundation models using GPUs to achieve significant practical performance benefits. Starting with a solution scheme for the 2D diffusion wave approximation to the 2D Shallow Water Equations (SWEs), we have demonstrated the capability to reduce model run times in 'real-world' applications using GPU hardware and programming techniques. We then present results from a GPU-based 2D finite volume SWE solver. A series of numerical test cases demonstrate that the model produces outputs that are accurate and consistent with reference results published elsewhere. In comparisons conducted for a real-world test case, the GPU-based SWE model was over 100 times faster than the CPU version. We conclude with some discussion of practical experience in using the GPU technology for flood mapping applications, and for research projects investigating use of Monte Carlo simulation methods for the analysis of uncertainty in 2D flood modelling.
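
    The authors' solver is not reproduced here, but the structure that makes such models GPU-friendly can be sketched with a much-simplified, constant-coefficient stand-in for the diffusion wave approximation: each cell update depends only on its four neighbors, so all cells can be updated independently, which is exactly what maps onto GPU threads. A minimal NumPy sketch (grid size, coefficient, and time step are arbitrary):

    ```python
    import numpy as np

    def diffusion_step(h, D, dx, dt):
        """One explicit finite-difference step of dh/dt = D * laplacian(h)
        on a regular grid with periodic boundaries. A toy stand-in for the
        diffusion wave scheme described in the abstract; every cell update
        is independent, hence trivially parallel."""
        lap = (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
               np.roll(h, 1, 1) + np.roll(h, -1, 1) - 4.0 * h) / dx**2
        return h + dt * D * lap

    # Toy run: a mound of water spreading over a flat domain
    h = np.zeros((256, 256))
    h[120:136, 120:136] = 1.0
    for _ in range(100):
        h = diffusion_step(h, D=1.0, dx=1.0, dt=0.2)
    print(round(h.sum(), 3))   # the stencil conserves total volume
    ```

    The explicit step is stable for dt <= dx**2 / (4 * D); a production shallow-water solver adds momentum, friction, and wet/dry handling on top of this basic stencil pattern.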

  13. Cutting More than Metal: Breaking the Development Cycle

    NASA Technical Reports Server (NTRS)

    Singer, Chris

    2014-01-01

    New technology is changing the way we do business at NASA. The ability to use these new tools is made possible by a learning culture able to embrace innovation, flexibility, and prudent risk tolerance, while retaining the hard-won lessons learned from other successes and failures. Technologies such as 3-D manufacturing and structured light scanning are re-shaping the entire product life cycle, from design and analysis, through production, verification, logistics and operations. New fabrication techniques, verification techniques, integrated analysis, and models that follow the hardware from initial concept through operation are reducing the cost and time of building space hardware. Using these technologies to be more efficient, reliable and affordable requires that we bring them to a level safe for NASA systems, maintain appropriate rigor in testing and acceptance, and transition new technology. Maximizing these technologies also requires cultural acceptance and understanding, and a balance of rules with creativity. Evolved systems engineering processes at NASA are increasingly more flexible than they have been in the past, enabling the implementation of new techniques and approaches. This paper provides an overview of NASA Marshall Space Flight Center's new approach to development, as well as examples of how that approach has been incorporated into NASA's Space Launch System (SLS) Program, which counts safety, affordability, and sustainability among its key tenets. Among the 3-D technologies discussed in this paper is the design and testing of various rocket engine components.

  14. Vapor Compression Distillation Subsystem (VCDS) component enhancement, testing and expert fault diagnostics development, volume 1

    NASA Technical Reports Server (NTRS)

    Kovach, L. S.; Zdankiewicz, E. M.

    1987-01-01

    Vapor compression distillation technology for phase-change recovery of potable water from wastewater has evolved as a technically mature approach for use aboard the Space Station. A program to parametrically test an advanced preprototype Vapor Compression Distillation Subsystem (VCDS) was completed during 1985 and 1986. In parallel with parametric testing, a hardware improvement program was initiated to test the feasibility of incorporating several key improvements into the advanced preprototype VCDS following initial parametric tests. Specific areas of improvement included long-life, self-lubricated bearings, a lightweight, highly efficient compressor, and a long-life magnetic drive. With the exception of the self-lubricated bearings, these improvements were incorporated. The advanced preprototype VCDS was designed to reclaim 95 percent of the available wastewater at a nominal water recovery rate of 1.36 kg/h, achieved at a solids concentration of 2.3 percent and a 308 K condenser temperature. While this performance was maintained for the initial testing, a 300 percent improvement in water production rate, with a correspondingly lower specific energy, was achieved following incorporation of the improvements. Testing involved the characterization of key VCDS performance factors as a function of recycle loop solids concentration, distillation unit temperature and fluids pump speed. The objective of this effort was to expand the VCDS data base to enable defining optimum performance characteristics for flight hardware development.

  15. Sensationalistic journalism and tales of snakebite: are rattlesnakes rapidly evolving more toxic venom?

    PubMed

    Hayes, William K; Mackessy, Stephen P

    2010-03-01

    Recent reports in the lay press have suggested that bites by rattlesnakes in the last several years have been more severe than those in the past. The explanation, often citing physicians, is that rattlesnakes are evolving more toxic venom, perhaps in response to anthropogenic causes. We suggest that other explanations are more parsimonious, including factors dependent on the snake and factors associated with the bite victim's response to envenomation. Although bites could become more severe from an increased proportion of bites from larger or more provoked snakes (ie, more venom injected), the venom itself evolves much too slowly to explain the severe symptoms occasionally seen. Increased snakebite severity could also result from a number of demographic changes in the victim profile, including age and body size, behavior toward the snake (provocation), anatomical site of bite, clothing, and general health including asthma prevalence and sensitivity to foreign antigens. Clinical management of bites also changes perpetually, rendering comparisons of snakebite severity over time tenuous. Clearly, careful study taking into consideration many factors will be essential to document temporal changes in snakebite severity or venom toxicity. Presently, no published evidence for these changes exists. The sensationalistic coverage of these atypical bites and accompanying speculation is highly misleading and can produce many detrimental results, such as inappropriate fear of the outdoors and snakes, and distraction from proven snakebite management needs, including a consistent supply of antivenom, adequate health care, and training. We urge healthcare providers to avoid propagating misinformation about snakes and snakebites. Copyright (c) 2010 Wilderness Medical Society. Published by Elsevier Inc. All rights reserved.

  16. A reliable, low-cost picture archiving and communications system for small and medium veterinary practices built using open-source technology.

    PubMed

    Iotti, Bryan; Valazza, Alberto

    2014-10-01

    Picture Archiving and Communications Systems (PACS) are among the most needed systems in a modern hospital. As an integral part of the Digital Imaging and Communications in Medicine (DICOM) standard, they are charged with the responsibility for secure storage and accessibility of diagnostic imaging data. These machines need to offer high performance, stability, and security while proving reliable and ergonomic in the day-to-day and long-term storage and retrieval of the data they safeguard. This paper reports the experience of the authors in developing and installing a compact and low-cost solution based on open-source technologies at the Veterinary Teaching Hospital of the University of Torino, Italy, during the summer of 2012. The PACS server was built on low-cost x86-based hardware and uses an open-source operating system derived from Oracle OpenSolaris (Oracle Corporation, Redwood City, CA, USA) to host the DCM4CHEE PACS DICOM server (DCM4CHEE, http://www.dcm4che.org ). This solution features very high data security and an ergonomic interface that provides easy access to a large amount of imaging data. The system has been in active use for almost 2 years and has proven to be a scalable, cost-effective solution for practices ranging from small to very large: different hardware combinations allow scaling to different deployments, while paravirtualization allows increased security and easy migrations and upgrades.

  17. Instruction-Level Characterization of Scientific Computing Applications Using Hardware Performance Counters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Y.; Cameron, K.W.

    1998-11-24

    Workload characterization has proven to be an essential tool for architecture design and performance evaluation in both scientific and commercial computing. Traditional workload characterization metrics include FLOPS rates, cache miss ratios, and CPI (cycles per instruction; or its inverse IPC, instructions per cycle). Given the complexity of sophisticated modern superscalar microprocessors, these traditional techniques are not powerful enough to pinpoint the performance bottleneck of an application on a specific microprocessor. They are also incapable of immediately demonstrating the potential performance benefit of any architectural or functional improvement in a new processor design. To solve these problems, many people rely on simulators, which have substantial constraints, especially on large-scale scientific computing applications. This paper presents a new technique for characterizing applications at the instruction level using hardware performance counters. It has the advantage of collecting instruction-level characteristics in a few runs with virtually no overhead or slowdown. A variety of instruction counts can be utilized to calculate average abstract workload parameters corresponding to microprocessor pipelines or functional units. Based on the microprocessor's architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. In particular, the analysis results can provide insight into why only a small percentage of processor peak performance can be achieved even for many very cache-friendly codes. Meanwhile, the bottleneck estimation can suggest viable architectural/functional improvements for certain workloads. Eventually, these abstract parameters can lead to the creation of an analytical microprocessor pipeline model and memory hierarchy model.
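
    As a toy illustration of the kind of derived metrics the paper builds on, the snippet below computes CPI, IPC, and a cache miss ratio from made-up counter values. The event names and numbers are illustrative, not tied to any particular processor's counter interface:

    ```python
    # Hypothetical counter totals for one run of an application.
    counters = {
        "cycles":       8.0e11,
        "instructions": 5.0e11,
        "l1_misses":    6.0e9,
        "l1_accesses":  2.0e11,
    }

    cpi = counters["cycles"] / counters["instructions"]
    ipc = 1.0 / cpi
    l1_miss_ratio = counters["l1_misses"] / counters["l1_accesses"]

    # Comparing achieved IPC against the machine's issue width hints at
    # how far the code sits from peak, one of the questions the abstract
    # raises for cache-friendly codes.
    issue_width = 4
    print(f"CPI {cpi:.2f}, IPC {ipc:.2f} of {issue_width} possible")
    print(f"L1 miss ratio {l1_miss_ratio:.2%}")
    ```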

  18. Design and Performance of the Multiplexed SQUID/TES Array at Ninety Gigahertz

    NASA Astrophysics Data System (ADS)

    Stanchfield, Sara; Ade, Peter; Aguirre, James; Brevik, Justus A.; Cho, Hsiao-Mei; Datta, Rahul; Devlin, Mark; Dicker, Simon R.; Dober, Bradley; Duff, Shannon M.; Egan, Dennis; Ford, Pam; Hilton, Gene; Hubmayr, Johannes; Irwin, Kent; Knowles, Kenda; Marganian, Paul; Mason, Brian Scott; Mates, John A. B.; McMahon, Jeff; Mello, Melinda; Mroczkowski, Tony; Romero, Charles; Sievers, Jonathon; Tucker, Carole; Vale, Leila R.; Vissers, Michael; White, Steven; Whitehead, Mark; Ullom, Joel; Young, Alexander

    2018-01-01

    We present the array performance and early science images from MUSTANG-2, a 90 GHz feedhorn-coupled, microwave-SQUID-multiplexed TES bolometer array operating on the Robert C. Byrd Green Bank Telescope (GBT). MUSTANG-2 was installed on the GBT on December 2, 2016 and immediately began commissioning efforts, followed by science observations, which are expected to conclude in June 2017. The feedhorn and waveguide-probe-coupled detector technology is mature, having been used on instruments including the South Pole Telescope, the Atacama Cosmology Telescope, and the Atacama B-mode Search telescope. The microwave SQUID readout system developed for MUSTANG-2 currently reads out 66 detectors with a single coaxial cable and will eventually allow thousands of detectors to be multiplexed. This microwave SQUID multiplexer combines the proven abilities of millimeter-wave TES detectors with the multiplexing capabilities of KIDs, with no degradation in the noise performance of the detectors. Each multiplexing device is read out using warm electronics consisting of a commercially available ROACH board, a DAC/ADC card, and an intermediate-frequency mixer circuit. The hardware was originally developed by the UC Berkeley Collaboration for Astronomy Signal Processing and Electronic Research (CASPER) group, whose primary goal is to develop scalable FPGA-based hardware with the flexibility to be used in a wide range of radio signal processing applications. MUSTANG-2 is the first on-sky instrument to use microwave SQUID multiplexing and is available as a shared-risk/PI instrument on the GBT. In MUSTANG-2's first season, seven separate proposals were awarded a total of 230 hours of telescope time.

  19. Software systems for operation, control, and monitoring of the EBEX instrument

    NASA Astrophysics Data System (ADS)

    Milligan, Michael; Ade, Peter; Aubin, François; Baccigalupi, Carlo; Bao, Chaoyun; Borrill, Julian; Cantalupo, Christopher; Chapman, Daniel; Didier, Joy; Dobbs, Matt; Grainger, Will; Hanany, Shaul; Hillbrand, Seth; Hubmayr, Johannes; Hyland, Peter; Jaffe, Andrew; Johnson, Bradley; Kisner, Theodore; Klein, Jeff; Korotkov, Andrei; Leach, Sam; Lee, Adrian; Levinson, Lorne; Limon, Michele; MacDermid, Kevin; Matsumura, Tomotake; Miller, Amber; Pascale, Enzo; Polsgrove, Daniel; Ponthieu, Nicolas; Raach, Kate; Reichborn-Kjennerud, Britt; Sagiv, Ilan; Tran, Huan; Tucker, Gregory S.; Vinokurov, Yury; Yadav, Amit; Zaldarriaga, Matias; Zilic, Kyle

    2010-07-01

    We present the hardware and software systems implementing autonomous operation, distributed real-time monitoring, and control for the EBEX instrument. EBEX is a NASA-funded balloon-borne microwave polarimeter designed for a 14 day Antarctic flight that circumnavigates the pole. To meet its science goals the EBEX instrument autonomously executes several tasks in parallel: it collects attitude data and maintains pointing control in order to adhere to an observing schedule; tunes and operates up to 1920 TES bolometers and 120 SQUID amplifiers controlled by as many as 30 embedded computers; coordinates and dispatches jobs across an onboard computer network to manage this detector readout system; logs over 3 GiB/hour of science and housekeeping data to an onboard disk storage array; responds to a variety of commands and exogenous events; and downlinks multiple heterogeneous data streams representing a selected subset of the total logged data. Most of the systems implementing these functions have been tested during a recent engineering flight of the payload, and have proven to meet the target requirements. The EBEX ground segment couples uplink and downlink hardware to a client-server software stack, enabling real-time monitoring and command responsibility to be distributed across the public internet or other standard computer networks. Using the emerging dirfile standard as a uniform intermediate data format, a variety of front end programs provide access to different components and views of the downlinked data products. This distributed architecture was demonstrated operating across multiple widely dispersed sites prior to and during the EBEX engineering flight.

  20. Light weight portable operator control unit using an Android-enabled mobile phone

    NASA Astrophysics Data System (ADS)

    Fung, Nicholas

    2011-05-01

    There have been large gains in the field of robotics, both in hardware sophistication and technical capabilities. However, as more capable robots have been developed and introduced to battlefield environments, the problem of interfacing with human controllers has proven to be challenging. Particularly in the field of military applications, controller requirements can be stringent, ranging from size and power consumption to durability and cost. Traditional operator control units (OCUs) tend to resemble laptop personal computers (PCs), as these devices are mobile and have ample computing power. However, laptop PCs are bulky and have greater power requirements. To approach this problem, a lightweight, inexpensive controller was created based on a mobile phone running the Android operating system. It was designed to control an iRobot Packbot through the Army Research Laboratory (ARL) in-house Agile Computing Infrastructure (ACI). The hardware capabilities of the mobile phone, such as Wi-Fi communications and a touch screen interface, along with the flexibility of the Android operating system, made it a compelling platform. The Android-based OCU offers a more portable package and can be easily carried by a soldier along with normal gear. In addition, one-handed operation of the Android OCU leaves the soldier an unoccupied hand for greater flexibility. To validate the Android OCU as a capable controller, experimental data was collected evaluating use of the controller and a traditional, tablet-PC-based OCU. Initial analysis suggests that the Android OCU performed positively in qualitative data collected from participants.

  1. A comprehensive data acquisition and management system for an ecosystem-scale peatland warming and elevated CO2 experiment

    NASA Astrophysics Data System (ADS)

    Krassovski, M. B.; Riggs, J. S.; Hook, L. A.; Nettles, W. R.; Hanson, P. J.; Boden, T. A.

    2015-07-01

    Ecosystem-scale manipulation experiments represent large science investments that require well-designed data acquisition and management systems to provide reliable, accurate information to project participants and third-party users. The SPRUCE Project (Spruce and Peatland Responses Under Climatic and Environmental Change, http://mnspruce.ornl.gov) is such an experiment funded by the Department of Energy's (DOE) Office of Science, Terrestrial Ecosystem Science (TES) Program. The SPRUCE experimental mission is to assess ecosystem-level biological responses of vulnerable, high-carbon terrestrial ecosystems to a range of climate warming manipulations and an elevated CO2 atmosphere. SPRUCE provides a platform for testing mechanisms controlling the vulnerability of organisms, biogeochemical processes, and ecosystems to climatic change (e.g., thresholds for organism decline or mortality, limitations to regeneration, biogeochemical limitations to productivity, the cycling and release of CO2 and CH4 to the atmosphere). The SPRUCE experiment will generate a wide range of continuous and discrete measurements. To successfully manage SPRUCE data collection, achieve SPRUCE science objectives, and support broader climate change research, the research staff has designed a flexible data system using proven network technologies and software components. The primary SPRUCE data system components are: 1. Data acquisition and control system - set of hardware and software to retrieve biological and engineering data from sensors, collect sensor status information, and distribute feedback to control components. 2. Data collection system - set of hardware and software to deliver data to a central depository for storage and further processing. 3. Data management plan - set of plans, policies, and practices to control consistency, protect data integrity, and deliver data. This publication presents our approach to meeting the challenges of designing and constructing an efficient data system for managing high volume sources of in-situ observations in a remote, harsh environmental location. The approach covers data flow starting from the sensors and ending at the archival/distribution points, discusses types of hardware and software used, examines design considerations that were used to choose them, and describes the data management practices chosen to control and enhance the value of the data.

  2. A comprehensive data acquisition and management system for an ecosystem-scale peatland warming and elevated CO2 experiment

    NASA Astrophysics Data System (ADS)

    Krassovski, M. B.; Riggs, J. S.; Hook, L. A.; Nettles, W. R.; Hanson, P. J.; Boden, T. A.

    2015-11-01

    Ecosystem-scale manipulation experiments represent large science investments that require well-designed data acquisition and management systems to provide reliable, accurate information to project participants and third-party users. The SPRUCE project (Spruce and Peatland Responses Under Climatic and Environmental Change, http://mnspruce.ornl.gov) is such an experiment funded by the Department of Energy's (DOE) Office of Science, Terrestrial Ecosystem Science (TES) Program. The SPRUCE experimental mission is to assess ecosystem-level biological responses of vulnerable, high-carbon terrestrial ecosystems to a range of climate warming manipulations and an elevated CO2 atmosphere. SPRUCE provides a platform for testing mechanisms controlling the vulnerability of organisms, biogeochemical processes, and ecosystems to climatic change (e.g., thresholds for organism decline or mortality, limitations to regeneration, biogeochemical limitations to productivity, and the cycling and release of CO2 and CH4 to the atmosphere). The SPRUCE experiment will generate a wide range of continuous and discrete measurements. To successfully manage SPRUCE data collection, achieve SPRUCE science objectives, and support broader climate change research, the research staff has designed a flexible data system using proven network technologies and software components. The primary SPRUCE data system components are the following: 1. data acquisition and control system - set of hardware and software to retrieve biological and engineering data from sensors, collect sensor status information, and distribute feedback to control components; 2. data collection system - set of hardware and software to deliver data to a central depository for storage and further processing; 3. data management plan - set of plans, policies, and practices to control consistency, protect data integrity, and deliver data. This publication presents our approach to meeting the challenges of designing and constructing an efficient data system for managing high volume sources of in situ observations in a remote, harsh environmental location. The approach covers data flow starting from the sensors and ending at the archival/distribution points, discusses types of hardware and software used, examines design considerations that were used to choose them, and describes the data management practices chosen to control and enhance the value of the data.

  3. Inexpensive, Low Power, Open-Source Data Logging in the Field

    NASA Astrophysics Data System (ADS)

    Sandell, C. T.; Wickert, A. D.

    2016-12-01

    Collecting a robust data set of environmental conditions with commercial equipment is often cost prohibitive. I present the ALog, a general-purpose, inexpensive, low-power, open-source data logger that has proven its durability on long-term deployments in the harsh conditions of high-altitude glaciers and humid river deltas. The ALog was developed to fill the need for capable, rugged, easy-to-use, inexpensive, open-source hardware targeted at long-term remote deployment in nearly any environment. Building on the popular Arduino platform, the hardware features a high-precision clock, a full-size SD card slot for high-volume data storage, screw terminals, six analog inputs, two digital inputs, one digital interrupt, 3.3 V and 5 V power outputs, and SPI and I2C communication capability. The design is focused on extremely low power consumption, allowing the ALog to be deployed for years on a single set of common alkaline batteries. The power efficiency of the ALog eliminates the difficulties associated with field power collection, including additional hardware and installation costs, dependence on weather conditions, possible equipment failure, and the transport of bulky or heavy equipment to a remote site. Battery power expands the range of suitable data collection sites (including those too shaded for photovoltaics) and allows for low-profile installation options (including underground). The ALog has gone through continuous development, with over four years of successful data collection in hydrologic field research. Over this time, software support has been made available for a wide range of sensors, such as ultrasonic rangefinders (for water level, snow accumulation, and glacial melt), temperature sensors (air and groundwater), humidity sensors, pyranometers, inclinometers, rain gauges, soil moisture and water potential sensors, resistance-based tools to measure frost heave, and cameras that trigger on events. The software developed for use with the ALog allows simple integration of established commercial sensors, including example implementation code, so users with limited programming knowledge can get up and running with ease. All development files, including design schematics, circuit board layouts, and source code, are open-source to further eliminate barriers to use and to allow community development contributions.
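
    Downstream analysis of logger output is usually a few lines of scripting. The sketch below assumes a hypothetical CSV layout (ISO timestamp, air temperature, ultrasonic stage reading); the real ALog column order depends on which sensors are compiled into the firmware, so treat the schema, file name, and field names as placeholders:

    ```python
    import csv
    from datetime import datetime

    def load_log(path):
        """Parse assumed three-column records: ISO timestamp,
        air temperature (deg C), ultrasonic stage reading (mm)."""
        rows = []
        with open(path, newline="") as f:
            for t, temp_c, stage_mm in csv.reader(f):
                rows.append((datetime.fromisoformat(t),
                             float(temp_c), float(stage_mm)))
        return rows

    # e.g. a mean water level from ultrasonic rangefinder data
    records = load_log("alog_station1.csv")   # placeholder file name
    levels = [stage for _, _, stage in records]
    print(sum(levels) / len(levels))
    ```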

  4. Quantum key distribution with an efficient countermeasure against correlated intensity fluctuations in optical pulses

    NASA Astrophysics Data System (ADS)

    Yoshino, Ken-ichiro; Fujiwara, Mikio; Nakata, Kensuke; Sumiya, Tatsuya; Sasaki, Toshihiko; Takeoka, Masahiro; Sasaki, Masahide; Tajima, Akio; Koashi, Masato; Tomita, Akihisa

    2018-03-01

    Quantum key distribution (QKD) allows two distant parties to share secret keys with proven security even in the presence of an eavesdropper with unbounded computational power. Recently, GHz-clock decoy QKD systems have been realized by employing ultrafast optical communication devices. However, the security loopholes of high-speed systems have not yet been fully explored. Here we point out a security loophole at the transmitter of GHz-clock QKD, which is a common problem in high-speed QKD systems using practical bandwidth-limited devices. We experimentally observe inter-pulse intensity correlation and modulation-pattern-dependent intensity deviation in a practical high-speed QKD system. Such correlation violates the assumption of most security theories. We also provide a countermeasure which does not require significant changes to hardware and can generate keys secure over 100 km of fiber transmission. Our countermeasure is simple, effective, and applicable to a wide range of high-speed QKD systems, and thus paves the way to ultrafast and security-certified commercial QKD systems.
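
    The inter-pulse correlation the authors observe can be quantified as a lag-1 correlation coefficient of successive pulse intensities. The sketch below fabricates a pulse train with an artificial 20% memory effect purely to show the computation; it is not the paper's data or model:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy pulse-intensity sequence: each pulse leaks 20% of the previous
    # pulse's deviation, mimicking a bandwidth-limited modulator.
    n = 10_000
    noise = rng.normal(0.0, 0.02, n)
    I = np.empty(n)
    I[0] = 1.0
    for k in range(1, n):
        I[k] = 1.0 + 0.2 * (I[k - 1] - 1.0) + noise[k]

    # Lag-1 correlation of successive pulse intensities; a nonzero value
    # signals exactly the kind of inter-pulse correlation that violates
    # the i.i.d. assumption of standard decoy-state security proofs.
    r = np.corrcoef(I[:-1], I[1:])[0, 1]
    print(round(r, 3))   # close to 0.2 for this toy model
    ```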

  5. Predictive Modeling of Cardiac Ischemia

    NASA Technical Reports Server (NTRS)

    Anderson, Gary T.

    1996-01-01

    The goal of the Contextual Alarms Management System (CALMS) project is to develop sophisticated models to predict the onset of clinical cardiac ischemia before it occurs. The system will continuously monitor cardiac patients and set off an alarm when they appear about to suffer an ischemic episode. The models take as inputs information from patient history and combine it with continuously updated information extracted from blood pressure, oxygen saturation and ECG lines. Expert system, statistical, neural network and rough set methodologies are then used to forecast the onset of clinical ischemia before it transpires, thus allowing early intervention aimed at preventing morbid complications from occurring. The models will differ from previous attempts by including combinations of continuous and discrete inputs. A commercial medical instrumentation and software company has invested funds in the project with a goal of commercialization of the technology. The end product will be a system that analyzes physiologic parameters and produces an alarm when myocardial ischemia is present. If proven feasible, a CALMS-based system will be added to existing heart monitoring hardware.

  6. The new VLT-DSM M2 unit: construction and electromechanical testing

    NASA Astrophysics Data System (ADS)

    Gallieni, Daniele; Biasi, Roberto

    2013-12-01

    We present the design, construction and validation of the new M2 unit of the VLT Deformable Secondary Mirror. In the framework of the Adaptive Optics Facility program, ADS and Microgate designed a new secondary unit which replaces the current Dornier one. The M2 is composed of the mechanical structure, a new hexapod positioner and the Deformable Secondary Mirror unit. The DSM is based on the well-proven contactless, voice-coil motor technology that has already been successfully implemented in the MMT, LBT and Magellan adaptive secondaries, and is considered a promising technical choice for the E-ELT M4 and the GMT ASM. The VLT adaptive unit has been fully integrated and, before starting the optical calibration, has completed its electromechanical characterization, focused on dynamic performance. With respect to the previous units we introduced several improvements, both in hardware and in control architecture, that achieved a significant enhancement of the system dynamics and a reduction in power consumption.

  7. Autonomous docking system for space structures and satellites

    NASA Astrophysics Data System (ADS)

    Prasad, Guru; Tajudeen, Eddie; Spenser, James

    2005-05-01

    Aximetric proposes a Distributed Command and Control (C2) architecture for autonomous on-orbit assembly in space with our unique vision- and sensor-driven docking mechanism. Aximetric is currently working on IP-based distributed control strategies, a docking/mating plate, alignment and latching mechanisms, umbilical structure/cord designs, and hardware/software in a closed-loop architecture for a smart autonomous demonstration utilizing proven developments in sensor and docking technology. These technologies can be effectively applied to many transferring/conveying and on-orbit servicing applications, including the capturing and coupling of space-bound vehicles and components. The autonomous system will be a "smart" system incorporating a vision system used for identifying, tracking, locating and mating the transferring device to the receiving device. A robustly designed coupler for the transfer of fuel will be integrated. Advanced sealing technology will be utilized for isolation and purging of cavities resulting from the mating process and/or from the incorporation of other electrical and data acquisition devices used as part of the overall smart system.

  8. Experimental results in autonomous landing approaches by dynamic machine vision

    NASA Astrophysics Data System (ADS)

    Dickmanns, Ernst D.; Werner, Stefan; Kraus, S.; Schell, R.

    1994-07-01

    The 4-D approach to dynamic machine vision, exploiting full spatio-temporal models of the process to be controlled, has been applied to onboard autonomous landing approaches of aircraft. Aside from image sequence processing, for which it was developed initially, it is also used for data fusion from a range of sensors. By prediction error feedback, an internal representation of the aircraft state relative to the runway in 3-D space and time is servo-maintained in the interpretation process, from which the required control actions are derived. The validity and efficiency of the approach have been proven both in hardware-in-the-loop simulations and in flight experiments with a twin-turboprop aircraft Do128 under perturbations from cross winds and wind gusts. The software package has been ported to C and onto a new transputer image processing platform; the system has been expanded for bifocal vision with two cameras of different focal length mounted fixed relative to each other on a two-axis platform for viewing direction control.
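
    The prediction error feedback loop described above is, at its core, a recursive state observer: predict the state with a dynamic model, compare the predicted measurement with the actual one, and correct the estimate by a gain times the prediction error. A minimal linear sketch (the two-state model and hand-picked gain are illustrative, not the aircraft model used in the experiments):

    ```python
    import numpy as np

    # Toy 1-D example: state = [position, velocity], measurement = position.
    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])       # discrete-time model, dt = 0.1 s
    C = np.array([[1.0, 0.0]])       # only position is observed
    L = np.array([[0.5], [1.0]])     # observer gain (hand-picked here)

    x_true = np.array([[0.0], [1.0]])
    x_hat = np.zeros((2, 1))

    for _ in range(50):
        x_true = A @ x_true
        z = C @ x_true                         # measurement (noise-free toy)
        x_pred = A @ x_hat                     # predict with the model
        x_hat = x_pred + L @ (z - C @ x_pred)  # correct by prediction error

    print(np.round((x_true - x_hat).ravel(), 4))  # estimate has converged
    ```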

  9. Spacecraft design project multipurpose satellite bus MPS

    NASA Technical Reports Server (NTRS)

    Kellman, Lyle; Riley, John; Szostak, Michael; Watkins, Joseph; Willhelm, Joseph; Yale, Gary

    1990-01-01

    The thrust of this project was not to design a single spacecraft, but a multimission bus capable of supporting several current payloads and unnamed, unspecified future payloads. Spiraling spacecraft costs and shrinking defense budgets necessitated a fresh look at the feasibility of a multimission spacecraft bus. The design team chose two very diverse payloads, and with them two vastly different orbits, to show that multimission spacecraft buses are indeed an area deserving more research and effort. Tradeoffs, of course, were made throughout the design, but optimization of subsystem components limited weight and volume penalties, performance degradation, and reliability concerns. Simplicity was chosen over more complex, sophisticated and usually more efficient designs. The cost of individual subsystem components was not a primary concern in the design phase, but every effort was made to choose flight-tested and flight-proven hardware. Significant cost savings could be realized if a standard spacecraft bus were designed and purchased in quantity.

  10. The Technology Information Environment with Industry™ system description

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Detry, R.; Machin, G.

    The Technology Information Environment with Industry (TIE-In™) provides users with controlled access to distributed laboratory resources that are packaged in intelligent user interfaces. These interfaces help users access resources without requiring technical or computer expertise. TIE-In utilizes existing, proven technologies such as the Kerberos authentication system, X-Windows, and UNIX sockets. A Front End System (FES) authenticates users and allows them to register for resources and subsequently access them. The FES also stores status and accounting information, and provides an automated method for resource owners to recover costs from users. The resources available through TIE-In are typically laboratory-developed applications used to help design, analyze, and test components in the nation's nuclear stockpile. Many of these applications can also be used by US companies for non-weapons-related work. TIE-In allows these industry partners to obtain laboratory-developed technical solutions without requiring them to duplicate the technical resources (people, hardware, and software) at Sandia.

  11. SmallSat Innovations for Planetary Science

    NASA Astrophysics Data System (ADS)

    Weinberg, Jonathan; Petroy, Shelley; Roark, Shane; Schindhelm, Eric

    2017-10-01

    As NASA continues to look for ways to fly smaller planetary missions such as SIMPLEX, MoO, and Venus Bridge, it is important that spacecraft and instrument capabilities keep pace to allow these missions to move forward. As spacecraft become smaller, it is necessary to balance size with capability, reliability and payload capacity. Ball Aerospace offers extensive SmallSat capabilities matured over the past decade, utilizing our broad experience developing mission architectures, assembling spacecraft and instruments, and testing advanced enabling technologies. Ball SmallSats inherit their software capabilities from the flight-proven Ball Configurable Platform (BCP) line of spacecraft, and may be tailored to meet the unique requirements of planetary science missions. We present here recent efforts in pioneering both instrument miniaturization and SmallSat/sensorcraft development through mission design and implementation. Ball has flown several missions with small but capable spacecraft. We have also demonstrated a variety of enhanced spacecraft/instrument capabilities in the laboratory and in flight to advance autonomy in spaceflight hardware that can enable some small planetary missions.

  12. HARMONI instrument control electronics

    NASA Astrophysics Data System (ADS)

    Gigante, José V.; Rodríguez Ramos, Luis F.; Zins, Gerard; Schnetler, Hermine; Pecontal, Arlette; Herreros, José Miguel; Clarke, Fraser; Bryson, Ian; Thatte, Niranjan

    2014-07-01

    HARMONI is an integral field spectrograph working at visible and near-infrared wavelengths over a range of spatial scales from ground-layer corrected to fully diffraction-limited. The instrument has been chosen to be part of the first-light complement at the European Extremely Large Telescope (E-ELT). This paper describes the instrument control electronics to be developed at IAC. The large size of the HARMONI instrument, its cryogenic operation, and the fact that it must operate with enhanced reliability make the control electronics design challenging. The present paper describes a design proposal based on the current instrument requirements, intended to be fully compliant with the ESO E-ELT standards as well as with the European EMC and safety standards. The modularity of the design and the use of COTS standard hardware will benefit the project in several respects, such as reduced costs, a shorter schedule through the use of commercially available components, and improved quality through the use of well-proven solutions.

  13. The development and testing of a regenerable CO2 and humidity control system for Shuttle

    NASA Technical Reports Server (NTRS)

    Boehm, A. M.

    1977-01-01

    A regenerable CO2 and humidity control system is presently being developed for potential use on Shuttle as an alternate to the baseline lithium hydroxide (LiOH) system. The system utilizes a sorbent material (designated 'HS-C') to adsorb CO2 and water vapor from the cabin atmosphere and desorb the CO2 and water vapor overboard when exposed to a space vacuum. Continuous operation is achieved by utilizing two beds which are alternately cycled between adsorption and desorption. This paper presents the significant hardware development and test accomplishments of the past year. A half-size breadboard system utilizing a flight configuration canister was successfully performance tested in simulated Shuttle missions. A vacuum desorption test provided considerable insight into the desorption phenomena and allowed a significant reduction of the Shuttle vacuum duct size. The fabrication and testing of a flight prototype canister and flight prototype vacuum valves have proven the feasibility of these full-size, flight-weight components.

  14. Low-cost telepresence for collaborative virtual environments.

    PubMed

    Rhee, Seon-Min; Ziegler, Remo; Park, Jiyoung; Naef, Martin; Gross, Markus; Kim, Myoung-Hee

    2007-01-01

    We present a novel low-cost method for visual communication and telepresence in a CAVE-like environment, relying on 2D stereo-based video avatars. The system combines a selection of proven efficient algorithms and approximations in a unique way, resulting in a convincing stereoscopic real-time representation of a remote user acquired in a spatially immersive display. The system was designed to extend existing projection systems with acquisition capabilities requiring minimal hardware modifications and cost. It uses infrared-based image segmentation to enable concurrent acquisition and projection in an immersive environment without a static background. The system consists of two color cameras and two additional b/w cameras used for segmentation in the near-IR spectrum. There is no need for special optics, as the mask and color image are merged using image warping based on a depth estimate. The resulting stereo image stream is compressed, streamed across a network, and displayed as a frame-sequential stereo texture on a billboard in the remote virtual environment.
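
    The segmentation-and-merge step can be illustrated with plain NumPy: threshold the near-IR frame into a foreground mask, then keep only the masked pixels of the color frame. Camera registration and the depth-based warp are omitted, and the threshold value is an arbitrary placeholder:

    ```python
    import numpy as np

    def segment_and_merge(ir_frame, color_frame, threshold=120):
        """Build a foreground mask from a near-IR intensity image and
        apply it to the aligned color image. Assumes the frames are
        already registered; the real system warps the mask onto the
        color view using an estimated depth."""
        mask = ir_frame > threshold        # bright pixels = IR-lit user
        out = np.zeros_like(color_frame)
        out[mask] = color_frame[mask]      # keep the user, drop background
        return out, mask

    # Toy usage with random frames standing in for camera captures
    ir = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    rgb = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    foreground, mask = segment_and_merge(ir, rgb)
    ```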

  15. A New Sputnik Surprise?

    NASA Technical Reports Server (NTRS)

    Lowman, Paul D., Jr.; Smith, David E. (Technical Monitor)

    2001-01-01

    This paper suggests that a new "Sputnik surprise" in the form of a joint Chinese-Russian lunar base program may emerge in this decade. The Moon as a whole has been shown to be territory of strategic value, with discovery of large amounts of hydrogen (probably water ice) at the lunar poles and helium 3 everywhere in the soil, in addition to the Moon's scientific value as an object of study and as a platform for astronomy. There is thus good reason for a return to the Moon, robotically or manned. Relations between China and Russia have thawed since the mid-1990s, and the two countries have a formal space cooperation pact. It is argued here that a manned lunar program would be feasible within 5 years, using modern technology and proven spacecraft and launch vehicles. The combination of Russian lunar hardware with Chinese space technology would permit the two countries together to take the lead in solar system exploration in the 21st century.

  16. New cardiac cameras: single-photon emission CT and PET.

    PubMed

    Slomka, Piotr J; Berman, Daniel S; Germano, Guido

    2014-07-01

    Nuclear cardiology instrumentation has evolved significantly in recent years. Concerns about radiation dose and long acquisition times have propelled the development of dedicated high-efficiency cardiac SPECT scanners. Novel collimator designs, such as multipinhole or locally focusing collimators arranged in geometries optimized for cardiac imaging, have been implemented to enhance photon-detection sensitivity. Some of these new SPECT scanners use solid-state photon detectors instead of photomultipliers to improve image quality and to reduce the scanner footprint. These new SPECT devices allow a dramatic (up to 7-fold) reduction in acquisition time or a similar reduction in radiation dose. In addition, new hardware for photon attenuation correction, allowing ultralow radiation doses, has been offered by some vendors. To mitigate photon attenuation artifacts for the new SPECT scanners not equipped with attenuation correction hardware, 2-position (upright-supine or prone-supine) imaging has been proposed. PET hardware developments have been primarily driven by the requirements of oncologic imaging, but cardiac imaging can benefit from improved PET image quality and the improved sensitivity of 3D systems. Time-of-flight reconstruction combined with resolution recovery techniques is now implemented by all major PET vendors. These new methods improve image contrast and resolution and reduce image noise. High-sensitivity 3D PET without interplane septa allows a reduced radiation dose for cardiac perfusion imaging. A simultaneous PET/MR hybrid system has been developed. Solid-state PET detectors with avalanche photodiodes or digital silicon photomultipliers have been introduced; they offer improved imaging characteristics and reduced sensitivity to electromagnetic MR fields. The higher maximum count rate of the new PET detectors allows routine first-pass Rb-82 imaging, with 3D PET acquisition enabling clinical utilization of dynamic imaging with myocardial flow measurements for this tracer. The availability of a high-end CT component in most PET/CT configurations enables hybrid multimodality cardiac imaging protocols with calcium scoring, CT angiography, or both. Copyright © 2014. Published by Elsevier Inc.

  17. Evolution of the Hubble Space Telescope Safing Systems

    NASA Technical Reports Server (NTRS)

    Pepe, Joyce; Myslinski, Michael

    2006-01-01

    The Hubble Space Telescope (HST) was launched on April 24, 1990, with an expected lifespan of 15 years. Central to the spacecraft design was the concept of a series of on-orbit shuttle servicing missions permitting astronauts to replace failed equipment, update the scientific instruments, and keep the HST at the forefront of astronomical discoveries. One key to the success of the Hubble mission has been the robust Safing systems designed to monitor the performance of the observatory and to react to keep the spacecraft safe in the event of an equipment anomaly. The spacecraft Safing System consists of a range of software tests in the primary flight computer that evaluate the performance of mission-critical hardware, safe modes that are activated when the primary control mode is deemed inadequate for protecting the vehicle, and special actions that the computer can take to autonomously reconfigure critical hardware. The HST Safing System was structured to autonomously detect electrical power system, data management system, and pointing control system malfunctions and to configure the vehicle to ensure safe operation without ground intervention for up to 72 hours. There is also a dedicated safe mode computer that constantly monitors a keep-alive signal from the primary computer. If this signal stops, the safe mode computer shuts down the primary computer and takes over control of the vehicle, putting it into a safe, low-power configuration. The HST Safing System has continued to evolve as equipment has aged, as new hardware has been installed on the vehicle, and as the operation modes have matured during the mission. Along with the continual refinement of the limits used in the safing tests, several new tests have been added to the monitoring system, and new safe modes have been added to the flight software. This paper will focus on the evolution of the HST Safing System and Safing tests, and the importance of this evolution to prolonging the science operations of the telescope.
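
    The keep-alive arrangement described above is a classic heartbeat watchdog. A minimal sketch of the pattern follows; the timeout and trip action are illustrative, and the flight implementation of course lives in dedicated hardware and flight software rather than anything like this:

    ```python
    import time

    class HeartbeatWatchdog:
        """Trip a safing action if the monitored computer stops sending
        keep-alive pulses for longer than `timeout` seconds."""

        def __init__(self, timeout, on_trip):
            self.timeout = timeout
            self.on_trip = on_trip
            self.last_pulse = time.monotonic()

        def pulse(self):                # called by the primary computer
            self.last_pulse = time.monotonic()

        def check(self):                # polled by the safe-mode computer
            if time.monotonic() - self.last_pulse > self.timeout:
                self.on_trip()

    wd = HeartbeatWatchdog(timeout=2.0,
                           on_trip=lambda: print("entering safe mode"))
    wd.pulse()
    wd.check()   # quiet: the pulse is still fresh
    ```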

  18. Space shuttle low cost/risk avionics study

    NASA Technical Reports Server (NTRS)

    1971-01-01

    All work breakdown structure elements containing any avionics-related effort were examined to price life-cycle costs. The analytical, testing, and integration efforts are included for the basic onboard avionics and electrical power systems. The design and procurement of special test equipment and of maintenance and repair equipment are considered. Program management associated with these efforts is described. Flight test spares and the labor and materials associated with operating and maintaining the avionics systems throughout the horizontal flight test are examined. It was determined that cost savings can be achieved by using existing hardware; maximizing orbiter-booster commonality; specifying new equipment to MIL quality standards; basing redundancy on cost-effectiveness analysis; minimizing software complexity and reducing cross-strapping and computer-managed functions; utilizing compilers and floating-point computers; and evolving the design as dictated by the horizontal flight test schedules.

  19. The CECAM Electronic Structure Library: community-driven development of software libraries for electronic structure simulations

    NASA Astrophysics Data System (ADS)

    Oliveira, Micael

    The CECAM Electronic Structure Library (ESL) is a community-driven effort to segregate shared pieces of software into libraries that can be contributed and used by the community. Besides allowing developers to share the burden of developing and maintaining complex pieces of software, these libraries can also become targets for re-coding by software engineers as hardware evolves, ensuring that electronic structure codes remain at the forefront of HPC trends. In a series of workshops hosted at the CECAM HQ in Lausanne, the tools and infrastructure for the project were prepared, and the first contributions were included and made available online (http://esl.cecam.org). In this talk I will present the different aspects and aims of the ESL and how these can be useful for the electronic structure community.

  20. A concept to standardize raw biosignal transmission for brain-computer interfaces.

    PubMed

    Breitwieser, Christian; Neuper, Christa; Müller-Putz, Gernot R

    2011-01-01

    This concept introduces TiA, an attempt at a standardized interface for transmitting raw biosignals. TiA is able to deal with multirate and block-oriented data transmission. Data is distinguished by different signal types (e.g., EEG, EOG, NIRS, …), whereby those signals can be acquired at the same time from different acquisition devices. TiA is built as a client-server model. Multiple clients can connect to one server. Information is exchanged via a control connection and a separate data connection. Control commands and meta information are transmitted over the control connection. Raw biosignal data is delivered over the data connection in a unidirectional way. For this purpose a standardized handshaking protocol and raw data packet have been developed. Thus, an abstraction layer between hardware devices and data processing has evolved, facilitating standardization.
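
    The split between a control connection (bidirectional, commands and meta information) and a data connection (unidirectional, raw packets) can be sketched with standard sockets. The port numbers, handshake strings, and packet layout below are placeholders, not the actual TiA protocol:

    ```python
    import socket
    import struct
    import threading

    CTRL_PORT, DATA_PORT = 9000, 9001          # placeholder ports

    # Listening sockets are created first so the client cannot race them.
    ctrl_srv = socket.create_server(("127.0.0.1", CTRL_PORT))
    data_srv = socket.create_server(("127.0.0.1", DATA_PORT))

    def server():
        c, _ = ctrl_srv.accept()
        c.recv(64)                             # client's metadata request
        c.sendall(b"channels=2;rate=256")      # meta info, control link only
        d, _ = data_srv.accept()
        d.sendall(struct.pack("<2f", 1.5, -0.25))  # one raw-data packet
        c.close(); d.close()

    threading.Thread(target=server, daemon=True).start()

    ctrl = socket.create_connection(("127.0.0.1", CTRL_PORT))
    ctrl.sendall(b"get metadata")
    print(ctrl.recv(64))                       # b'channels=2;rate=256'
    data = socket.create_connection(("127.0.0.1", DATA_PORT))
    print(struct.unpack("<2f", data.recv(8)))  # (1.5, -0.25)
    ```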

  1. NASA IVHM Technology Experiment for X-vehicles (NITEX)

    NASA Technical Reports Server (NTRS)

    Hayden, Sandra; Bajwa, Anupa

    2001-01-01

    The purpose of the NASA IVHM Technology Experiment for X-vehicles (NITEX) is to advance the development of selected IVHM technologies in a flight environment and to demonstrate the potential for reusable launch vehicle ground processing savings. The technologies to be developed and demonstrated include system-level and detailed diagnostics for real-time fault detection and isolation, prognostics for fault prediction, automated maintenance planning based on diagnostic and prognostic results, and a microelectronics hardware platform. Complete flight IVHM consists of advanced sensors, distributed data acquisition, data processing that includes model-based diagnostics, prognostics and vehicle autonomy for control or suggested action, and advanced data storage. Complete ground IVHM consists of evolved control room architectures and advanced applications including automated maintenance planning and automated ground support equipment. This experiment will advance the development of a subset of complete IVHM.

  2. A synopsis of test results and knowledge gained from the Phase-0 CSI evolutionary model

    NASA Technical Reports Server (NTRS)

    Belvin, W. Keith; Elliott, Kenny B.; Horta, Lucas G.

    1993-01-01

    The Phase-0 CSI Evolutionary Model (CEM) is a testbed for the study of space platform global line-of-sight (LOS) pointing. Now that the tests have been completed, a summary of hardware and closed-loop test experiences is necessary to ensure timely dissemination of the knowledge gained. The testbed is described and modeling experiences are presented, followed by a summary of the research performed by various investigators. Some early lessons on implementing the closed-loop controllers are described, with particular emphasis on real-time computing requirements. A summary of closed-loop studies and a synopsis of test results are presented. Plans for evolving the CEM from phase 0 to phases 1 and 2 are also described. Subsequently, a summary of knowledge gained from the design and testing of the Phase-0 CEM is made.

  3. KSC-06pd0755

    NASA Image and Video Library

    2006-04-28

    VANDENBERG AIR FORCE BASE, CALIF. - CloudSat and CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations) thunder skyward after launch at approximately 6:02 a.m. EDT atop a Boeing Delta II rocket. The two satellites will eventually circle approximately 438 miles above Earth in a sun-synchronous polar orbit, which means they will always cross the equator at the same local time. Their technologies will enable scientists to study how clouds and aerosols form, evolve and interact. CloudSat is managed by NASA's Jet Propulsion Laboratory in Pasadena, Calif. JPL developed the radar instrument with hardware contributions from the Canadian Space Agency. CALIPSO is a collaboration between NASA and France's Centre National d'Etudes Spatiales (CNES). Langley Research Center, in Hampton, Va., is leading the CALIPSO mission and providing overall project management, systems engineering, and payload mission operations. Photo credit: Boeing/Thom Baur

  4. An evaluation of software tools for the design and development of cockpit displays

    NASA Technical Reports Server (NTRS)

    Ellis, Thomas D., Jr.

    1993-01-01

    The use of all-glass cockpits at the NASA Langley Research Center (LaRC) simulation facility has changed the means of design, development, and maintenance of instrument displays. The human-machine interface has evolved from a physical hardware device to a software-generated electronic display system. This has subsequently caused an increased workload at the facility. As computer processing power increases and the glass cockpit becomes predominant in facilities, software tools used in the design and development of cockpit displays are becoming both feasible and necessary for a more productive simulation environment. This paper defines LaRC requirements of a display software development tool and compares two available applications against these requirements. As a part of the software engineering process, these tools reduce development time, provide a common platform for display development, and produce exceptional real-time results.

  5. Best practices in passive remote sensing VNIR hyperspectral system hardware calibrations

    USGS Publications Warehouse

    Jablonski, Joseph; Durell, Christopher; Slonecker, Terry; Wong, Kwok; Simon, Blair; Eichelberger, Andrew; Osterberg, Jacob

    2016-01-01

    Hyperspectral imaging (HSI) is an exciting and rapidly expanding area of instruments and technology in passive remote sensing. Due to quickly changing applications, the instruments are evolving to suit new uses and there is a need for consistent definition, testing, characterization and calibration. This paper seeks to outline a broad prescription and recommendations for basic specification, testing and characterization that must be done on Visible Near Infra-Red grating-based sensors in order to provide calibrated absolute output and performance or at least relative performance that will suit the user’s task. The primary goal of this paper is to provide awareness of the issues with performance of this technology and make recommendations towards standards and protocols that could be used for further efforts in emerging procedures for national laboratory and standards groups.

  6. Fluorocarbon Contamination from the Drill on the Mars Science Laboratory: Potential Science Impact on Detecting Martian Organics by Sample Analysis at Mars (SAM)

    NASA Technical Reports Server (NTRS)

    Eigenbrode, J. L.; McAdam, A.; Franz, H.; Freissinet, C.; Bower, H.; Floyd, M.; Conrad, P.; Mahaffy, P.; Feldman, J.; Hurowitz, J.

    2013-01-01

    Polytetrafluoroethylene (PTFE, trade name Teflon, DuPont Co.) has been detected in rocks drilled during terrestrial testing of the Mars Science Laboratory (MSL) drilling hardware. The PTFE in sediments is a wear product of the seals used in the Drill Bit Assemblies (DBAs). It is expected that the drill assembly on the MSL flight model will also shed Teflon particles into drilled samples. One of the primary goals of the Sample Analysis at Mars (SAM) instrument suite on MSL is to test for the presence of martian organics in samples. The complications introduced by the potential presence of PTFE in drilled samples to the SAM evolved gas analysis (EGA, or pyrolysis-quadrupole mass spectrometry, pyr-QMS) and pyrolysis-gas chromatography mass spectrometry (pyr-GCMS) experiments were investigated.

  7. Back to the future: virtualization of the computing environment at the W. M. Keck Observatory

    NASA Astrophysics Data System (ADS)

    McCann, Kevin L.; Birch, Denny A.; Holt, Jennifer M.; Randolph, William B.; Ward, Josephine A.

    2014-07-01

    Over its two decades of science operations, the W. M. Keck Observatory computing environment has evolved into a distributed hybrid mix of hundreds of servers, desktops and laptops spanning multiple hardware platforms, O/S versions and vintages. Supporting the growing computing capabilities needed to meet the observatory's diverse, evolving computing demands within fixed budget constraints presents many challenges. This paper describes the significant role that virtualization is playing in addressing these challenges while improving the level and quality of service as well as realizing significant savings across many cost areas. Starting in December 2012, the observatory embarked on an ambitious plan to incrementally test and deploy a migration to virtualized platforms to address a broad range of specific opportunities. Implementation to date has been surprisingly glitch-free, progressing well and yielding tangible benefits much faster than many expected. We describe here the general approach, starting with the initial identification of some low-hanging fruit which also provided an opportunity to gain experience and build confidence among both the implementation team and the user community. We describe the range of challenges, opportunities and cost savings potential. Very significant among these was the substantial power savings, which resulted in strong broad support for moving forward. We go on to describe the phasing plan, the evolving scalable architecture, some of the specific technical choices, as well as some of the individual technical issues encountered along the way. The phased implementation spans Windows and Unix servers for scientific, engineering and business operations, and virtualized desktops for typical office users as well as the more demanding graphics-intensive CAD users. Other areas discussed in this paper include staff training, load balancing, redundancy, scalability, remote access, disaster readiness and recovery.

  8. A Review of Antimicrobial Peptides and Their Therapeutic Potential as Anti-Infective Drugs

    PubMed Central

    Gordon, Y. Jerold; Romanowski, Eric G.; McDermott, Alison M.

    2006-01-01

    Purpose. Antimicrobial peptides (AMPs) are an essential part of innate immunity that evolved in most living organisms over 2.6 billion years to combat microbial challenge. These small cationic peptides are multifunctional as effectors of innate immunity on skin and mucosal surfaces and have demonstrated direct antimicrobial activity against various bacteria, viruses, fungi, and parasites. This review summarizes their progress to date as commercial antimicrobial drugs for topical and systemic indications. Methods. Literature review. Results. Despite numerous clinical trials, no modified AMP has obtained Food & Drug Administration approval yet for any topical or systemic medical indications. Conclusions. While AMPs are recognized as essential components of natural host innate immunity against microbial challenge, their usefulness as a new class of antimicrobial drugs still remains to be proven. PMID:16020284

  9. A Process to Reduce DC Ingot Butt Curl and Swell

    NASA Astrophysics Data System (ADS)

    Yu, Ho

    1980-11-01

    A simple and effective process to reduce DC ingot butt curl and swell has been developed in the Ingot Casting Division of Alcoa Technical Center. In the process, carbon dioxide gas is dissolved under high pressure into the ingot cooling water upstream of the mold during the first several inches of the ingot cast. As the cooling water exits from the mold, the dissolved gas evolves as micron-size bubbles, forming a temporary, effective insulation layer on the ingot surface. This reduces thermal stress in the ingot butt. An insulation pad covering about 60% of the bottom block is used in conjunction with the carbon dioxide injection when maximum butt swell reduction is desired. The process, implemented in four Alcoa ingot plants, has proven extremely successful.

  10. Functional differentiability in time-dependent quantum mechanics.

    PubMed

    Penz, Markus; Ruggenthaler, Michael

    2015-03-28

    In this work, we investigate the functional differentiability of the time-dependent many-body wave function and of derived quantities with respect to time-dependent potentials. For properly chosen Banach spaces of potentials and wave functions, Fréchet differentiability is proven. From this follows an estimate for the difference of two solutions to the time-dependent Schrödinger equation that evolve under the influence of different potentials. Such results can be applied directly to the one-particle density and to bounded operators, and present a rigorous formulation of non-equilibrium linear-response theory where the usual Lehmann representation of the linear-response kernel is not valid. Further, the Fréchet differentiability of the wave function provides a new route towards proving basic properties of time-dependent density-functional theory.

  11. TetrUSS Capabilities for S and C Applications

    NASA Technical Reports Server (NTRS)

    Frink, Neal T.; Parikh, Paresh

    2004-01-01

    TetrUSS is a suite of loosely coupled computational fluid dynamics software that is packaged into a complete flow analysis system. The system components consist of tools for geometry setup, grid generation, flow solution, visualization, and various utilities. Development began in 1990, and TetrUSS has evolved into a proven and stable system for Euler and Navier-Stokes analysis and design of unconventional configurations. It is well developed and validated, has a broad base of support, and is presently a workhorse code because of the level of confidence that has been established through wide use. The entire system can now run on Linux or Mac architectures. In the following slides, I will highlight more of the features of the VGRID and USM3D codes.

  12. Efficient searching in meshfree methods

    NASA Astrophysics Data System (ADS)

    Olliff, James; Alford, Brad; Simkins, Daniel C.

    2018-04-01

    Meshfree methods such as the Reproducing Kernel Particle Method and the Element Free Galerkin method have proven to be excellent choices for problems involving complex geometry, evolving topology, and large deformation, owing to their ability to model the problem domain without the constraints imposed on Finite Element Method (FEM) meshes. However, meshfree methods carry an added computational cost over FEM that comes from at least two sources: increased cost of shape function evaluation and the determination of adjacency or connectivity. The focus of this paper is to formally address the types of adjacency information that arise in various uses of meshfree methods; discuss available techniques for computing the various adjacency graphs; propose a new search algorithm and data structure; and finally compare the memory and run-time performance of the methods.
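
    As context for the adjacency queries discussed above, the following is a minimal sketch of a cell-list (background grid) neighbor search, a standard technique for finding all particles within a kernel support radius. It is illustrative only, not the authors' proposed algorithm; the function names and the 2-D setting are assumptions.

        # Minimal cell-list neighbor search for meshfree adjacency queries.
        # Illustrative sketch only; not the paper's proposed data structure.
        import numpy as np
        from collections import defaultdict

        def build_cells(points, h):
            """Hash each point into a uniform background grid of cell size h."""
            cells = defaultdict(list)
            for i, p in enumerate(points):
                cells[tuple((p // h).astype(int))].append(i)
            return cells

        def neighbors(points, cells, h, i):
            """Indices of points within support radius h of point i (2-D)."""
            c = (points[i] // h).astype(int)
            out = []
            for dx in (-1, 0, 1):           # only the 3x3 surrounding cells
                for dy in (-1, 0, 1):       # need to be examined
                    for j in cells.get((c[0] + dx, c[1] + dy), ()):
                        if j != i and np.linalg.norm(points[j] - points[i]) <= h:
                            out.append(j)
            return out

        pts = np.random.rand(1000, 2)
        cells = build_cells(pts, 0.05)
        print(neighbors(pts, cells, 0.05, 0))

    Because each query inspects only nearby cells rather than all points, the cost per query is roughly constant for quasi-uniform point distributions, which is the property the search structures discussed in the paper aim to improve upon.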

  13. Like your labels?

    PubMed

    Field, Michele

    2010-01-01

    The descriptive “conventions” used on food labels are always evolving. Today, however, the changes are so complicated (partly driven by legislation requiring disclosures about environmental impacts, health issues, and geographical provenance) that these labels more often baffle buyers than enlighten them. In a light-handed manner, the article points to how sometimes reading label language can be like deciphering runes—and how if we are familiar with the technical terms, we can find a literal meaning, but still not see the implications. The article could be ten times longer because food labels vary according to cultures—but all food-exporting cultures now take advantage of our short attention-span when faced with these texts. The question is whether less is more—and if so, in this contest for our attention, what “contestant” is voted off.

  14. Geochemical and NdSr isotopic composition of deep-sea turbidites: Crustal evolution and plate tectonic associations

    NASA Astrophysics Data System (ADS)

    McLennan, S. M.; Taylor, S. R.; McCulloch, M. T.; Maynard, J. B.

    1990-07-01

    Petrographic, geochemical, and isotopic data for turbidites from a variety of tectonic settings exhibit considerable variability that is related to tectonic association. Passive margin turbidites (Trailing Edge, Continental Collision) display high framework quartz (Q) content in sands, evolved major element compositions (high Si/Al, K/Na), incompatible element enrichments (high Th/Sc, La/Sc, La/Yb), negative Eu-anomalies and variable Th/U ratios. They have low 143Nd/144Nd and high 87Sr/86Sr (εNd = -26 to -10; 87Sr/86Sr = 0.709 to 0.734), indicating a dominance of old upper crustal sources. Active margin settings (Fore Arc, Continental Arc, Back Arc, Strike Slip) commonly exhibit quite different compositions. Th/Sc varies from <0.01 to 1.8, and εNd varies from -13.8 to +8.3. Eu-anomalies range from no anomaly (Eu/Eu* = 1.0) to Eu-depletions typical of post-Archean shales (Eu/Eu* = 0.65). Active margin data are explained by mixtures of young arc-derived material, with variable composition, and old upper crustal sources. Major element data indicate that passive margin turbidites have experienced more severe weathering histories than those from active settings. Most trace elements are enriched in muds relative to associated sands because of dilution effects from quartz and calcite and concentration of trace elements in clays. Exceptions include Zr, Hf (heavy mineral influence) and Tl (enriched in feldspar), which display enrichments in sands. Active margin sands commonly exhibit higher Eu/Eu* than associated muds, resulting from concentration of plagioclase during sorting. Some associated sands and muds, especially from active settings, have systematic differences in Th/Sc ratios and Nd-isotopic composition, indicating that various provenance components may separate into different grain-size fractions during sedimentary sorting processes. Trace element abundances of modern turbidites, from both active and passive settings, differ from Archean turbidites in several important ways. Modern turbidites have less uniformity, for example, in Th/Sc ratios. On average, modern turbidites have greater depletions in Eu (lower Eu/Eu*) than do Archean turbidites, suggesting that the processes of intracrustal differentiation (involving plagioclase fractionation) are of greater importance for crustal evolution at modern continental margins than they were during the Archean. Modern turbidites do not display HREE depletion, a feature commonly seen in Archean data. HREE depletion (GdN/YbN > 2.0) in Archean sediments results from incorporation of felsic igneous rocks that were in equilibrium (or their sources were in equilibrium) with garnet sometime in their history. Absence of HREE depletion at modern continental margins suggests that processes of crust formation (or mantle source compositions) may have differed. Differences in trace element abundances for Archean and modern turbidites add support to suggestions that upper continental crust compositions and major processes responsible for continental crust differentiation differed during the Archean. Neodymium model ages, thought to approximate average provenance age, are highly variable (TDM(Nd) = 0-2.6 Ga) in modern turbidites, in contrast with studies that indicate Nd-model ages of lithified Phanerozoic sediment are fairly constant at about 1.5-2.0 Ga. This variability indicates that continental margin sediments incorporate new mantle-derived components, as well as continental crust of widely varying age, during recycling.
The apparent dearth of ancient sediments with Nd-model age similar to stratigraphic age supports the suggestion that preservation potential of sediments is related to tectonic setting. Many samples from active settings have isotopic compositions similar to or only slightly evolved from mantle-derived igneous rocks. Subduction of active margin turbidites should be considered in models of crust-mantle recycling. For short-term recycling, such as that postulated for island arc petrogenesis, arc-derived turbidites cannot be easily recognized as a source component because of the lack of time available for isotopic evolution. If turbidites were incorporated into the sources of ocean island volcanics, the isotopic signatures would be considerably more evolved since most models call for long mantle storage times (1.0-2.0 Ga), prior to incorporation. Four provenance components are recognized on the basis of geochemistry and Nd-isotopic composition: (1) Old Upper Continental Crust (old igneous/metamorphic terranes, recycled sediment); (2) Young Undifferentiated Arc (young volcanic/plutonic source that has not experienced plagioclase fractionation); (3) Young Differentiated Arc (young volcanic/plutonic source that has experienced plagioclase fractionation); (4) MORB (minor). Relative proportions of these components are influenced by the plate tectonic association of the provenance and are typically (but not necessarily) reflected in the depositional basin. Provenance of quartzose (mainly passive settings) and non-quartzose (mainly active settings) turbidites can be characterized by bulk composition (e.g., Th/Sc) and Nd-isotopic composition (reflecting age).
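
    For readers unfamiliar with the εNd notation used in this record, the standard definition (a well-known convention in isotope geochemistry, not specific to this paper) is, in LaTeX:

        \varepsilon_{\mathrm{Nd}} = \left( \frac{\left(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\right)_{\mathrm{sample}}}{\left(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\right)_{\mathrm{CHUR}}} - 1 \right) \times 10^{4}

    Here CHUR is the chondritic uniform reservoir. Strongly negative values, such as the -26 to -10 quoted above for passive margin turbidites, indicate old, evolved crustal sources, while values near or above zero indicate young mantle-derived material.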

  15. Effect of proton-conduction in electrolyte on electric efficiency of multi-stage solid oxide fuel cells

    PubMed Central

    Matsuzaki, Yoshio; Tachikawa, Yuya; Somekawa, Takaaki; Hatae, Toru; Matsumoto, Hiroshige; Taniguchi, Shunsuke; Sasaki, Kazunari

    2015-01-01

    Solid oxide fuel cells (SOFCs) are promising electrochemical devices that enable the highest fuel-to-electricity conversion efficiencies under high operating temperatures. The concept of multi-stage electrochemical oxidation using SOFCs has been proposed and studied over the past several decades for further improving the electrical efficiency. However, the improvement is limited by fuel dilution downstream of the fuel flow. Therefore, evolved technologies are required to achieve considerably higher electrical efficiencies. Here we present an innovative concept for a critically high fuel-to-electricity conversion efficiency of up to 85% based on the lower heating value (LHV), in which high-temperature multi-stage electrochemical oxidation is combined with a proton-conducting solid electrolyte. Switching the solid electrolyte from a conventional oxide-ion conducting material to a proton-conducting material under the high-temperature multi-stage electrochemical oxidation mechanism has proven to be highly advantageous for the electrical efficiency. The DC efficiency of 85% (LHV) corresponds to a net AC efficiency of approximately 76% (LHV), where the net AC efficiency refers to the transmission-end AC efficiency. This evolved concept will yield a considerably higher efficiency with a much smaller generation capacity than state-of-the-art most advanced combined cycle (MACC) plants of the several-tens-of-MW class. PMID:26218470
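
    The relation between the two quoted efficiencies implies a combined power-conditioning and transmission efficiency of roughly 76/85 ≈ 0.89. This back-of-envelope step is inferred from the figures above, not stated by the authors:

        \eta_{\mathrm{AC}} = \eta_{\mathrm{DC}} \cdot \eta_{\mathrm{cond}} \approx 0.85 \times 0.89 \approx 0.76 \quad \text{(LHV)}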

  16. Can multilayer brain networks be a real step forward?. Comment on "Network science of biological systems at different scales: A review" by M. Gosak et al.

    NASA Astrophysics Data System (ADS)

    Buldú, Javier M.; Papo, David

    2018-03-01

    Over the last two decades Network Science has become one of the most active fields in science, whose growth has been supported by four fundamental pillars: statistical physics, nonlinear dynamics, graph theory and Big Data [1]. Initially concerned with analyzing the structure of networks, Network Science rapidly turned its attention to the implications of network topology for the dynamics of, and processes unfolding on, networked systems, greatly improving our understanding of diffusion, synchronization, epidemics and information transmission in complex systems [2]. The network approach typically considered complex systems as evolving in a vacuum; however, real networks are generally not isolated systems, but are in continuous and evolving contact with other networks, with which they interact in multiple qualitatively different and typically time-varying ways. These systems can then be represented as a collection of subsystems with connectivity layers, which are simply collapsed when considering the traditional monolayer representation. Surprisingly, such an "unpacking" of layers has proven to bear profound consequences for the structural and dynamical properties of networks, leading for instance to counter-intuitive synchronization phenomena, where maximal synchronization is achieved through strategies opposite to those that maximize synchronization in isolated networks [3].
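
    To make the layered representation concrete, the sketch below assembles a supra-adjacency matrix for a two-layer multiplex network, a standard multilayer-network construction; the layer size, random adjacencies and coupling strength w are illustrative assumptions.

        # Supra-adjacency matrix for a two-layer multiplex network: intra-layer
        # adjacency on the diagonal blocks, node-to-itself inter-layer coupling
        # on the off-diagonal blocks. A standard construction, for illustration.
        import numpy as np

        n = 4                                    # nodes per layer
        A1 = np.random.randint(0, 2, (n, n))     # layer-1 adjacency (toy, directed)
        A2 = np.random.randint(0, 2, (n, n))     # layer-2 adjacency
        np.fill_diagonal(A1, 0); np.fill_diagonal(A2, 0)
        w = 0.5                                  # inter-layer coupling strength

        supra = np.block([[A1, w * np.eye(n)],
                          [w * np.eye(n), A2]])
        print(supra.shape)   # (2n, 2n); the monolayer view collapses the blocks

    Spectral properties of this supra-adjacency matrix (rather than of the collapsed monolayer matrix) are what drive the counter-intuitive synchronization results cited above.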

  17. Effect of proton-conduction in electrolyte on electric efficiency of multi-stage solid oxide fuel cells.

    PubMed

    Matsuzaki, Yoshio; Tachikawa, Yuya; Somekawa, Takaaki; Hatae, Toru; Matsumoto, Hiroshige; Taniguchi, Shunsuke; Sasaki, Kazunari

    2015-07-28

    Solid oxide fuel cells (SOFCs) are promising electrochemical devices that enable the highest fuel-to-electricity conversion efficiencies under high operating temperatures. The concept of multi-stage electrochemical oxidation using SOFCs has been proposed and studied over the past several decades for further improving the electrical efficiency. However, the improvement is limited by fuel dilution downstream of the fuel flow. Therefore, evolved technologies are required to achieve considerably higher electrical efficiencies. Here we present an innovative concept for a critically high fuel-to-electricity conversion efficiency of up to 85% based on the lower heating value (LHV), in which high-temperature multi-stage electrochemical oxidation is combined with a proton-conducting solid electrolyte. Switching the solid electrolyte from a conventional oxide-ion conducting material to a proton-conducting material under the high-temperature multi-stage electrochemical oxidation mechanism has proven to be highly advantageous for the electrical efficiency. The DC efficiency of 85% (LHV) corresponds to a net AC efficiency of approximately 76% (LHV), where the net AC efficiency refers to the transmission-end AC efficiency. This evolved concept will yield a considerably higher efficiency with a much smaller generation capacity than state-of-the-art most advanced combined cycle (MACC) plants of the several-tens-of-MW class.

  18. NASA's Space Launch System: Moving Toward the Launch Pad

    NASA Technical Reports Server (NTRS)

    Creech, Stephen D.; May, Todd A.

    2013-01-01

    The National Aeronautics and Space Administration's (NASA's) Space Launch System (SLS) Program, managed at the Marshall Space Flight Center (MSFC), is making progress toward delivering a new capability for human space flight and scientific missions beyond Earth orbit. Designed with the goals of safety, affordability, and sustainability in mind, the SLS rocket will launch the Orion Multi-Purpose Crew Vehicle (MPCV), equipment, supplies, and major science missions for exploration and discovery. Supporting Orion's first autonomous flight to lunar orbit and back in 2017 and its first crewed flight in 2021, the SLS will evolve into the most powerful launch vehicle ever flown via an upgrade approach that will provide building blocks for future space exploration. NASA is working to deliver this new capability in an austere economic climate, a fact that has inspired the SLS team to find innovative solutions to the challenges of designing, developing, fielding, and operating the largest rocket in history. This paper will summarize the planned capabilities of the vehicle, the progress the SLS Program has made in the 2 years since the Agency formally announced its architecture in September 2011, the path it is following to reach the launch pad in 2017 and then to evolve the 70 metric ton (t) initial lift capability to 130-t lift capability after 2021. The paper will explain how, to meet the challenge of a flat funding curve, an architecture was chosen that combines the use and enhancement of legacy systems and technology with strategic new developments that will evolve the launch vehicle's capabilities. This approach reduces the time and cost of delivering the initial 70 t Block 1 vehicle, and reduces the number of parallel development investments required to deliver the evolved 130 t Block 2 vehicle. The paper will outline the milestones the program has already reached, from developmental milestones such as the manufacture of the first flight hardware, to life-cycle milestones such as the vehicle's Preliminary Design Review (PDR). The paper will also discuss the remaining challenges both in delivering the 70-t vehicle and in evolving its capabilities to the 130-t vehicle, and how NASA plans to accomplish these goals. As this paper will explain, SLS is making measurable progress toward becoming a global infrastructure asset for robotic and human scouts of all nations by harnessing business and technological innovations to deliver sustainable solutions for space exploration.

  19. Electronic processing and control system with programmable hardware

    NASA Technical Reports Server (NTRS)

    Alkalaj, Leon (Inventor); Fang, Wai-Chi (Inventor); Newell, Michael A. (Inventor)

    1998-01-01

    A computer system with reprogrammable hardware allows dynamic allocation of hardware resources for different functions and adaptability to different processors and different operating platforms. All hardware resources are physically partitioned into system-user hardware and application-user hardware depending on the specific operation requirements. A reprogrammable interface preferably interconnects the system-user hardware and application-user hardware.

  20. New Geologic Map and Structural Cross Sections of the Death Valley Extended Terrain (southern Sierra Nevada, California to Spring Mountains, Nevada): Toward 3D Kinematic Reconstructions

    NASA Astrophysics Data System (ADS)

    Lutz, B. M.; Axen, G. J.; Phillips, F. M.

    2017-12-01

    Tectonic reconstructions for the Death Valley extended terrain (S. Sierra Nevada to Spring Mountains) have evolved to include a growing number of offset markers for strike-slip fault systems but are mainly map view (2D) and do not incorporate a wealth of additional constraints. We present a new 1:300,000 digital geologic map and structural cross sections, which provide a geometric framework for stepwise 3D reconstructions of Late Cenozoic extension and transtension. 3D models will decipher complex relationships between strike-slip, normal, and detachment faults and their role in accommodating large-magnitude extension and rigid block rotation. Fault coordination is key to understanding how extensional systems and transform margins evolve with changing boundary conditions. 3D geometric and kinematic analysis adds key strain compatibility unavailable in 2D reconstructions. The stratigraphic framework of Fridrich and Thompson (2011) is applied to rocks outside of Death Valley. Cenozoic basin deposits are grouped into 6 assemblages differentiated by age, provenance, and bounding unconformities, which reflect Pacific-North American plate boundary events. Pre-Cenozoic rocks are grouped for utility: for example, Carrara Formation equivalents are grouped because they form a Cordilleran thrust decollement zone. Offset markers are summarized in the associated tectonic map. Other constraints include fault geometries and slip rates, age, geometry and provenance of Cenozoic basins, gravity, cooling histories of footwalls, and limited seismic/well data. Cross sections were constructed parallel to net-transport directions of fault blocks. Surface fault geometries were compiled from previous mapping and projected to depth using seismic/gravity data. Cooling histories of footwalls guided geometric interpretation of uplifted detachment footwalls. Mesh surfaces will be generated from 2D section lines to create a framework for stepwise 3D reconstruction of extension and transtension in the study area. Analysis of all available data in a seamless 3D framework should force more unique solutions to outstanding kinematic problems, provide a better understanding of the Cordilleran thrust belt, and constrain the mechanisms of strain partitioning between the upper and lower crust.

  1. Realizing the Living Paper using the ProvONE Model for Reproducible Research

    NASA Astrophysics Data System (ADS)

    Jones, M. B.; Jones, C. S.; Ludäscher, B.; Missier, P.; Walker, L.; Slaughter, P.; Schildhauer, M.; Cuevas-Vicenttín, V.

    2015-12-01

    Science has advanced through traditional publications that codify research results as a permanent part of the scientific record. But because publications are static and atomic, researchers can only cite and reference a whole work when building on prior work of colleagues. The open source software model has demonstrated a new approach in which strong version control in an open environment can nurture an open ecosystem of software. Developers now commonly fork and extend software giving proper credit, with less repetition, and with confidence in the relationship to original software. Through initiatives like 'Beyond the PDF', an analogous model has been imagined for open science, in which software, data, analyses, and derived products become first class objects within a publishing ecosystem that has evolved to be finer-grained and is realized through a web of linked open data. We have prototyped a Living Paper concept by developing the ProvONE provenance model for scientific workflows, with prototype deployments in DataONE. ProvONE promotes transparency and openness by describing the authenticity, origin, structure, and processing history of research artifacts and by detailing the steps in computational workflows that produce derived products. To realize the Living Paper, we decompose scientific papers into their constituent products and publish these as compound objects in the DataONE federation of archival repositories. Each individual finding and sub-product of a research project (such as a derived data table, a workflow or script, a figure, an image, or a finding) can be independently stored, versioned, and cited. ProvONE provenance traces link these fine-grained products within and across versions of a paper, and across related papers that extend an original analysis. This allows for open scientific publishing in which researchers extend and modify findings, creating a dynamic, evolving web of results that collectively represent the scientific enterprise. The Living Paper provides detailed metadata for properly interpreting and verifying individual research findings, for tracing the origin of ideas, for launching new lines of inquiry, and for implementing transitive credit for research and engineering.
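
    A hedged sketch of what such a provenance trace can look like, recorded with rdflib using PROV-O terms (ProvONE extends the W3C PROV model; the resource URIs below are illustrative assumptions, not the project's real identifiers):

        # Minimal derivation trace in PROV-O, the vocabulary ProvONE extends.
        # All example.org URIs are hypothetical; this is not DataONE's code.
        from rdflib import Graph, Namespace

        PROV = Namespace("http://www.w3.org/ns/prov#")
        EX = Namespace("https://example.org/paper/")   # hypothetical identifiers

        g = Graph()
        g.bind("prov", PROV)
        g.add((EX.figure2, PROV.wasGeneratedBy, EX.plot_script_run))
        g.add((EX.plot_script_run, PROV.used, EX.derived_table_v3))
        g.add((EX.derived_table_v3, PROV.wasDerivedFrom, EX.raw_data_v1))
        print(g.serialize(format="turtle"))

    Traces like this are what let a figure in one version of a "living paper" be linked back, artifact by artifact, to the raw data and scripts that produced it.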

  2. [Chronic pancreatitis diagnosed after the first attack of acute pancreatitis].

    PubMed

    Bojková, Martina; Dítě, Petr; Uvírová, Magdalena; Dvořáčková, Nina; Kianička, Bohuslav; Kupka, Tomáš; Svoboda, Pavel; Klvaňa, Pavel; Martínek, Arnošt

    2016-02-01

    One of the diseases involving a potential risk of developing chronic pancreatitis is acute pancreatitis. Of the overall number of 231 individuals followed with a diagnosis of chronic pancreatitis, 56 patients were initially treated for acute pancreatitis (24.2 %). Within an interval of 12-24 months from the first attack of acute pancreatitis, their condition gradually progressed to reach the picture of chronic pancreatitis. The individuals included in the study abstained from alcohol following the first attack of acute pancreatitis, and no relapse of acute pancreatitis was proven during the period of their monitoring. The etiology of acute pancreatitis identified alcohol as the predominant cause (55.3 %); biliary etiology was proven in 35.7 %. According to the revised Atlanta classification, severe pancreatitis was established in 69.6 % of the patients; the others met the criteria for the intermediate form, and those with the mild form were not included. Significant risk factors present among the patients were smoking and obesity, and pancreatogenous diabetes mellitus was identified in 18 % and 25.8 % of them, respectively. 88.1 % of the patients with acute pancreatitis were smokers. The majority of individuals with chronic pancreatitis following an attack of acute pancreatitis were of a productive age, from 25 to 50 years. It is not only acute alcoholic pancreatitis which evolves into chronic pancreatitis; we have also identified this transition for pancreatitis of biliary etiology.

  3. The LapSim virtual reality simulator: promising but not yet proven.

    PubMed

    Fairhurst, Katherine; Strickland, Andrew; Maddern, Guy

    2011-02-01

    The acquisition of technical skills using surgical simulators is an area of active research and rapidly evolving technology. The LapSim is a virtual reality simulator that currently allows practice of basic laparoscopic skills and some procedures. To date, no reviews have been published with reference to a single virtual reality simulator. A PubMed search was performed using the keyword "LapSim," with further papers identified from the citations of original search articles. Use of the LapSim to develop surgical skills has yielded generally positive results, although inconsistencies exist. Data regarding the transferability of learned skills to the operative environment are encouraging, as is the validation work, particularly the use of a combination of measured parameters to produce an overall comparative performance score. Although the LapSim currently does not have any proven significant advantages over video trainers in terms of basic skills instruction, and although the results of validation studies are variable, the potential for such technology to have a huge impact on surgical training is apparent. Work to determine standardized learning curves and proficiency criteria for different levels of trainees is incomplete. Moreover, defining which performance parameters measured by the LapSim accurately determine laparoscopic skill is complex. Further technological advances will undoubtedly improve the efficacy of the LapSim, and the results of large multicenter trials are anticipated.
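
    The "combination of measured parameters into an overall comparative performance score" mentioned above can be illustrated generically; the z-score scheme, weights, and trial values below are assumptions, not the LapSim's proprietary scoring method.

        # Generic composite performance score from several measured parameters
        # (completion time, instrument path length, error count). Illustrative
        # weights and z-score normalization; not LapSim's actual algorithm.
        import numpy as np

        # rows = trainees; columns = (time s, path length mm, errors)
        trials = np.array([[95.0, 1400.0, 2.0],
                           [120.0, 1750.0, 5.0],
                           [80.0, 1200.0, 1.0]])
        z = (trials - trials.mean(axis=0)) / trials.std(axis=0)
        weights = np.array([0.4, 0.3, 0.3])   # all metrics: lower is better
        score = -(z * weights).sum(axis=1)    # higher composite = better
        print(score.round(2))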

  4. Cyber-workstation for computational neuroscience.

    PubMed

    Digiovanna, Jack; Rattanatamrong, Prapaporn; Zhao, Ming; Mahmoudi, Babak; Hermer, Linda; Figueiredo, Renato; Principe, Jose C; Fortes, Jose; Sanchez, Justin C

    2010-01-01

    A Cyber-Workstation (CW) to study in vivo, real-time interactions between computational models and large-scale brain subsystems during behavioral experiments has been designed and implemented. The design philosophy seeks to directly link the in vivo neurophysiology laboratory with scalable computing resources to enable more sophisticated computational neuroscience investigation. The architecture designed here allows scientists to develop new models and integrate them with existing models (e.g. recursive least-squares regressor) by specifying appropriate connections in a block-diagram. Then, adaptive middleware transparently implements these user specifications using the full power of remote grid-computing hardware. In effect, the middleware deploys an on-demand and flexible neuroscience research test-bed to provide the neurophysiology laboratory extensive computational power from an outside source. The CW consolidates distributed software and hardware resources to support time-critical and/or resource-demanding computing during data collection from behaving animals. This power and flexibility is important as experimental and theoretical neuroscience evolves based on insights gained from data-intensive experiments, new technologies and engineering methodologies. This paper describes briefly the computational infrastructure and its most relevant components. Each component is discussed within a systematic process of setting up an in vivo, neuroscience experiment. Furthermore, a co-adaptive brain machine interface is implemented on the CW to illustrate how this integrated computational and experimental platform can be used to study systems neurophysiology and learning in a behavior task. We believe this implementation is also the first remote execution and adaptation of a brain-machine interface.
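
    The abstract names a recursive least-squares (RLS) regressor as an example model block. For concreteness, here is the textbook RLS update; it is the standard algorithm, not the Cyber-Workstation's code, and the toy data stream is an assumption.

        # Textbook recursive least-squares (RLS) update, the kind of decoder
        # block the abstract mentions. Standard algorithm; toy data stream.
        import numpy as np

        def rls_update(w, P, x, d, lam=0.99):
            """One RLS step: weights w, inverse correlation P, input x, target d."""
            x = x.reshape(-1, 1)
            k = P @ x / (lam + (x.T @ P @ x).item())   # gain vector
            e = d - (w.T @ x).item()                   # a priori prediction error
            w = w + k * e
            P = (P - k @ (x.T @ P)) / lam
            return w, P

        n = 8
        w, P = np.zeros((n, 1)), 100.0 * np.eye(n)
        for _ in range(200):                 # toy stream with target d = sum(x)
            x = np.random.randn(n)
            d = x.sum() + 0.01 * np.random.randn()
            w, P = rls_update(w, P, x, d)
        print(w.ravel().round(2))            # converges towards all ones

    In the CW architecture, a block like this would be one node in the user's block-diagram, with the middleware handling its deployment onto remote computing hardware.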

  5. Extraction-Separation Performance and Dynamic Modeling of Orion Test Vehicles with Adams Simulation: 3rd Edition

    NASA Technical Reports Server (NTRS)

    Varela, Jose G.; Reddy, Satish; Moeller, Enrique; Anderson, Keith

    2017-01-01

    NASA's Orion Capsule Parachute Assembly System (CPAS) Project is now in the qualification phase of testing, and the Adams simulation has continued to evolve to model the complex dynamics experienced during the test article extraction and separation phases of flight. The ability to initiate tests near the upper altitude limit of the Orion parachute deployment envelope requires extractions from the aircraft at 35,000 ft-MSL. Engineering development phase testing of the Parachute Test Vehicle (PTV) carried by the Carriage Platform Separation System (CPSS) at altitude resulted in test support equipment hardware failures due to increased energy caused by higher true airspeeds. As a result, hardware modifications became a necessity, requiring ground static testing of the textile components and a new ground dynamic test of the extraction system to be devised. Force-displacement curves from static tests were incorporated into the Adams simulations, allowing prediction of the loads, velocities and margins encountered during both flight and ground dynamic tests. The Adams simulation was then further refined by fine-tuning the damping terms to match the peak loads recorded in the ground dynamic tests. The failure observed in flight testing was successfully replicated in ground testing, and the true safety margins of the textile components were revealed. A multi-loop energy modulator was then incorporated into the system-level Adams simulation model so that its effect on improving test margins could be properly evaluated, leading to high-confidence ground verification testing of the final design solution.
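
    The "tune damping to match measured peak loads" step can be sketched with a much simpler stand-in model: below, a 1-DOF mass-spring-damper replaces the Adams model, and a root-finder solves for the damping that reproduces a measured peak force. The model and every number are illustrative assumptions, not CPAS data.

        # Hedged sketch of damping-term tuning: fit damping c of a toy 1-DOF
        # impact model so the simulated peak force matches a measured value.
        import numpy as np
        from scipy.integrate import odeint
        from scipy.optimize import brentq

        m, k, v0 = 500.0, 2.0e5, 8.0           # mass kg, stiffness N/m, impact m/s

        def peak_force(c):
            def rhs(y, t):                     # y = [displacement, velocity]
                return [y[1], -(k * y[0] + c * y[1]) / m]
            t = np.linspace(0.0, 0.5, 2000)
            y = odeint(rhs, [0.0, v0], t)
            return np.max(np.abs(k * y[:, 0] + c * y[:, 1]))

        measured_peak = 9.0e4                  # N, hypothetical ground-test value
        c_fit = brentq(lambda c: peak_force(c) - measured_peak, 1.0, 2.0e4)
        print(f"fitted damping: {c_fit:.0f} N s/m")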

  6. Needs challenge software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-07-01

    New hardware and software tools build on existing platforms and add performance and ease-of-use benefits as the struggle to find and produce hydrocarbons at the lowest cost becomes more and more competitive. Software tools now provide geoscientists and petroleum engineers with a better understanding of reservoirs, from the shape and makeup of formations to behavior projections as hydrocarbons are extracted. Petroleum software tools allow scientists to simulate oil flow, predict the life expectancy of a reservoir, and even help determine how to extend the life and economic viability of the reservoir. The requirement of the petroleum industry to find and extract petroleum more efficiently drives the solutions provided by software and service companies. To one extent or another, most of the petroleum software products available today have achieved an acceptable level of competency. Innovative, high-impact products from small, focused companies often were bought out by larger companies with deeper pockets when their developers couldn't fund their expansion. Other products disappeared from the scene because they were unable to evolve fast enough to compete. There are still enough small companies around producing excellent products to prevent the marketplace from feeling too narrow and lacking in choice. Oil companies requiring specific solutions to their problems have helped fund product development within the commercial sector. As the industry has matured, strategic alliances between vendors, both hardware and software, have provided market advantages, often combining strengths to enter new and undeveloped areas for technology. The pace of technological development has been fast and constant.

  7. Portable Radiation Package (PRP) Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reynolds, R Michael

    The Portable Radiation Package (PRP) was developed to provide basic radiation information in locations such as ships at sea where proper exposure is remote and difficult, the platform is in motion, and azimuth alignment is not fixed. Development of the PRP began at Brookhaven National Laboratory (BNL) in the mid-1990s, and versions of it were deployed on ships in the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility's Nauru-99 project. The PRP was deployed on ships in support of the National Aeronautics and Space Administration (NASA) Sensor Intercomparison for Marine Biological and Interdisciplinary Ocean Studies (SIMBIOS) program. Over the years the measurements have remained the same while the post-processing data analysis, especially for the FRSR, has evolved. This document describes the next-generation Portable Radiation Package (PRP2) that was developed for the DOE ARM Facility, under contract no. 9F-31462 from Argonne National Laboratory (ANL). The PRP2 has the same scientific principles that were well validated in prior studies, but has upgraded electronic hardware. The PRP2 approach is completely modular, both in hardware and software. Each sensor input is treated as a separate serial stream into the data collection computer. In this way the operator has complete access to each component of the system for purposes of error checking, calibration, and maintenance. The resulting system is more reliable, easier to install in complex situations, and more amenable to upgrade.
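
    A minimal sketch of the "one independent serial stream per sensor" design described above, using pyserial with one reader thread per port; the port names, baud rate, and record handling are hypothetical, not the PRP2's actual configuration.

        # One reader thread per sensor serial port, merged through one queue.
        # Port names and baud rates are hypothetical placeholders.
        import threading, queue
        import serial                                   # pyserial

        records = queue.Queue()

        def reader(port, baud):
            """Read one sensor's serial stream and enqueue decoded lines."""
            with serial.Serial(port, baud, timeout=1.0) as s:
                while True:
                    line = s.readline()
                    if line:
                        records.put((port, line.decode(errors="replace").strip()))

        for port in ("/dev/ttyUSB0", "/dev/ttyUSB1"):   # hypothetical sensor ports
            threading.Thread(target=reader, args=(port, 9600), daemon=True).start()

        for _ in range(100):                            # consumer merges all streams
            sensor, text = records.get()
            print(sensor, text)

    Keeping each stream independent, as in this sketch, is what gives the operator direct per-sensor access for error checking and calibration.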

  8. Cyber-Workstation for Computational Neuroscience

    PubMed Central

    DiGiovanna, Jack; Rattanatamrong, Prapaporn; Zhao, Ming; Mahmoudi, Babak; Hermer, Linda; Figueiredo, Renato; Principe, Jose C.; Fortes, Jose; Sanchez, Justin C.

    2009-01-01

    A Cyber-Workstation (CW) to study in vivo, real-time interactions between computational models and large-scale brain subsystems during behavioral experiments has been designed and implemented. The design philosophy seeks to directly link the in vivo neurophysiology laboratory with scalable computing resources to enable more sophisticated computational neuroscience investigation. The architecture designed here allows scientists to develop new models and integrate them with existing models (e.g. recursive least-squares regressor) by specifying appropriate connections in a block-diagram. Then, adaptive middleware transparently implements these user specifications using the full power of remote grid-computing hardware. In effect, the middleware deploys an on-demand and flexible neuroscience research test-bed to provide the neurophysiology laboratory extensive computational power from an outside source. The CW consolidates distributed software and hardware resources to support time-critical and/or resource-demanding computing during data collection from behaving animals. This power and flexibility is important as experimental and theoretical neuroscience evolves based on insights gained from data-intensive experiments, new technologies and engineering methodologies. This paper describes briefly the computational infrastructure and its most relevant components. Each component is discussed within a systematic process of setting up an in vivo, neuroscience experiment. Furthermore, a co-adaptive brain machine interface is implemented on the CW to illustrate how this integrated computational and experimental platform can be used to study systems neurophysiology and learning in a behavior task. We believe this implementation is also the first remote execution and adaptation of a brain-machine interface. PMID:20126436

  9. In vivo confocal microscopy of the cornea: New developments in image acquisition, reconstruction and analysis using the HRT-Rostock Corneal Module

    PubMed Central

    Petroll, W. Matthew; Robertson, Danielle M.

    2015-01-01

    The optical sectioning ability of confocal microscopy allows high magnification images to be obtained from different depths within a thick tissue specimen, and is thus ideally suited to the study of intact tissue in living subjects. In vivo confocal microscopy has been used in a variety of corneal research and clinical applications since its development over 25 years ago. In this article we review the latest developments in quantitative corneal imaging with the Heidelberg Retinal Tomograph with Rostock Corneal Module (HRT-RCM). We provide an overview of the unique strengths and weaknesses of the HRT-RCM. We discuss techniques for performing 3-D imaging with the HRT-RCM, including hardware and software modifications that allow full thickness confocal microscopy through focusing (CMTF) of the cornea, which can provide quantitative measurements of corneal sublayer thicknesses, stromal cell and extracellular matrix backscatter, and depth dependent changes in corneal keratocyte density. We also review current approaches for quantitative imaging of the subbasal nerve plexus, which require a combination of advanced image acquisition and analysis procedures, including wide field mapping and 3-D reconstruction of nerve structures. The development of new hardware, software, and acquisition techniques continues to expand the number of applications of the HRT-RCM for quantitative in vivo corneal imaging at the cellular level. Knowledge of these rapidly evolving strategies should benefit corneal clinicians and basic scientists alike. PMID:25998608
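
    The CMTF measurement described above reduces a through-focus image stack to an intensity-versus-depth profile whose peaks mark sublayer interfaces. The sketch below shows that reduction on a synthetic profile; the depth step, peak shapes, and prominence threshold are illustrative assumptions, not the authors' software.

        # CMTF-style depth profile: peaks in mean backscatter vs depth mark
        # corneal sublayer interfaces; peak spacing gives thickness estimates.
        import numpy as np
        from scipy.signal import find_peaks

        z_step_um = 2.0                        # assumed focal-plane step size
        z = np.arange(300)
        # synthetic profile: bright epithelial and endothelial peaks over stroma
        profile = (np.exp(-0.5 * ((z - 30) / 6) ** 2) +
                   0.8 * np.exp(-0.5 * ((z - 290) / 5) ** 2) +
                   0.2 + 0.01 * np.random.randn(300))

        peaks, _ = find_peaks(profile, prominence=0.3)
        print("layer spacing (um):", np.diff(peaks) * z_step_um)

    In practice each profile point would be the mean intensity of one confocal image in the z-stack rather than a synthetic value.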

  10. Towards Batched Linear Solvers on Accelerated Hardware Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haidar, Azzam; Dong, Tingzing Tim; Tomov, Stanimire

    2015-01-01

    As hardware evolves, an increasingly effective approach to developing energy-efficient, high-performance solvers is to design them to work on many small and independent problems. Indeed, many applications already need this functionality, especially for GPUs, which are known to be currently about four to five times more energy efficient than multicore CPUs for every floating-point operation. In this paper, we describe the development of the main one-sided factorizations, LU, QR, and Cholesky, that are needed for a set of small dense matrices to work in parallel. We refer to such algorithms as batched factorizations. Our approach is based on representing the algorithms as a sequence of batched BLAS routines for GPU-contained execution. Note that this is similar in functionality to the LAPACK and the hybrid MAGMA algorithms for large-matrix factorizations. But it is different from a straightforward approach, whereby each of the GPU's symmetric multiprocessors factorizes a single problem at a time. We illustrate how our performance analysis, together with the profiling and tracing tools, guided the development of batched factorizations to achieve up to a 2-fold speedup and 3-fold better energy efficiency compared to our highly optimized batched CPU implementations based on the MKL library on a two-socket Intel Sandy Bridge server. Compared to a batched LU factorization featured in NVIDIA's cuBLAS library for GPUs, we achieve up to a 2.5-fold speedup on the K40 GPU.
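
    A CPU analogue of the batched workload, solving many small independent LU systems in a loop with scipy; the paper's contribution is executing this pattern as batched BLAS on the GPU, so the sketch below only illustrates the problem shape, not the authors' implementation.

        # Many small, independent LU solves: the workload that batched GPU
        # factorizations execute concurrently. Plain scipy loop for illustration.
        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        batch, n = 1000, 32                              # many small matrices
        A = np.random.rand(batch, n, n) + n * np.eye(n)  # diagonally dominant
        b = np.random.rand(batch, n)

        x = np.empty_like(b)
        for i in range(batch):            # independent problems: the part a GPU
            lu, piv = lu_factor(A[i])     # would execute as one batched kernel
            x[i] = lu_solve((lu, piv), b[i])

        print(np.abs(np.einsum('bij,bj->bi', A, x) - b).max())  # tiny residual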

  11. An endogenous growth pattern of roots is revealed in seedlings grown in microgravity.

    PubMed

    Millar, Katherine D L; Johnson, Christina M; Edelmann, Richard E; Kiss, John Z

    2011-10-01

    In plants, sensitive and selective mechanisms have evolved to perceive and respond to light and gravity. We investigated the effects of microgravity on the growth and development of Arabidopsis thaliana (ecotype Landsberg) in a spaceflight experiment. These studies were performed with the Biological Research in Canisters (BRIC) hardware system in the middeck region of the space shuttle during mission STS-131 in April 2010. Seedlings were grown on nutrient agar in Petri dishes in BRIC hardware under dark conditions and then fixed in flight with paraformaldehyde, glutaraldehyde, or RNAlater. Although the long-term objective was to study the role of the actin cytoskeleton in gravity perception, in this article we focus on the analysis of morphology of seedlings that developed in microgravity. While previous spaceflight studies noted deleterious morphological effects due to the accumulation of ethylene gas, no such effects were observed in seedlings grown with the BRIC system. Seed germination was 89% in the spaceflight experiment and 91% in the ground control, and seedlings grew equally well in both conditions. However, roots of space-grown seedlings exhibited a significant difference (compared to the ground controls) in overall growth patterns in that they skewed to one direction. In addition, a greater number of adventitious roots formed from the axis of the hypocotyls in the flight-grown plants. Our hypothesis is that an endogenous response in plants causes the roots to skew and that this default growth response is largely masked by the normal 1 g conditions on Earth.

  12. Nanotoxicology and nanomedicine: making development decisions in an evolving governance environment

    NASA Astrophysics Data System (ADS)

    Rycroft, Taylor; Trump, Benjamin; Poinsatte-Jones, Kelsey; Linkov, Igor

    2018-02-01

    The fields of nanomedicine, risk analysis, and decision science have evolved considerably in the past decade, providing developers of nano-enabled therapies and diagnostic tools with more complete information than ever before and shifting a fundamental requisite of the nanomedical community from the need for more information about nanomaterials to the need for a streamlined method of integrating the abundance of nano-specific information into higher-certainty product design decisions. The crucial question facing nanomedicine developers that must select the optimal nanotechnology in a given situation has shifted from "how do we estimate nanomaterial risk in the absence of good risk data?" to "how can we derive a holistic characterization of the risks and benefits that a given nanomaterial may pose within a specific nanomedical application?" Many decision support frameworks have been proposed to assist with this inquiry; however, those based in multicriteria decision analysis have proven to be most adaptive in the rapidly evolving field of nanomedicine—from the early stages of the field when conditions of significant uncertainty and incomplete information dominated, to today when nanotoxicology and nano-environmental health and safety information is abundant but foundational paradigms such as chemical risk assessment, risk governance, life cycle assessment, safety-by-design, and stakeholder engagement are undergoing substantial reformation in an effort to address the needs of emerging technologies. In this paper, we reflect upon 10 years of developments in nanomedical engineering and demonstrate how the rich knowledgebase of nano-focused toxicological and risk assessment information developed over the last decade enhances the capability of multicriteria decision analysis approaches and underscores the need to continue the transition from traditional risk assessment towards risk-based decision-making and alternatives-based governance for emerging technologies.
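
    A minimal weighted-sum example of the multicriteria decision analysis (MCDA) approach the paper discusses, scoring candidate nanomaterials on normalized criteria; the alternatives, criteria, weights, and values are illustrative assumptions, not data from the review.

        # Minimal weighted-sum MCDA: normalize criteria, flip cost criteria,
        # and rank alternatives by weighted score. Illustrative inputs only.
        import numpy as np

        alternatives = ["nano-A", "nano-B", "nano-C"]
        # columns: efficacy (benefit), toxicity (cost), cost per dose (cost)
        X = np.array([[0.80, 0.30, 120.0],
                      [0.60, 0.10, 80.0],
                      [0.90, 0.50, 200.0]])
        benefit = np.array([True, False, False])

        Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
        Xn[:, ~benefit] = 1.0 - Xn[:, ~benefit]   # lower is better for costs
        weights = np.array([0.5, 0.3, 0.2])       # stakeholder-derived weights
        scores = Xn @ weights
        print(dict(zip(alternatives, scores.round(2))))

    Real MCDA applications in nanomedicine add uncertainty analysis and stakeholder elicitation of the weights; the arithmetic core, however, is this small.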

  13. Paleozoic tectonics of the Ouachita Orogen through Nd isotopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gleason, J.D.; Patchett, P.J.; Dickinson, W.R.

    1992-01-01

    A combined isotopic and trace-element study of the Late Paleozoic Ouachita Orogenic belt has the following goals: (1) define the changing provenance of Ouachita sedimentary systems throughout the Paleozoic; (2) constrain sources feeding into the Ouachita flysch trough during the Late Paleozoic; (3) isolate the geochemical signature of proposed colliding terranes to the south; (4) build a data base to compare with possible Ouachita System equivalents in Mexico. The ultimate aim is to constrain the tectonic setting of the southern margin of North America during the Paleozoic, with particular emphasis on collisional events leading to the final suturing of Pangea. Nd isotopic data identify 3 distinct groups: (1) Ordovician passive margin sequence; (2) Carboniferous proto-flysch (Stanley Fm.), main flysch (Jackfork and Atoka Fms.) and molasse (foreland Atoka Fm.); (3) Mississippian ash-flow tuffs. The authors interpret the Ordovician signature to be essentially all craton-derived, whereas the Carboniferous signature reflects mixed sources from the craton plus orogenic sources to the east and possibly the south, including the evolving Appalachian Orogen. The proposed southern source is revealed by the tuffs to be too old and evolved to be a juvenile island arc terrane. They interpret the tuffs to have been erupted in a continental-margin arc-type setting. Surprisingly, the foreland molasse sequence is indistinguishable from the main trough flysch sequence, suggesting the Ouachita trough and the craton were both inundated with sediment of a single homogenized isotopic signature during the Late Carboniferous. The possibility that Carboniferous-type sedimentary dispersal patterns began as early as the Silurian has important implications for the tectonics and paleogeography of the evolving Appalachian-Ouachita Orogenic System.

  14. Digital transplantation pathology: combining whole slide imaging, multiplex staining and automated image analysis.

    PubMed

    Isse, K; Lesniak, A; Grama, K; Roysam, B; Minervini, M I; Demetris, A J

    2012-01-01

    Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. "-Omics" analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: (a) spatial-temporal relationships; (b) rare events/cells; (c) complex structural context; and (d) integration into a "systems" model. Nevertheless, except for immunostaining, no transformative advancements have "modernized" routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the traditional histology-global "-omic" analyses gap. Included are side-by-side comparisons, objective biopsy finding quantification, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes. ©Copyright 2011 The American Society of Transplantation and the American Society of Transplant Surgeons.

  15. Stewardship of NASA's Earth Science Data and Ensuring Long-Term Active Archives

    NASA Technical Reports Server (NTRS)

    Ramapriyan, Hampapuram K.; Behnke, Jeanne

    2016-01-01

    Since the beginning of the Earth Observing System (EOS) Program, NASA has followed an open data policy, with non-discriminatory access to data and no period of exclusive access. NASA has well-established processes for assigning or accepting datasets into one of 12 Distributed Active Archive Centers (DAACs) that are parts of EOSDIS. EOSDIS has been evolving through several information technology cycles, adapting to hardware and software changes in the commercial sector. NASA is responsible for maintaining Earth science data as long as users are interested in using them for research and applications, which is well beyond the life of the data-gathering missions. For science data to remain useful over long periods of time, steps must be taken to preserve: (1) data bits with no corruption; (2) discoverability and access; (3) readability; (4) understandability; (5) usability; and (6) reproducibility of results. NASA's Earth Science Data and Information System (ESDIS) Project, along with the 12 EOSDIS Distributed Active Archive Centers (DAACs), has made significant progress in each of these areas over the last decade, and continues to evolve its active archive capabilities. Particular attention is being paid in recent years to ensuring that the datasets are published in an easily accessible and citable manner through a unified metadata model, a Common Metadata Repository (CMR), a coherent view through the earthdata.gov website, and assignment of Digital Object Identifiers (DOIs) with well-designed landing pages providing product information.
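
    The CMR mentioned above exposes a public search service; a hedged example query is shown below. The endpoint is the public CMR search API, but the keyword and the response-field handling here are illustrative and may not match every record's metadata layout.

        # Query NASA's Common Metadata Repository (CMR) search service for
        # collections. Endpoint is public; field handling is illustrative.
        import requests

        resp = requests.get(
            "https://cmr.earthdata.nasa.gov/search/collections.json",
            params={"keyword": "MODIS snow cover", "page_size": 3},
            timeout=30,
        )
        resp.raise_for_status()
        for entry in resp.json()["feed"]["entry"]:
            # field names in the JSON response can vary by record
            print(entry.get("title"), "-", entry.get("doi", "no DOI listed"))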

  16. Emerging computer technologies and the news media of the future

    NASA Technical Reports Server (NTRS)

    Vrabel, Debra A.

    1993-01-01

    The media environment of the future may be dramatically different from what exists today. As new computing and communications technologies evolve and synthesize to form a global, integrated communications system of networks, public domain hardware and software, and consumer products, it will be possible for citizens to fulfill most information needs at any time and from any place, to obtain desired information easily and quickly, to obtain information in a variety of forms, and to experience and interact with information in a variety of ways. This system will transform almost every institution, every profession, and every aspect of human life--including the creation, packaging, and distribution of news and information by media organizations. This paper presents one vision of a 21st century global information system and how it might be used by citizens. It surveys some of the technologies now on the market that are paving the way for the new media environment.

  17. Protein crystal growth in a microgravity environment

    NASA Technical Reports Server (NTRS)

    Bugg, Charles E.

    1988-01-01

    Protein crystal growth is a major experimental problem and is the bottleneck in widespread applications of protein crystallography. Research efforts now being pursued and sponsored by NASA are making fundamental contributions to the understanding of the science of protein crystal growth. Microgravity environments offer the possibility of performing new types of experiments that may produce a better understanding of protein crystal growth processes and may permit growth environments that are more favorable for obtaining high quality protein crystals. A series of protein crystal growth experiments using the space shuttle was initiated. The first phase of these experiments was focused on the development of micro-methods for protein crystal growth by vapor diffusion techniques, using a space version of the hanging drop method. The preliminary space experiments were used to evolve prototype hardware that will form the basis for a more advanced system that can be used to evaluate effects of gravity on protein crystal growth.

  18. GPU-Accelerated Molecular Modeling Coming Of Age

    PubMed Central

    Stone, John E.; Hardy, David J.; Ufimtsev, Ivan S.

    2010-01-01

    Graphics processing units (GPUs) have traditionally been used in molecular modeling solely for visualization of molecular structures and animation of trajectories resulting from molecular dynamics simulations. Modern GPUs have evolved into fully programmable, massively parallel co-processors that can now be exploited to accelerate many scientific computations, typically providing about one order of magnitude speedup over CPU code and in special cases providing speedups of two orders of magnitude. This paper surveys the development of molecular modeling algorithms that leverage GPU computing, the advances already made and remaining issues to be resolved, and the continuing evolution of GPU technology that promises to become even more useful to molecular modeling. Hardware acceleration with commodity GPUs is expected to benefit the overall computational biology community by bringing teraflops performance to desktop workstations and in some cases potentially changing what were formerly batch-mode computational jobs into interactive tasks. PMID:20675161

  19. Computing element evolution towards Exascale and its impact on legacy simulation codes

    NASA Astrophysics Data System (ADS)

    Colin de Verdière, Guillaume J. L.

    2015-12-01

    In light of the current race towards the Exascale, this article highlights the main features of the forthcoming computing elements that will be at the core of the next generations of supercomputers. The market analysis underlying this work shows that computers are facing a major evolution in terms of architecture. As a consequence, it is important to understand the impacts of those evolutions on legacy codes and programming methods. The problems of dissipated power and memory access are discussed, leading to a vision of what an exascale system should be. To survive, programming languages have had to respond to these hardware evolutions, either by evolving themselves or through the creation of new languages. From the previous elements, we elaborate on why vectorization, multithreading, data locality awareness and hybrid programming will be the keys to reaching the exascale, implying that it is time to start rewriting codes.
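
    A toy illustration of the vectorization point: the same reduction written as an interpreted element-by-element loop and as a vectorized operation over contiguous memory. Python/NumPy stands in here for the compiled languages the paper discusses; the array size is arbitrary.

        # Same reduction two ways: scalar loop vs vectorized array operation.
        import numpy as np, time

        a = np.random.rand(10_000_000)

        t0 = time.perf_counter()
        s = 0.0
        for v in a:              # scalar loop: no vectorization, poor locality
            s += v
        t1 = time.perf_counter()
        s_vec = a.sum()          # vectorized reduction over contiguous memory
        t2 = time.perf_counter()

        print(f"loop {t1 - t0:.2f}s vs vectorized {t2 - t1:.4f}s; "
              f"equal: {np.isclose(s, s_vec)}")

    The orders-of-magnitude gap between the two timings is the same effect, in miniature, that makes vectorization and data locality decisive for exascale codes.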

  20. The ASP Sensor Network: Infrastructure for the Next Generation of NASA Airborne Science

    NASA Astrophysics Data System (ADS)

    Myers, J. S.; Sorenson, C. E.; Van Gilst, D. P.; Duley, A.

    2012-12-01

    A state-of-the-art real-time data communications network is being implemented across the NASA Airborne Science Program core platforms. Utilizing onboard Ethernet networks and satellite communications systems, it is intended to maximize the science return from both single-platform missions and complex multi-aircraft Earth science campaigns. It also provides an open platform for data visualization and synthesis software tools for use by the science instrument community. This paper will describe the prototype implementations currently deployed on the NASA DC-8 and Global Hawk aircraft, and the ongoing effort to expand the capability to other science platforms. Emphasis will be on the basic network architecture, the enabling hardware, and new standardized instrument interfaces. The new Mission Tools Suite, which provides a web-based user interface, will also be described, together with several example use cases of this evolving technology.

  1. Advanced Spacesuit Portable Life Support System Packaging Concept Mock-Up Design & Development

    NASA Technical Reports Server (NTRS)

    O'Connell, Mary K.; Slade, Howard G.; Stinson, Richard G.

    1998-01-01

    A concentrated development effort was begun at NASA Johnson Space Center to create an advanced Portable Life Support System (PLSS) packaging concept. Ease of maintenance, technological flexibility, low weight, and minimal volume are targeted in the design of future micro-gravity and planetary PLSS configurations. Three main design concepts emerged from conceptual design techniques and were carried forth into detailed design, then full-scale mock-up creation. The "Foam", "Motherboard", and "LEGO(TM)" packaging design concepts are described in detail. Results of the evaluation process targeted maintenance, robustness, mass properties, and flexibility as key aspects of a new PLSS packaging configuration. The various design tools used to evolve concepts into high-fidelity mock-ups revealed that no single tool was all-encompassing, that several combinations were complementary, that the devil is in the details, and that, despite best efforts, many lessons were learned only after working with hardware.

  2. Background and programmatic approach for the development of orbital fluid resupply tankers

    NASA Technical Reports Server (NTRS)

    Griffin, J. W.

    1986-01-01

    On-orbit resupply of fluids will be essential to the evolving generation of large, long-life orbital stations and satellites. These services are also needed to improve the economics of space operations, not only to optimize expenditures for government-funded programs but also to pave the way for commercial development of space resources. To meet these requirements, a family of tankers must be developed to resupply a variety of fluids. The economics of flight hardware development will require that each tanker within this family be capable of satisfying a variety of functions, including not only fluid resupply from the Space Shuttle Orbiter, but also resupply from the Space Station and the orbital maneuvering vehicle (OMV). This paper discusses the justification, the programmatic objectives, and the advanced planning within NASA for the development of this fleet of multifunction orbital fluid resupply tankers.

  3. Experiments with a Parallel Multi-Objective Evolutionary Algorithm for Scheduling

    NASA Technical Reports Server (NTRS)

    Brown, Matthew; Johnston, Mark D.

    2013-01-01

    Evolutionary multi-objective algorithms have great potential for scheduling in situations where tradeoffs among competing objectives represent a key requirement. One challenge, however, is runtime performance, a consequence of evolving not just a single schedule but an entire population, while attempting to sample the Pareto frontier as accurately and uniformly as possible. The growing availability of multi-core processors in end-user workstations, and even laptops, has raised the question of the extent to which such hardware can be used to speed up evolutionary algorithms. In this paper we report on early experiments in parallelizing a Generalized Differential Evolution (GDE) algorithm for scheduling long-range activities on NASA's Deep Space Network. Initial results show that significant speedups can be achieved, but that performance does not necessarily improve as more cores are utilized. We describe our preliminary results and some initial lessons learned from parallelizing the GDE algorithm. Directions for future work are outlined.
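
    The parallelization pattern exploited here can be sketched in a few lines of Python: fitness evaluation is independent per individual, so the population maps cleanly onto a pool of worker processes. The evaluate_schedule function below is a hypothetical stand-in for the multi-objective schedule scoring, not the authors' code; the GDE variation operators themselves would remain serial.

      from multiprocessing import Pool
      import random

      def evaluate_schedule(individual):
          # Placeholder objectives (both minimized); a real scorer would compute
          # schedule metrics such as total duration and resource conflicts.
          return (sum(individual), max(individual) - min(individual))

      def evaluate_population(population, workers=4):
          # Each individual is scored independently, so Pool.map gives the speedup.
          with Pool(processes=workers) as pool:
              return pool.map(evaluate_schedule, population)

      if __name__ == "__main__":
          population = [[random.random() for _ in range(20)] for _ in range(64)]
          fitnesses = evaluate_population(population)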

  4. Darwin Core: An Evolving Community-Developed Biodiversity Data Standard

    PubMed Central

    Wieczorek, John; Bloom, David; Guralnick, Robert; Blum, Stan; Döring, Markus; Giovanni, Renato; Robertson, Tim; Vieglais, David

    2012-01-01

    Biodiversity data derive from myriad sources stored in various formats on many distinct hardware and software platforms. An essential step towards understanding global patterns of biodiversity is to provide a standardized view of these heterogeneous data sources to improve interoperability. Fundamental to this advance are definitions of common terms. This paper describes the evolution and development of Darwin Core, a data standard for publishing and integrating biodiversity information. We focus on the categories of terms that define the standard, differences between simple and relational Darwin Core, how the standard has been implemented, and the community processes that are essential for maintenance and growth of the standard. We present case-study extensions of the Darwin Core into new research communities, including metagenomics and genetic resources. We close by showing how Darwin Core records are integrated to create new knowledge products documenting species distributions and changes due to environmental perturbations. PMID:22238640
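
    For readers unfamiliar with the standard, a "simple Darwin Core" occurrence record is a flat row keyed by standardized term names, commonly exchanged as delimited text. The Python sketch below builds one such record; the term names (occurrenceID, scientificName, eventDate, and so on) are genuine Darwin Core terms, while the values are invented for illustration.

      import csv, io

      record = {
          "occurrenceID": "urn:example:occ:0001",   # hypothetical identifier
          "basisOfRecord": "HumanObservation",
          "scientificName": "Puma concolor",
          "eventDate": "2011-04-02",
          "decimalLatitude": "37.87",
          "decimalLongitude": "-122.25",
          "countryCode": "US",
      }

      # Simple Darwin Core is typically published as delimited text, one record per row.
      buf = io.StringIO()
      writer = csv.DictWriter(buf, fieldnames=list(record))
      writer.writeheader()
      writer.writerow(record)
      print(buf.getvalue())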

  5. Flight simulation software at NASA Dryden Flight Research Center

    NASA Technical Reports Server (NTRS)

    Norlin, Ken A.

    1995-01-01

    The NASA Dryden Flight Research Center has developed a versatile simulation software package that is applicable to a broad range of fixed-wing aircraft. This package has evolved in support of a variety of flight research programs. The structure is designed to be flexible enough for use in batch-mode, real-time pilot-in-the-loop, and flight hardware-in-the-loop simulation. Current simulations operate on UNIX-based platforms and are coded with a FORTRAN shell and C support routines. This paper discusses the features of the simulation software design and some basic model development techniques. The key capabilities that have been included in the simulation are described. The NASA Dryden simulation software is in use at other NASA centers, within industry, and at several universities. The straightforward but flexible design of this well-validated package makes it especially useful in an engineering environment.

  6. The Max Launch Abort System - Concept, Flight Test, and Evolution

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    2014-01-01

    The NASA Engineering and Safety Center (NESC) is an independent engineering analysis and test organization providing support across the range of NASA programs. In 2007 NASA was developing the launch escape system for the Orion spacecraft, which was evolved from the traditional tower-configuration escape systems used for the historic Mercury and Apollo spacecraft. The NESC was tasked, as a programmatic risk-reduction effort, to develop and flight test an alternative to the Orion baseline escape system concept. This project became known as the Max Launch Abort System (MLAS), named in honor of Maxime Faget, the developer of the original Mercury escape system. Over the course of approximately two years the NESC performed conceptual and tradeoff analyses, designed and built full-scale flight test hardware, and conducted a flight test demonstration in July 2009. Since the flight test, the NESC has continued to further develop and refine the MLAS concept.

  7. Inter-Module Ventilation Changes to the International Space Station Vehicle to Support Integration of the International Docking Adapter and Commercial Crew Vehicles

    NASA Technical Reports Server (NTRS)

    Link, Dwight E., Jr.; Balistreri, Steven F., Jr.

    2015-01-01

    The International Space Station (ISS) Environmental Control and Life Support System (ECLSS) is continuing to evolve in the post-Space Shuttle era. The ISS vehicle configuration currently in operation was designed for docking of a Space Shuttle vehicle, and the designs under development for commercial crew vehicles require different interfaces. The ECLSS Temperature and Humidity Control Subsystem (THC) Inter-Module Ventilation (IMV) must be modified to support two docking interfaces at the forward end of the ISS and to provide the required air exchange. Development of a new higher-speed IMV fan and extensive ducting modifications are underway to support the new Commercial Crew Vehicle interfaces. This paper will review the new ECLSS IMV development requirements, component design and hardware status, subsystem analysis and testing performed to date, and the implementation plan to support Commercial Crew Vehicle docking.

  8. Trends in communicative access solutions for children with cerebral palsy.

    PubMed

    Myrden, Andrew; Schudlo, Larissa; Weyand, Sabine; Zeyl, Timothy; Chau, Tom

    2014-08-01

    Access solutions may facilitate communication in children with limited functional speech and motor control. This study reviews current trends in access solution development for children with cerebral palsy, with particular emphasis on the access technology that harnesses a control signal from the user (eg, movement or physiological change) and the output device (eg, augmentative and alternative communication system) whose behavior is modulated by the user's control signal. Access technologies have advanced from simple mechanical switches to machine vision (eg, eye-gaze trackers), inertial sensing, and emerging physiological interfaces that require minimal physical effort. Similarly, output devices have evolved from bulky, dedicated hardware with limited configurability, to platform-agnostic, highly personalized mobile applications. Emerging case studies encourage the consideration of access technology for all nonverbal children with cerebral palsy with at least nascent contingency awareness. However, establishing robust evidence of the effectiveness of the aforementioned advances will require more expansive studies. © The Author(s) 2014.

  9. Flight evaluation results from the general-aviation advanced avionics system program

    NASA Technical Reports Server (NTRS)

    Callas, G. P.; Denery, D. G.; Hardy, G. H.; Nedell, B. F.

    1983-01-01

    A demonstration advanced avionics system (DAAS) for general-aviation aircraft was tested at NASA Ames Research Center to provide information required for the design of reliable, low-cost, advanced avionics systems that would make general-aviation operations safer and more practicable. Guest pilots flew a DAAS-equipped NASA Cessna 402-B aircraft to evaluate the usefulness of data busing, distributed microprocessors, and shared electronic displays, and to provide data on the DAAS pilot/system interface for the design of future integrated avionics systems. Evaluation results indicate that the DAAS hardware and functional capability meet the program objective. Most pilots felt that the DAAS was representative of the way avionics systems would evolve and that the added capability would improve the safety and practicability of general-aviation operations. Flight-evaluation results compiled from questionnaires are presented, and the results of the debriefings are summarized. General conclusions of the flight evaluation are included.

  10. Queuing theory models for computer networks

    NASA Technical Reports Server (NTRS)

    Galant, David C.

    1989-01-01

    A set of simple queuing theory models, which can model the average response of a network of computers to a given traffic load, has been implemented using a spreadsheet. Because the models do not require fine detail about network traffic rates, traffic patterns, or the hardware used to implement the networks, they can be used to assess the impact of variations in traffic patterns and intensities, channel capacities, and message protocols. A sample use of the models applied to a realistic problem is included in appendix A. Appendix B provides a glossary of terms used in this paper. The Ames Research Center computer communication network is an evolving network of local area networks (LANs) connected via gateways and high-speed backbone communication channels. Intelligent planning of expansion and improvement requires understanding the behavior of the individual LANs as well as the collection of networks as a whole.
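
    As an illustration of the kind of closed-form model the paper describes, the Python sketch below evaluates the textbook M/M/1 queue, whose mean time in system is T = 1/(mu - lambda); response time grows sharply as utilization approaches one. The service and arrival rates are invented, not taken from the paper.

      def mm1_response_time(arrival_rate, service_rate):
          # Mean time in system for an M/M/1 queue: T = 1 / (mu - lambda).
          if arrival_rate >= service_rate:
              raise ValueError("queue is unstable: arrival rate >= service rate")
          return 1.0 / (service_rate - arrival_rate)

      # Example: a channel serving 1000 messages/s under increasing offered load.
      for lam in (100, 500, 900, 990):
          t = mm1_response_time(lam, 1000)
          print(f"load {lam}/s -> mean response {t * 1e3:.2f} ms")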

  11. Implementation of electronic medical records requires more than new software: Lessons on integrating and managing health technologies from Mbarara, Uganda.

    PubMed

    Madore, Amy; Rosenberg, Julie; Muyindike, Winnie R; Bangsberg, David R; Bwana, Mwebesa B; Martin, Jeffrey N; Kanyesigye, Michael; Weintraub, Rebecca

    2015-12-01

    Implementation lessons:
    • Technology alone does not necessarily lead to improvement in health service delivery, in contrast to the common assumption that advanced technology goes hand in hand with progress.
    • Implementation of electronic medical record (EMR) systems is a complex, resource-intensive process that, in addition to software, hardware, and human resource investments, requires careful planning, change management skills, adaptability, and continuous engagement of stakeholders.
    • Research requirements and goals must be balanced with service delivery needs when determining how much information is essential to collect and who should be interfacing with the EMR system.
    • EMR systems require ongoing monitoring and regular updates to ensure they are responsive to evolving clinical use cases and research questions.
    • High-quality data and analyses are essential for EMRs to deliver value to providers, researchers, and patients.
    Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Engineering the next generation of clinical deep brain stimulation technology.

    PubMed

    McIntyre, Cameron C; Chaturvedi, Ashutosh; Shamir, Reuben R; Lempka, Scott F

    2015-01-01

    Deep brain stimulation (DBS) has evolved into a powerful clinical therapy for a range of neurological disorders, but even with impressive clinical growth, DBS technology has been relatively stagnant over its history. However, enhanced collaborations between neural engineers, neuroscientists, physicists, neurologists, and neurosurgeons are beginning to address some of the limitations of current DBS technology. These interactions have helped to develop novel ideas for the next generation of clinical DBS systems. This review attempts to collate some of that progress with two goals in mind. First, it provides a general description of current clinical DBS practices, geared toward educating biomedical engineers and computer scientists on a field that needs their expertise and attention. Second, it describes some of the technological developments currently underway in surgical targeting, stimulation parameter selection, stimulation protocols, and stimulation hardware that are being directly evaluated for near-term clinical application. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Z-2 Prototype Space Suit Development

    NASA Technical Reports Server (NTRS)

    Ross, Amy; Rhodes, Richard; Graziosi, David; Jones, Bobby; Lee, Ryan; Haque, Bazle Z.; Gillespie, John W., Jr.

    2014-01-01

    NASA's Z-2 prototype space suit is the highest-fidelity pressure garment, from both hardware and systems design perspectives, since the Shuttle Extravehicular Mobility Unit (EMU) was developed in the late 1970s. Upon completion it will be tested in the 11' human-rated vacuum chamber and the Neutral Buoyancy Laboratory (NBL) at the NASA Johnson Space Center to assess the design and to determine applicability of the configuration to micro-, low- (asteroid), and planetary- (surface) gravity missions. This paper discusses the 'firsts' the Z-2 represents. For example, the Z-2 is sized to the smallest suit scye bearing plane distance of at least the last 25 years and is being designed with the most intensive use of human models together with the suit model. The paper also provides a discussion of significant Z-2 configuration features and how these components evolved from proposal concepts to final designs.

  14. Z-2 Prototype Space Suit Development

    NASA Technical Reports Server (NTRS)

    Ross, Amy; Rhodes, Richard; Graziosi, David

    2014-01-01

    NASA's Z-2 prototype space suit is the highest-fidelity pressure garment, from both hardware and systems design perspectives, since the Shuttle Extravehicular Mobility Unit (EMU) was developed in the late 1970s. Upon completion it will be tested in the 11' human-rated vacuum chamber and the Neutral Buoyancy Laboratory (NBL) at the NASA Johnson Space Center to assess the design and to determine applicability of the configuration to micro-, low- (asteroid), and planetary- (surface) gravity missions. This paper discusses the 'firsts' the Z-2 represents. For example, the Z-2 is sized to the smallest suit scye bearing plane distance of at least the last 25 years and is being designed with the most intensive use of human models together with the suit model. The paper also provides a discussion of significant Z-2 configuration features and how these components evolved from proposal concepts to final designs.

  15. Modeling reality

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1990-01-01

    Although powerful computers have allowed complex physical and manmade hardware systems to be modeled successfully, we have encountered persistent problems with the reliability of computer models for systems involving human learning, human action, and human organizations. This is not a misfortune; unlike physical and manmade systems, human systems do not operate under a fixed set of laws. The rules governing the actions allowable in the system can be changed without warning at any moment, and can evolve over time. That the governing laws are inherently unpredictable raises serious questions about the reliability of models when applied to human situations. In these domains, computers are better used not for prediction and planning, but for aiding humans. Examples are systems that help humans speculate about possible futures, systems that offer advice about possible actions in a domain, systems that gather information from networks, and systems that track and support work flows in organizations.

  16. Improved Calibration through SMAP RFI Change Detection

    NASA Technical Reports Server (NTRS)

    Piepmeier, Jeffrey; De Amici, Giovanni; Mohammed, Priscilla; Peng, Jinzheng

    2017-01-01

    Anthropogenic Radio-Frequency Interference (RFI) drove both the SMAP (Soil Moisture Active Passive) microwave radiometer hardware and Level 1 science algorithm designs to use new technology and techniques for the first time on a spaceflight project. Care was taken to provide special features allowing the detection and removal of harmful interference in order to meet the error budget. Nonetheless, the project accepted a risk that RFI and its mitigation would exceed the 1.3-K error budget. Thus, RFI will likely remain a challenge throughout the mission due to its changing and uncertain nature. To address the challenge, we seek to answer the following questions: How does RFI evolve over the SMAP lifetime? What calibration error does the changing RFI environment cause? Can time-series information be exploited to reduce these errors and improve calibration for all science products reliant upon SMAP radiometer data? In this talk, we address the first question.
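
    One simple way to pose the first question quantitatively is as a change-detection problem on per-revisit RFI-detection counts. The Python sketch below flags epochs whose running mean departs from an early-mission baseline; this particular test is purely illustrative and is not the SMAP team's algorithm, and the count data are simulated.

      import numpy as np

      def flag_rfi_change(counts, window=30, threshold=3.0):
          # Flag epochs where the windowed mean of detection counts departs from
          # the early-mission baseline by more than `threshold` baseline sigmas.
          counts = np.asarray(counts, dtype=float)
          baseline = counts[:window]
          mu, sigma = baseline.mean(), baseline.std() + 1e-9
          running = np.convolve(counts, np.ones(window) / window, mode="valid")
          return np.where(np.abs(running - mu) > threshold * sigma)[0]

      # Simulated counts per orbit with a step change partway through.
      counts = np.concatenate([np.random.poisson(5, 200), np.random.poisson(12, 100)])
      changed_epochs = flag_rfi_change(counts)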

  17. Human Factors Checklist: Think Human Factors - Focus on the People

    NASA Technical Reports Server (NTRS)

    Miller, Darcy; Stelges, Katrine; Barth, Timothy; Stambolian, Damon; Henderson, Gena; Dischinger, Charles; Kanki, Barbara; Kramer, Ian

    2016-01-01

    A quick-look Human Factors (HF) Checklist condenses industry and NASA Agency standards consisting of thousands of requirements into 14 main categories. With support from contractor HF and Safety Practitioners, NASA developed a means to share key HF messages with Design, Engineering, Safety, Project Management, and others. It is often difficult to complete timely assessments due to the large volume of HF information. The HF Checklist evolved over time into a simple way to consider the most important concepts. A wide audience can apply the checklist early in design or through planning phases, even before hardware or processes are finalized or implemented. The checklist is a good place to start to supplement formal HF evaluation. The HF Checklist was based on many Space Shuttle processing experiences and lessons learned. It is now being applied to ground processing of new space vehicles and adjusted for new facilities and systems.

  18. GPU-accelerated molecular modeling coming of age.

    PubMed

    Stone, John E; Hardy, David J; Ufimtsev, Ivan S; Schulten, Klaus

    2010-09-01

    Graphics processing units (GPUs) have traditionally been used in molecular modeling solely for visualization of molecular structures and animation of trajectories resulting from molecular dynamics simulations. Modern GPUs have evolved into fully programmable, massively parallel co-processors that can now be exploited to accelerate many scientific computations, typically providing about one order of magnitude speedup over CPU code and in special cases providing speedups of two orders of magnitude. This paper surveys the development of molecular modeling algorithms that leverage GPU computing, the advances already made and remaining issues to be resolved, and the continuing evolution of GPU technology that promises to become even more useful to molecular modeling. Hardware acceleration with commodity GPUs is expected to benefit the overall computational biology community by bringing teraflops performance to desktop workstations and in some cases potentially changing what were formerly batch-mode computational jobs into interactive tasks. (c) 2010 Elsevier Inc. All rights reserved.

  19. Protein crystal growth in microgravity

    NASA Technical Reports Server (NTRS)

    Rosenblum, William M.; Delucas, Lawrence J.; Wilson, William W.

    1989-01-01

    Major advances have been made in several of the experimental aspects of protein crystallography, leaving protein crystallization as one of the few remaining bottlenecks. As a result, it has become important that the science of protein crystal growth is better understood and that improved methods for protein crystallization are developed. Preliminary experiments with both small molecules and proteins indicate that microgravity may beneficially affect crystal growth. For this reason, a series of protein crystal growth experiments using the Space Shuttle was initiated. The preliminary space experiments were used to evolve prototype hardware that will form the basis for a more advanced system that can be used to evaluate effects of gravity on protein crystal growth. Various optical techniques are being utilized to monitor the crystal growth process from the incipient or nucleation stage and throughout the growth phase. The eventual goal of these studies is to develop a system which utilizes optical monitoring for dynamic control of the crystallization process.

  20. Virtual collaborative environments: programming and controlling robotic devices remotely

    NASA Astrophysics Data System (ADS)

    Davies, Brady R.; McDonald, Michael J., Jr.; Harrigan, Raymond W.

    1995-12-01

    This paper describes a technology for remote sharing of intelligent electro-mechanical devices. An architecture and actual system have been developed and tested, based on the proposed National Information Infrastructure (NII), or Information Highway, to facilitate programming and control of intelligent programmable machines such as robots and machine tools. Using appropriate geometric models, integrated sensors, video systems, and computing hardware, computer-controlled resources owned and operated by different entities (in a geographic as well as a legal sense) can be individually or simultaneously programmed and controlled from one or more remote locations. Remote programming and control of intelligent machines will create significant opportunities for sharing of expensive capital equipment. Using the technology described in this paper, university researchers, manufacturing entities, automation consultants, design entities, and others can directly access robotic and machining facilities located across the country. Disparate electro-mechanical resources will be shared in a manner similar to the way supercomputers are accessed by multiple users. Using this technology, it will be possible for researchers developing new robot control algorithms to validate models and algorithms from their university labs without ever owning a robot. Manufacturers will be able to model, simulate, and measure the performance of prospective robots before selecting robot hardware optimally suited for their intended application. Designers will be able to access CNC machining centers across the country to fabricate prototype parts during product design validation. A prototype architecture and system have been developed and proven: programming and control of a large gantry robot located at Sandia National Laboratories in Albuquerque, New Mexico, was demonstrated from remote locations including Washington D.C., Washington State, and Southern California.
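
    The remote-access pattern demonstrated here can be caricatured in a few lines of Python using the standard xmlrpc library: a controller exposes its command interface over the network, and a remote client can then program and run motions. The RobotController class and its methods are invented for illustration and bear no relation to the Sandia system's actual interfaces.

      from xmlrpc.server import SimpleXMLRPCServer

      class RobotController:
          # Hypothetical controller; a real system would command hardware here.
          def __init__(self):
              self.position = [0.0, 0.0, 0.0]

          def move_to(self, x, y, z):
              self.position = [x, y, z]
              return self.position

          def get_position(self):
              return self.position

      server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
      server.register_instance(RobotController())
      server.serve_forever()  # remote clients call move_to/get_position via XML-RPC

    A client would then connect with xmlrpc.client.ServerProxy("http://host:8000") and invoke the same methods remotely.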
